Current journal: arXiv - CS - Computational Geometry
  • Stability of higher-dimensional interval decomposable persistence modules
    arXiv.cs.CG Pub Date : 2016-09-07
    Håvard Bakke Bjerkevik

    The algebraic stability theorem for $\mathbb{R}$-persistence modules is a fundamental result in topological data analysis. We present a stability theorem for $n$-dimensional rectangle decomposable persistence modules up to a constant $(2n-1)$ that generalizes the algebraic stability theorem and also has connections to the complexity of calculating the interleaving distance. For $n=1$, the proof specializes to a new proof of the algebraic stability theorem. We give an example showing that the bound cannot be improved for $n=2$. We apply the same technique to prove stability results for zigzag modules and Reeb graphs, reducing the previously known bounds to a constant that cannot be improved and thereby settling these questions.

  • Meshfree Methods on Manifolds for Hydrodynamic Flows on Curved Surfaces: A Generalized Moving Least-Squares (GMLS) Approach
    arXiv.cs.CG Pub Date : 2019-05-24
    B. J. Gross; N. Trask; P. Kuberry; P. J. Atzberger

    We utilize generalized moving least squares (GMLS) to develop meshfree techniques for discretizing hydrodynamic flow problems on manifolds. We use exterior calculus to formulate incompressible hydrodynamic equations in the Stokesian regime and handle the divergence-free constraints via a generalized vector potential. This provides less coordinate-centric descriptions and enables the development of efficient numerical methods and splitting schemes for the fourth-order governing equations in terms of a system of second-order elliptic operators. Using a Hodge decomposition, we develop methods for manifolds having spherical topology. We show the methods exhibit high-order convergence rates for solving hydrodynamic flows on curved surfaces. The methods also provide general high-order approximations for the metric, curvature, and other geometric quantities of the manifold and associated exterior calculus operators. The approaches also can be utilized to develop high-order solvers for other scalar-valued and vector-valued problems on manifolds.

  • Simplification of Indoor Space Footprints
    arXiv.cs.CG Pub Date : 2020-01-15
    Joon-Seok Kim (George Mason University); Carola Wenk (Tulane University)

    Simplification is one of the fundamental operations in geoinformation science (GIS), used to reduce the size or representational complexity of geometric objects. Although different simplification methods can be applied depending on one's purpose, many applications require a simplification that preserves the spatial properties of the input. This article presents a 2D simplification method that works especially well on human-made structures such as 2D footprints of buildings and indoor spaces. The method simplifies polygons iteratively: the simplification is segment-wise and takes into account intrusion, extrusion, offset, and corner portions of 2D structures, preserving their dominant frame.

  • Arrangements of Pseudocircles: On Circularizability
    arXiv.cs.CG Pub Date : 2017-12-06
    Stefan Felsner; Manfred Scheucher

    An arrangement of pseudocircles is a collection of simple closed curves on the sphere or in the plane such that any two of the curves are either disjoint or intersect in exactly two crossing points. We call an arrangement intersecting if every pair of pseudocircles intersects twice. An arrangement is circularizable if there is a combinatorially equivalent arrangement of circles. In this paper we present the results of the first thorough study of circularizability. We show that there are exactly four non-circularizable arrangements of 5 pseudocircles (one of them was known before). In the set of 2131 digon-free intersecting arrangements of 6 pseudocircles we identify the three non-circularizable examples. We also show non-circularizability of 8 additional arrangements of 6 pseudocircles which have a group of symmetries of size at least 4. Most of our non-circularizability proofs depend on incidence theorems like Miquel's. In other cases we contradict circularizability by considering a continuous deformation where the circles of an assumed circle representation grow or shrink in a controlled way. The claims that we have all non-circularizable arrangements with the given properties are based on a program that generated all arrangements up to a certain size. Given the complete lists of arrangements, we used heuristics to find circle representations. Examples where the heuristics failed were examined by hand.

  • Quantifying Genetic Innovation: Mathematical Foundations for the Topological Study of Reticulate Evolution
    arXiv.cs.CG Pub Date : 2018-04-03
    Michael Lesnick; Raúl Rabadán; Daniel I. S. Rosenbloom

    A topological approach to the study of genetic recombination, based on persistent homology, was introduced by Chan, Carlsson, and Rabadán in 2013. This associates a sequence of signatures called barcodes to genomic data sampled from an evolutionary history. In this paper, we develop theoretical foundations for this approach. First, we present a novel formulation of the underlying inference problem. Specifically, we introduce and study the novelty profile, a simple, stable statistic of an evolutionary history which not only counts recombination events but also quantifies how recombination creates genetic diversity. We propose that the (hitherto implicit) goal of the topological approach to recombination is the estimation of novelty profiles. We then study the problem of obtaining a lower bound on the novelty profile using barcodes. We focus on a low-recombination regime, where the evolutionary history can be described by a directed acyclic graph called a galled tree, which differs from a tree only by isolated topological defects. We show that in this regime, under a complete sampling assumption, the $1^\mathrm{st}$ barcode yields a lower bound on the novelty profile, and hence on the number of recombination events. For $i>1$, the $i^{\mathrm{th}}$ barcode is empty. In addition, we use a stability principle to strengthen these results to ones which hold for any subsample of an arbitrary evolutionary history. To establish these results, we describe the topology of the Vietoris--Rips filtrations arising from evolutionary histories indexed by galled trees. As a step towards a probabilistic theory, we also show that for a random history indexed by a fixed galled tree and satisfying biologically reasonable conditions, the intervals of the $1^{\mathrm{st}}$ barcode are independent random variables. Using simulations, we explore the sensitivity of these intervals to recombination.

  • HLO: Half-kernel Laplacian Operator for Surface Smoothing
    arXiv.cs.CG Pub Date : 2019-05-12
    Wei Pan; Xuequan Lu; Yuanhao Gong; Wenming Tang; Jun Liu; Ying He; Guoping Qiu

    This paper presents a simple yet effective method for feature-preserving surface smoothing. By analyzing the differential properties of surfaces, we show that the conventional discrete Laplacian operator with uniform weights is not applicable at feature points, where the surface is non-differentiable and second-order derivatives do not exist. To overcome this difficulty, we propose a Half-kernel Laplacian Operator (HLO) as an alternative to the conventional Laplacian. Given a vertex v, HLO first finds all pairs of its neighboring vertices and divides each pair into two subsets (called half windows); it then computes the uniform Laplacians of all such subsets and projects them onto the full-window uniform Laplacian to alleviate flipping and degeneration. The half window with the least regularization energy is then chosen for v. We develop an iterative approach that applies HLO for surface denoising. Our method is conceptually simple and easy to use because it has a single parameter, namely the number of iterations for updating vertices. We show that our method preserves features better than the popular uniform Laplacian-based denoising and that it significantly alleviates the shrinkage artifact. Extensive experimental results demonstrate that HLO is better than or comparable to state-of-the-art techniques both qualitatively and quantitatively, and that it is particularly good at handling meshes with high noise. We will make our source code publicly available.
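
    The half-window selection step can be illustrated with a small sketch. This is a loose paraphrase of the idea, not the authors' implementation: the contiguous ring-splitting and the squared-norm energy below are simplifying assumptions.

```python
import numpy as np

def half_window_laplacian(v, ring):
    """Split the one-ring of vertex v into contiguous half windows,
    compute the uniform Laplacian of each, and keep the one with the
    least (here: squared-norm) regularization energy."""
    n = len(ring)
    best, best_energy = None, np.inf
    for start in range(n):
        # a "half window": half of the ring neighbors, taken contiguously
        half = [ring[(start + j) % n] for j in range(n // 2 + 1)]
        lap = np.mean(half, axis=0) - v   # uniform Laplacian of the subset
        energy = np.dot(lap, lap)         # simplified regularization energy
        if energy < best_energy:
            best, best_energy = lap, energy
    return best

# one smoothing step: move a spike vertex a fraction along the chosen Laplacian
v = np.array([0.0, 0.0, 1.0])             # a spike above a flat hexagonal ring
ring = [np.array([np.cos(t), np.sin(t), 0.0])
        for t in np.linspace(0, 2 * np.pi, 6, endpoint=False)]
v_new = v + 0.5 * half_window_laplacian(v, ring)
```

    One iteration pulls the spike halfway toward the ring plane; repeating the step drives it flat, which is the denoising behavior the abstract describes.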

  • NP-completeness of slope-constrained drawing of complete graphs
    arXiv.cs.CG Pub Date : 2020-01-14
    Cédric Pilatte

    We prove the NP-completeness of the following problem. Given a set $S$ of $n$ slopes and an integer $k\geq 1$, is it possible to draw a complete graph on $k$ vertices in the plane using only slopes from $S$? Equivalently, does there exist a set $K$ of $k$ points in general position such that the slope of every segment between two points of $K$ is in $S$? We also present a polynomial algorithm for this question when $n\leq 2k-c$, conditionally on a conjecture of R.E. Jamison. For $n=k$, an algorithm in $\mathcal{O}(n^4)$ was proposed by Wade and Chu. In this case, our algorithm is linear and does not rely on Jamison's conjecture.
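
    While finding such a drawing is NP-complete, verifying a candidate point set is easy: collect all pairwise slopes as exact rationals and test membership in $S$. A minimal sketch (the helper names are illustrative, not from the paper):

```python
from fractions import Fraction
from itertools import combinations

def slopes_of(points):
    """All slopes of segments between pairs of points, as exact
    rationals; None stands for a vertical segment."""
    out = set()
    for (x1, y1), (x2, y2) in combinations(points, 2):
        out.add(None if x1 == x2 else Fraction(y2 - y1, x2 - x1))
    return out

def realizes(points, S):
    """Do all pairwise slopes of `points` lie in the allowed slope set S?"""
    return slopes_of(points).issubset(S)

# the vertices of a unit square draw K_4 using only four slopes
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
S = {Fraction(0), Fraction(1), Fraction(-1), None}
```

    Here `realizes(square, S)` holds, and dropping any of the four slopes from `S` breaks it, illustrating how tightly the slope set constrains the drawing.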

  • Deciding contractibility of a non-simple curve on the boundary of a 3-manifold: A computational Loop Theorem
    arXiv.cs.CG Pub Date : 2020-01-14
    Éric Colin de Verdière; Salman Parsa

    We present an algorithm for the following problem. Given a triangulated 3-manifold M and a (possibly non-simple) closed curve on the boundary of M, decide whether this curve is contractible in M. Our algorithm runs in space polynomial in the size of the input, and (thus) in exponential time. This is the first algorithm that is specifically designed for this problem; it considerably improves upon the existing bounds implicit in the literature for the more general problem of contractibility of closed curves in a 3-manifold. The proof of the correctness of the algorithm relies on methods of 3-manifold topology and in particular on those used in the proof of the Loop Theorem. As a byproduct, we obtain an algorithmic version of the Loop Theorem that runs in polynomial space, and (thus) in exponential time.

  • Critical Sets of PL and Discrete Morse Theory: a Correspondence
    arXiv.cs.CG Pub Date : 2020-01-14
    Ulderico Fugacci; Claudia Landi; Hanife Varli

    Piecewise-linear (PL) Morse theory and discrete Morse theory are used in shape analysis tasks to investigate the topological features of discretized spaces. Despite their common origin in smooth Morse theory, various notions of critical points have been given in the literature for the discrete setting, and the relationships between them are far from obvious. This paper aims to provide equivalence results about critical points of the two discretized Morse theories. First, we prove the equivalence of the existing notions of PL critical points. Next, under an optimality condition called relative perfectness, we show a dimension-agnostic correspondence between the set of PL critical points and that of discrete critical simplices of the combinatorial approach. Finally, we show how a relatively perfect discrete gradient vector field can be built algorithmically up to dimension 3. In this way, we guarantee a formal and operative connection between critical sets in the PL and discrete theories.

  • Minimax adaptive estimation in manifold inference
    arXiv.cs.CG Pub Date : 2020-01-14
    Vincent Divol

    We focus on the problem of manifold estimation: given a set of observations sampled close to some unknown submanifold $M$, one wants to recover information about the geometry of $M$. Minimax estimators which have been proposed so far all depend crucially on the a priori knowledge of some parameters quantifying the regularity of $M$ (such as its reach), whereas those quantities will be unknown in practice. Our contribution to the matter is twofold: first, we introduce a one-parameter family of manifold estimators $(\hat{M}_t)_{t\geq 0}$, and show that for some choice of $t$ (depending on the regularity parameters), the corresponding estimator is minimax on the class of models of $C^2$ manifolds introduced in [Genovese et al., Manifold estimation and singular deconvolution under Hausdorff loss]. Second, we propose a completely data-driven selection procedure for the parameter $t$, leading to a minimax adaptive manifold estimator on this class of models. The same selection procedure is then used to design adaptive estimators for tangent spaces and homology groups of the manifold $M$.

  • Curved foldings with common creases and crease patterns
    arXiv.cs.CG Pub Date : 2019-10-15
    Atsufumi Honda; Kosuke Naokawa; Kentaro Saji; Masaaki Umehara; Kotaro Yamada

    Consider a curve $\Gamma$ in a domain $D$ in the plane $\boldsymbol R^2$. Thinking of $D$ as a piece of paper, one can make a curved folding $P$ in the Euclidean space $\boldsymbol R^3$. The singular set $C$ of $P$ as a space curve is called the crease of $P$ and the initially given plane curve $\Gamma$ is called the crease pattern of $P$. In this paper, we show that in general there are four distinct non-congruent curved foldings with a given pair consisting of a crease and crease pattern. Two of these possibilities were already known, but it seems that the other two possibilities (i.e. four possibilities in total) are presented here for the first time.

  • Persistent Homology as Stopping-Criterion for Voronoi Interpolation
    arXiv.cs.CG Pub Date : 2019-11-08
    Luciano Melodia; Richard Lenz

    In this study, Voronoi interpolation is used to interpolate a set of points drawn from a topological space whose filtration has higher homology groups. The technique is based on the Voronoi tessellation, which induces a natural dual map to the Delaunay triangulation. We take advantage of this duality by computing the persistent homology after each iteration to capture the changing topology of the data. The boundary points are identified as critical. The bottleneck and Wasserstein distances serve as measures of quality between the original point set and the interpolation. If these distances exceed a heuristically determined threshold, the algorithm terminates. We give the theoretical basis for this approach and justify its validity with numerical experiments.

  • Obtaining a Canonical Polygonal Schema from a Greedy Homotopy Basis with Minimal Mesh Refinement
    arXiv.cs.CG Pub Date : 2020-01-10
    Marco Livesu

    Any closed manifold of genus g can be cut open to form a topological disk and then mapped to a regular polygon with 4g sides. This construction is called the canonical polygonal schema of the manifold, and it is a key ingredient for many applications in graphics and engineering, where a parameterization between two shapes with the same topology is often needed. The sides of the 4g-gon define on the manifold a system of loops, which all intersect at a single point and are disjoint elsewhere. Computing a shortest system of loops of this kind is NP-hard. A computationally tractable alternative consists in computing, in polynomial time, a set of shortest loops that are not fully disjoint, using the greedy homotopy basis algorithm proposed by Erickson and Whittlesey, and then detaching them in post-processing via mesh refinement. Although this operation is conceptually simple, known refinement strategies do not scale well for high-genus shapes, triggering a mesh growth that may exceed the amount of memory available in modern computers and leading to failures. In this paper we study various local refinement operators for detaching cycles in a system of loops, and show that there are important differences between them, both in terms of mesh complexity and preservation of the original surface. We ultimately propose two novel refinement approaches: the former minimizes the number of new elements in the mesh, possibly at the cost of a deviation from the input geometry; the latter trades mesh complexity for geometric accuracy, bounding the deviation from the input surface. Both strategies are trivial to implement, and experiments confirm that they make it possible to realize canonical polygonal schemas even for extremely high-genus shapes on which previous methods fail.

  • Evaluating the snappability of bar-joint frameworks
    arXiv.cs.CG Pub Date : 2020-01-13
    Georg Nawratil

    It is well known that there exist bar-joint frameworks (without continuous flexes) whose physical models can snap between different realizations due to non-destructive elastic deformations of the material. We present a method to measure this snapping capability, called snappability for short, based on the total elastic strain energy of the framework, computing the deformation of all bars using Hooke's law. The presented theoretical results give further connections between shakiness and snapping, besides the well-known technique of averaging and deaveraging.
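
    The energy in question is the standard Hooke's-law expression summed over the bars; a minimal sketch (the stiffness and the four-bar geometry below are illustrative numbers, not taken from the paper):

```python
import math

def strain_energy(coords, bars, rest_lengths, k=1.0):
    """Total elastic strain energy of a bar-joint framework under
    Hooke's law: each bar stores (k/2) * (length - rest_length)^2.
    `coords` maps a joint index to its position."""
    total = 0.0
    for (i, j), L0 in zip(bars, rest_lengths):
        L = math.dist(coords[i], coords[j])
        total += 0.5 * k * (L - L0) ** 2
    return total

# a unit square frame, then the same frame with one joint displaced
bars = [(0, 1), (1, 2), (2, 3), (3, 0)]
rest = [1.0, 1.0, 1.0, 1.0]
flat = {0: (0, 0), 1: (1, 0), 2: (1, 1), 3: (0, 1)}
bent = {0: (0, 0), 1: (1, 0), 2: (1.1, 1.0), 3: (0, 1)}
```

    The undeformed frame has zero energy, the displaced one stores a positive amount, and snapping corresponds to crossing an energy barrier between two zero-energy realizations.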

  • Scattering and Sparse Partitions, and their Applications
    arXiv.cs.CG Pub Date : 2020-01-13
    Arnold Filtser

    A partition $\mathcal{P}$ of a weighted graph $G$ is $(\sigma,\tau,\Delta)$-sparse if every cluster has diameter at most $\Delta$, and every ball of radius $\Delta/\sigma$ intersects at most $\tau$ clusters. Similarly, $\mathcal{P}$ is $(\sigma,\tau,\Delta)$-scattering if, instead of balls, we require that every shortest path of length at most $\Delta/\sigma$ intersects at most $\tau$ clusters. Given a graph $G$ that admits a $(\sigma,\tau,\Delta)$-sparse partition for all $\Delta>0$, Jia et al. [STOC05] constructed a solution for the Universal Steiner Tree problem (and also Universal TSP) with stretch $O(\tau\sigma^2\log_\tau n)$. Given a graph $G$ that admits a $(\sigma,\tau,\Delta)$-scattering partition for all $\Delta>0$, we construct a solution for the Steiner Point Removal problem with stretch $O(\tau^3\sigma^3)$. We then construct sparse and scattering partitions for various graph families, obtaining many new results for the Universal Steiner Tree and Steiner Point Removal problems.
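
    For intuition, the ball condition of sparseness is easy to check on a small example. The sketch below uses an unweighted graph with hop distances (the paper works with general weighted graphs, and cluster diameters would be checked separately):

```python
from collections import deque

def ball(graph, v, r):
    """Vertices within hop-distance r of v in an unweighted graph
    given as an adjacency dict."""
    dist, queue = {v: 0}, deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == r:
            continue
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def ball_condition(graph, cluster_of, sigma, tau, Delta):
    """Does every ball of radius Delta/sigma meet at most tau clusters?"""
    r = Delta // sigma
    return all(len({cluster_of[u] for u in ball(graph, v, r)}) <= tau
               for v in graph)

# a path 0-1-...-7 partitioned into clusters of two consecutive vertices
path = {i: [j for j in (i - 1, i + 1) if 0 <= j < 8] for i in range(8)}
clusters = {i: i // 2 for i in range(8)}
ok = ball_condition(path, clusters, sigma=2, tau=3, Delta=2)
```

    On the path, every radius-1 ball spans at most three vertices and hence at most two clusters, so the condition holds comfortably with $\tau=3$.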

  • Čech-Delaunay gradient flow and homology inference for self-maps
    arXiv.cs.CG Pub Date : 2017-09-12
    Ulrich Bauer; Herbert Edelsbrunner; Grzegorz Jablonski; Marian Mrozek

    We call a continuous self-map that reveals itself through a discrete set of point-value pairs a sampled dynamical system. Capturing the available information with chain maps on Delaunay complexes, we use persistent homology to quantify the evidence of recurrent behavior. We establish a sampling theorem to recover the eigenspace of the endomorphism on homology induced by the self-map. Using a combinatorial gradient flow arising from the discrete Morse theory for \v{C}ech and Delaunay complexes, we construct a chain map to transform the problem from the natural but expensive \v{C}ech complexes to the computationally efficient Delaunay triangulations. The fast chain map algorithm has applications beyond dynamical systems.

  • Using Persistent Homology to Quantify a Diurnal Cycle in Hurricane Felix
    arXiv.cs.CG Pub Date : 2019-02-17
    Sarah Tymochko; Elizabeth Munch; Jason Dunion; Kristen Corbosiero; Ryan Torn

    The diurnal cycle of tropical cyclones (TCs) is a daily cycle in clouds that appears in satellite images and may have implications for TC structure and intensity. The diurnal pattern can be seen in infrared (IR) satellite imagery as cyclical pulses in the cloud field that propagate radially outward from the center of nearly all Atlantic-basin TCs. These diurnal pulses, a distinguishing characteristic of the TC diurnal cycle, begin forming in the storm's inner core near sunset each day and appear as a region of cooling cloud-top temperatures. The area of cooling takes on a ring-like appearance as cloud-top warming occurs on its inside edge and the cooling moves away from the storm overnight, reaching several hundred kilometers from the circulation center by the following afternoon. The state-of-the-art TC diurnal cycle measurement has a limited ability to analyze the behavior beyond qualitative observations. We present a method for quantifying the TC diurnal cycle using one-dimensional persistent homology, a tool from Topological Data Analysis, by tracking maximum persistence and quantifying the cycle using the discrete Fourier transform. Using Geostationary Operational Environmental Satellite IR imagery data from Hurricane Felix (2007), our method is able to detect an approximate daily cycle.
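
    The final step, reading a daily period out of a persistence time series with the discrete Fourier transform, can be sketched on synthetic data (the sinusoidal signal below is made up for illustration; the paper uses maximum persistence values computed from the IR imagery):

```python
import numpy as np

# four days of half-hourly samples of a synthetic "maximum persistence" signal
hours = np.arange(0, 24 * 4, 0.5)
signal = 1.0 + 0.3 * np.sin(2 * np.pi * hours / 24)     # daily cycle

# DFT of the mean-centered series; pick the dominant frequency
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(len(hours), d=0.5)              # cycles per hour
dominant_period = 1.0 / freqs[np.argmax(spectrum)]      # in hours
```

    The dominant spectral peak sits at one cycle per 24 hours, which is how a diurnal signal is detected quantitatively rather than by visual inspection.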

  • The Very Best of Perfect Non-crossing Matchings
    arXiv.cs.CG Pub Date : 2020-01-09
    Ioannis Mantas; Marko Savić; Hendrik Schrezenmaier

    Given a set of points in the plane, we are interested in matching them with straight line segments. We focus on perfect (all points are matched) non-crossing (no two edges intersect) matchings. Apart from the well-known MinMax variant, where the length of the longest edge is minimized, we extend previous work by studying different optimization variants such as MaxMin, MinMin, and MaxMax. We consider both the monochromatic and bichromatic versions of these problems and provide efficient algorithms for various input point configurations.
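
    Checking whether a candidate matching is non-crossing reduces to pairwise segment intersection tests via the standard orientation predicate; a minimal sketch (points in general position, so only proper crossings are handled):

```python
from itertools import combinations

def crosses(seg1, seg2):
    """Do two segments with distinct endpoints properly cross?"""
    def orient(a, b, c):
        # sign of the cross product (b - a) x (c - a)
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    (p1, p2), (p3, p4) = seg1, seg2
    return (orient(p1, p2, p3) * orient(p1, p2, p4) < 0 and
            orient(p3, p4, p1) * orient(p3, p4, p2) < 0)

def is_noncrossing(matching):
    """A matching (list of segments) is non-crossing if no two edges cross."""
    return not any(crosses(a, b) for a, b in combinations(matching, 2))
```

    For example, the two diagonals of a square cross, while its two horizontal sides form a non-crossing matching of the same four points.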

  • An inexact matching approach for the comparison of plane curves with general elastic metrics
    arXiv.cs.CG Pub Date : 2020-01-09
    Yashil Sukurdeep; Martin Bauer; Nicolas Charon

    This paper introduces a new mathematical formulation and numerical approach for the computation of distances and geodesics between immersed planar curves. Our approach combines the general simplifying transform for first-order elastic metrics that was recently introduced by Kurtek and Needham, together with a relaxation of the matching constraint using parametrization-invariant fidelity metrics. The main advantages of this formulation are that it leads to a simple optimization problem for discretized curves, and that it provides a flexible approach to deal with noisy, inconsistent or corrupted data. These benefits are illustrated via a few preliminary numerical results.

  • RAC Drawings in Subcubic Area
    arXiv.cs.CG Pub Date : 2020-01-09
    Zahed Rahmati; Fatemeh Emami

    In this paper, we study tradeoffs between curve complexity and area of Right Angle Crossing drawings (RAC drawings), which is a challenging theoretical problem in graph drawing. Given a graph with $n$ vertices and $m$ edges, we provide a RAC drawing algorithm with curve complexity $6$ and area $O(n^{2.75})$, which takes time $O(n+m)$. Our algorithm improves the previous upper bound $O(n^3)$, by Di Giacomo et al., on the area of RAC drawings.

  • Computing Persistent Homology with Various Coefficient Fields in a Single Pass
    arXiv.cs.CG Pub Date : 2020-01-09
    Jean-Daniel Boissonnat (DATASHAPE, Inria); Clément Maria (DATASHAPE, Inria)

    This article introduces an algorithm to compute the persistent homology of a filtered complex with various coefficient fields in a single matrix reduction. The algorithm is output-sensitive in the total number of distinct persistent homological features in the diagrams for the different coefficient fields. This computation allows us to infer the prime divisors of the torsion coefficients of the integral homology groups of the topological space at any scale, hence furnishing a more informative description of topology than persistence in a single coefficient field. We provide theoretical complexity analysis as well as detailed experimental results. The code is part of the Gudhi software library.

  • A General Theory of Equivariant CNNs on Homogeneous Spaces
    arXiv.cs.CG Pub Date : 2018-11-05
    Taco Cohen; Mario Geiger; Maurice Weiler

    We present a general theory of Group equivariant Convolutional Neural Networks (G-CNNs) on homogeneous spaces such as Euclidean space and the sphere. Feature maps in these networks represent fields on a homogeneous base space, and layers are equivariant maps between spaces of fields. The theory enables a systematic classification of all existing G-CNNs in terms of their symmetry group, base space, and field type. We also consider a fundamental question: what is the most general kind of equivariant linear map between feature spaces (fields) of given types? Following Mackey, we show that such maps correspond one-to-one with convolutions using equivariant kernels, and characterize the space of such kernels.

  • Distributions of Matching Distances in Topological Data Analysis
    arXiv.cs.CG Pub Date : 2018-12-29
    So Mang Han; Taylor Okonek; Nikesh Yadav; Xiaojun Zheng

    In topological data analysis, we want to discern topological and geometric structure of data, and to understand whether or not certain features of data are significant as opposed to simply random noise. While progress has been made on statistical techniques for single-parameter persistence, the case of two-parameter persistence, which is highly desirable for real-world applications, has been less studied. This paper provides an accessible introduction to two-parameter persistent homology and presents results about matching distance between 2-D persistence modules obtained from families of point clouds. Results include observations of how differences in geometric structure of point clouds affect the matching distance between persistence modules. We offer these results as a starting point for the investigation of more complex data.

  • Quantitative Comparison of Time-Dependent Treemaps
    arXiv.cs.CG Pub Date : 2019-06-14
    Eduardo Vernier; Max Sondag; Joao Comba; Bettina Speckmann; Alexandru Telea; Kevin Verbeek

    Rectangular treemaps are often the method of choice to visualize large hierarchical datasets. Nowadays such datasets are available over time, hence there is a need for (a) treemaps that can handle time-dependent data, and (b) corresponding quality criteria that cover both a treemap's visual quality and its stability over time. In recent years a wide variety of (stable) treemapping algorithms has been proposed, with various advantages and limitations. We aim to provide insights to researchers and practitioners to allow them to make an informed choice when selecting a treemapping algorithm for specific applications and data. To this end, we perform an extensive quantitative evaluation of rectangular treemaps for time-dependent data. As part of this evaluation we propose a novel classification scheme for time-dependent datasets. Specifically, we observe that the performance of treemapping algorithms depends on the characteristics of the datasets used. We identify four potential representative features that characterize time-dependent hierarchical datasets and classify all datasets used in our experiments accordingly. We experimentally test the validity of this classification on more than 2000 datasets, and analyze the relative performance of 14 state-of-the-art rectangular treemapping algorithms across varying features. Finally, we visually summarize our results with respect to both visual quality and stability to aid users in making an informed choice among treemapping algorithms. All datasets, metrics, and algorithms are openly available to facilitate reuse and further comparative studies.

  • Optimal Joins using Compact Data Structures
    arXiv.cs.CG Pub Date : 2019-08-05
    Gonzalo Navarro; Juan L. Reutter; Javiel Rojas-Ledesma

    Worst-case optimal join algorithms have gained a lot of attention in the database literature. Several algorithms that are optimal in the worst case are now available, and many of them have been implemented and validated in practice. However, the implementation of these algorithms often requires an enhanced indexing structure: to achieve optimality we either need to build completely new indexes, or we must populate the database with several instantiations of indexes such as B$+$-trees. Either way, this means spending an extra amount of storage space that may be non-negligible. We show that optimal algorithms can be obtained directly from a representation that regards the relations as point sets in variable-dimensional grids, without the need for extra storage. Our representation is a compact quadtree for the static indexes, and a dynamic quadtree sharing subtrees (which we dub a qdag) for intermediate results. We develop a compositional algorithm to process full join queries under this representation, and show that the running time of this algorithm is worst-case optimal in data complexity. Remarkably, we can extend our framework to evaluate more expressive queries from relational algebra by introducing a lazy version of qdags (lqdags). Once again, we can show that the running time of our algorithms is worst-case optimal.
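
    The point-set view of a relation can be made concrete with a tiny quadtree sketch. This illustrates only the representation idea; the paper's compact quadtrees use succinct encodings, and its qdags additionally share identical subtrees:

```python
def quadtree(points, x0, y0, side):
    """Quadtree of a binary relation viewed as points in the grid
    [x0, x0+side) x [y0, y0+side): 0 for an empty region, 1 for a
    single occupied cell, else a list of four children
    (SW, SE, NW, NE)."""
    if not points:
        return 0
    if side == 1:
        return 1
    h = side // 2
    quads = [[], [], [], []]
    for (x, y) in points:
        qx, qy = (x - x0) >= h, (y - y0) >= h
        quads[2 * qy + qx].append((x, y))
    return [quadtree(quads[0], x0, y0, h),
            quadtree(quads[1], x0 + h, y0, h),
            quadtree(quads[2], x0, y0 + h, h),
            quadtree(quads[3], x0 + h, y0 + h, h)]

# the relation {(0,0), (3,3)} on a 4x4 grid
tree = quadtree([(0, 0), (3, 3)], 0, 0, 4)
```

    Empty quadrants collapse to a single 0, which is what makes the representation compact for sparse relations.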

  • Automatic surface mesh generation for discrete models: A complete and automatic pipeline based on reparameterization
    arXiv.cs.CG Pub Date : 2020-01-08
    Pierre-Alexandre Beaufort; Christophe Geuzaine; Jean-François Remacle

    Triangulations are a ubiquitous input for the finite element community. However, most raw triangulations obtained by imaging techniques are unsuitable as-is for finite element analysis. In this paper, we give a robust pipeline for handling such triangulations, based on the computation of a one-to-one parametrization for automatically selected patches of input triangles, which makes each patch amenable to remeshing by standard finite element meshing algorithms. Using only geometrical arguments, we prove that a discrete parametrization of a patch is one-to-one if (and only if) its image in the parameter space is such that all parametric triangles have positive area. We then derive a non-standard linear discretization scheme based on mean value coordinates to compute such one-to-one parametrizations, and show that the scheme does not discretize a Laplacian on a structured mesh. The proposed pipeline is implemented in the open-source mesh generator Gmsh, where the creation of suitable patches is based on triangulation topology and parametrization quality, combined with feature edge detection. Several examples illustrate the robustness of the resulting implementation.

  • The Simplex Tree: an Efficient Data Structure for General Simplicial Complexes
    arXiv.cs.CG Pub Date : 2020-01-08
    Jean-Daniel Boissonnat (DATASHAPE, Inria); Clément Maria (DATASHAPE, Inria)

    This paper introduces a data structure, called the simplex tree, to represent abstract simplicial complexes of any dimension. All faces of the simplicial complex are explicitly stored in a trie whose nodes are in bijection with the faces of the complex. This data structure allows us to efficiently implement a large range of basic operations on simplicial complexes. We provide a theoretical complexity analysis as well as detailed experimental results. We more specifically study Rips and witness complexes.
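
    The trie-of-faces idea can be sketched in a few lines. This is a naive illustration only (faces are inserted one subset at a time), not the refined implementation the paper describes:

```python
from itertools import combinations

class SimplexTree:
    """Minimal sketch of a simplex tree: a trie whose nodes are in
    bijection with the faces of the complex; the sorted vertex list of
    a simplex spells the path from the root to its node."""
    def __init__(self):
        self.children = {}

    def insert(self, simplex):
        """Insert a simplex together with all of its faces."""
        verts = sorted(simplex)
        for k in range(1, len(verts) + 1):
            for face in combinations(verts, k):
                node = self
                for v in face:
                    node = node.children.setdefault(v, SimplexTree())

    def contains(self, simplex):
        """Is the given simplex a face of the complex?"""
        node = self
        for v in sorted(simplex):
            if v not in node.children:
                return False
            node = node.children[v]
        return True

st = SimplexTree()
st.insert([0, 1, 2])   # a triangle; its edges and vertices come with it
```

    Membership queries walk one root-to-node path, so their cost depends only on the dimension of the queried simplex, which is the efficiency the trie layout buys.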

  • The Compressed Annotation Matrix: an Efficient Data Structure for Computing Persistent Cohomology
    arXiv.cs.CG Pub Date : 2013-04-25
    Jean-Daniel Boissonnat (DATASHAPE); Tamal K. Dey (OSU); Clément Maria (DATASHAPE)

    Persistent homology with coefficients in a field F coincides with persistent cohomology by duality. We propose an implementation of a recently introduced algorithm for persistent cohomology that attaches annotation vectors to the simplices. We separate the representation of the simplicial complex from the representation of the cohomology groups, and introduce a new data structure for maintaining the annotation matrix, which is more compact and substantially reduces the number of matrix operations. In addition, we propose heuristics to further simplify the representation of the cohomology groups and improve both time and space complexity. The paper provides a theoretical analysis, as well as a detailed experimental study of our implementation and a comparison with state-of-the-art software for persistent homology and cohomology.

  • Spatial Applications of Topological Data Analysis: Cities, Snowflakes, Random Structures, and Spiders Spinning Under the Influence
    arXiv.cs.CG Pub Date : 2020-01-07
    Michelle Feng; Mason A. Porter

Spatial networks are ubiquitous in social, geographic, physical, and biological applications. To understand their large-scale structure, it is important to develop methods that allow one to directly probe the effects of space on structure and dynamics. Historically, algebraic topology has provided one framework for rigorously and quantitatively describing the global structure of a space, and recent advances in topological data analysis (TDA) have given scholars a new lens for analyzing network data. In this paper, we study a variety of spatial networks, including both synthetic and natural ones, using novel topological methods that we recently developed specifically for analyzing spatial networks. We demonstrate that our methods capture meaningful quantities, with specifics that depend on context, in spatial networks and thereby provide useful insights into the structure of those networks, including a novel approach for characterizing them based on their topological structures. We illustrate these ideas with examples of synthetic networks and dynamics on them, street networks in cities, snowflakes, and webs spun by spiders under the influence of various psychotropic substances.

  • Deep Iterative Surface Normal Estimation
    arXiv.cs.CG Pub Date : 2019-04-15
    Jan Eric Lenssen; Christian Osendorfer; Jonathan Masci

This paper presents an end-to-end differentiable algorithm for robust and detail-preserving surface normal estimation on unstructured point clouds. We utilize graph neural networks to iteratively parameterize an adaptive anisotropic kernel that produces point weights for weighted least-squares plane fitting in local neighborhoods. The approach retains the interpretability and efficiency of traditional sequential plane fitting while benefiting from adaptation to data set statistics through deep learning. This results in a state-of-the-art surface normal estimator that is robust to noise, outliers and point density variation, preserves sharp features through anisotropic kernels, and achieves equivariance through a local quaternion-based spatial transformer. Contrary to previous deep learning methods, the proposed approach does not require any hand-crafted features or preprocessing. It improves on the state-of-the-art results while being more than two orders of magnitude faster and more parameter-efficient.
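The inner weighted least-squares plane fit can be sketched as a weighted PCA: the estimated normal is the eigenvector of the weighted covariance with the smallest eigenvalue. In the paper the weights come from a learned anisotropic kernel; here they are simply given, and the function name is illustrative.

```python
import numpy as np

def weighted_plane_normal(points, weights):
    """Weighted least-squares plane fit on a local neighborhood.
    `points` is an (n, 3) array, `weights` an (n,) array of nonnegative
    point weights (learned in the paper, just supplied here).
    Returns a unit normal (up to sign)."""
    w = weights / weights.sum()
    centroid = (w[:, None] * points).sum(axis=0)
    d = points - centroid
    cov = (w[:, None] * d).T @ d            # weighted covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return eigvecs[:, 0]                    # direction of least variance
```

For points sampled exactly on a plane, the smallest eigenvalue is zero and the returned vector is the plane normal up to sign.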

  • Computing Euclidean k-Center over Sliding Windows
    arXiv.cs.CG Pub Date : 2020-01-04
    Sang-Sub Kim

In the Euclidean $k$-center problem in the sliding window model, input points arrive in a data stream and the goal is to find $k$ congruent balls of minimum radius whose union covers the $N$ most recent points of the stream. In this model, input points may be examined only once and the amount of space available to store relevant information is limited. Cohen-Addad et al.~\cite{cohen-2016} gave a $(6+\epsilon)$-approximation for the metric $k$-center problem using $O((k/\epsilon) \log \alpha)$ points, where $\alpha$ is the ratio of the largest to the smallest distance and is assumed to be known in advance. In this paper, we present a $(3+\epsilon)$-approximation algorithm for the Euclidean $1$-center problem using $O((1/\epsilon) \log \alpha)$ points. We present an algorithm for the Euclidean $k$-center problem that maintains a coreset of size $O(k)$. Given any $c$-approximation for the coreset, where $c$ is a positive real number, our algorithm gives a $(c+2\sqrt{3} + \epsilon)$-approximation for the Euclidean $k$-center problem using $O((k/\epsilon) \log \alpha)$ points. For example, using the $2$-approximation~\cite{feder-greene-1988} of the coreset, our algorithm gives a $(2+2\sqrt{3} + \epsilon)$-approximation ($\approx 5.465$) using $O(k\log k)$ time. This improves on the approximation factor of $(6+\epsilon)$ by Cohen-Addad et al.~\cite{cohen-2016} with the same space complexity and a smaller update time per point. Moreover, we remove the assumption that $\alpha$ is known in advance. Our idea can be adapted to the metric diameter problem and the metric $k$-center problem to remove this assumption. For low-dimensional Euclidean space, we give an approximation algorithm with an even better approximation guarantee.
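A classic $c$-approximation that could be plugged in on the coreset is the greedy farthest-point heuristic (a $2$-approximation for static $k$-center). The sketch below shows only that static subroutine, under illustrative names; the paper's sliding-window coreset maintenance is not reproduced here.

```python
import math

def greedy_k_center(points, k):
    """Gonzalez's farthest-point heuristic: a 2-approximation for
    k-center on a static point set. Repeatedly adds the point farthest
    from the current centers, then reports the covering radius."""
    centers = [points[0]]
    while len(centers) < k:
        # Pick the point whose distance to its nearest center is largest.
        farthest = max(points, key=lambda p: min(math.dist(p, c) for c in centers))
        centers.append(farthest)
    radius = max(min(math.dist(p, c) for c in centers) for p in points)
    return centers, radius
```

With $k$ equal to the number of distinct points, every point becomes a center and the radius is zero, which is a handy sanity check.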

  • Non-Convex Planar Harmonic Maps
    arXiv.cs.CG Pub Date : 2020-01-05
    Shahar Z. Kovalsky; Noam Aigerman; Ingrid Daubechies; Michael Kazhdan; Jianfeng Lu; Stefan Steinerberger

We formulate a novel characterization of a family of invertible maps between two-dimensional domains. Our work follows two classic results: the Rad\'o-Kneser-Choquet (RKC) theorem, which establishes the invertibility of harmonic maps into a convex planar domain; and Tutte's embedding theorem for planar graphs (RKC's discrete counterpart), which proves the invertibility of piecewise-linear maps of triangulated domains satisfying a discrete-harmonic principle, into a convex planar polygon. In both theorems, the convexity of the target domain is essential for ensuring invertibility. We extend these characterizations, in both the continuous and discrete cases, by replacing convexity with a less restrictive condition. In the continuous case, Alessandrini and Nesi provide a characterization of invertible harmonic maps into non-convex domains with a smooth boundary by adding conditions on orientation preservation along the boundary. We extend their results by defining a condition on the normal derivatives along the boundary, which we call the cone condition; this condition is tractable and geometrically intuitive, encoding a weak notion of local invertibility. The cone condition enables us to extend the result of Alessandrini and Nesi to harmonic maps into non-convex domains with a piecewise-smooth boundary. In the discrete case, we use an analog of the cone condition to characterize invertible discrete-harmonic piecewise-linear maps of triangulations. This gives an analog of our continuous results and characterizes invertible discrete-harmonic maps in terms of the orientation of triangles incident on the boundary.
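The discrete starting point, Tutte's embedding into a convex polygon, amounts to one linear solve: boundary vertices are pinned and each interior vertex is the average of its neighbors. A dense sketch with uniform (discrete-harmonic) weights; names and the dense solver are illustrative, not the paper's code.

```python
import numpy as np

def tutte_embedding(n, edges, boundary_pos):
    """Tutte's embedding with uniform weights. `boundary_pos` maps each
    boundary vertex to its pinned (x, y) position on a convex polygon;
    every interior vertex is placed at the mean of its neighbors by
    solving a linear system. Dense solve for clarity only."""
    nbrs = [[] for _ in range(n)]
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    A = np.zeros((n, n))
    b = np.zeros((n, 2))
    for i in range(n):
        if i in boundary_pos:          # pinned boundary vertex
            A[i, i] = 1.0
            b[i] = boundary_pos[i]
        else:                           # interior: deg(i) * p_i = sum of neighbors
            A[i, i] = len(nbrs[i])
            for j in nbrs[i]:
                A[i, j] = -1.0
    return np.linalg.solve(A, b)
```

For a single interior vertex joined to four corners of a square, the solve places it at the centroid, and Tutte's theorem guarantees the resulting piecewise-linear map is injective.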

  • Advice Complexity of Treasure Hunt in Geometric Terrains
    arXiv.cs.CG Pub Date : 2018-11-16
    Andrzej Pelc; Ram Narayan Yadav

    Treasure hunt is the task of finding an inert target by a mobile agent in an unknown environment. We consider treasure hunt in geometric terrains with obstacles. Both the terrain and the obstacles are modeled as polygons and both the agent and the treasure are modeled as points. The agent navigates in the terrain, avoiding obstacles, and finds the treasure when there is a segment of length at most 1 between them, unobstructed by the boundary of the terrain or by the obstacles. The cost of finding the treasure is the length of the trajectory of the agent. We investigate the amount of information that the agent needs a priori in order to find the treasure at cost $O(L)$, where $L$ is the length of the shortest path in the terrain from the initial position of the agent to the treasure, avoiding obstacles. Following the paradigm of algorithms with advice, this information is given to the agent in advance as a binary string, by an oracle cooperating with the agent and knowing the whole environment: the terrain, the position of the treasure and the initial position of the agent. Advice complexity of treasure hunt is the minimum length of the advice string (up to multiplicative constants) that enables the agent to find the treasure at cost $O(L)$. We first consider treasure hunt in regular terrains which are defined as convex polygons with convex $c$-fat obstacles, for some constant $c>1$. A polygon is $c$-fat if the ratio of the radius of the smallest disc containing it to the radius of the largest disc contained in it is at most $c$. For the class of regular terrains, we establish the exact advice complexity of treasure hunt. We then show that advice complexity of treasure hunt for the class of arbitrary terrains (even for non-convex polygons without obstacles, and even for those with only horizontal or vertical sides) is exponentially larger than for regular terrains.

  • Betti numbers of unordered configuration spaces of small graphs
    arXiv.cs.CG Pub Date : 2019-06-03
    Gabriel C. Drummond-Cole

    The purpose of this document is to provide data about known Betti numbers of unordered configuration spaces of small graphs in order to guide research and avoid duplicated effort. It contains information for connected multigraphs having at most nine edges which contain no loops, no bivalent vertices, and no internal (i.e., non-leaf) bridges.

  • On the edge-length ratio of 2-trees
    arXiv.cs.CG Pub Date : 2019-09-24
    Václav Blažej; Jiří Fiala; Giuseppe Liotta

    We study planar straight-line drawings of graphs that minimize the ratio between the length of the longest and the shortest edge. We answer a question of Lazard et al. [Theor. Comput. Sci. {\bf 770} (2019), 88--94] and provide a 2-tree which does not allow any drawing with such bounded ratio. On the other hand, when the ratio is restricted to adjacent edges only, we provide a procedure that for any 2-tree yields a drawing with the ratio arbitrarily close to 4.
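Both quantities studied, the global ratio over all edges and the local variant restricted to adjacent edges, are direct to compute for a given straight-line drawing; a small sketch with illustrative names:

```python
import math

def edge_length_ratio(pos, edges, local=False):
    """Ratio of longest to shortest edge length in a straight-line
    drawing. `pos` maps vertices to (x, y); `edges` is a list of
    vertex pairs. With local=True, the ratio is taken only over
    pairs of edges sharing an endpoint (the adjacent-edges variant)."""
    length = {e: math.dist(pos[e[0]], pos[e[1]]) for e in edges}
    if not local:
        vals = list(length.values())
        return max(vals) / min(vals)
    best = 1.0
    for e in edges:
        for f in edges:
            if e != f and set(e) & set(f):   # edges share a vertex
                best = max(best, length[e] / length[f])
    return best
```

On a path drawn with segment lengths 1 and 2, both ratios equal 2, since the two edges are adjacent.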

  • Computing cross fields -- A PDE approach based on the Ginzburg-Landau theory
    arXiv.cs.CG Pub Date : 2017-06-02
Pierre-Alexandre Beaufort; Christos Georgiadis; Jonathan Lambrechts; François Henrotte; Christophe Geuzaine; Jean-François Remacle

This paper proposes a method to compute crossfields based on the Ginzburg-Landau theory. The Ginzburg-Landau functional has two terms: the Dirichlet energy of the distribution and a term penalizing the mismatch between the fixed and actual norm of the distribution. Directional fields on surfaces are known to have a number of critical points, which are properly identified with the Ginzburg-Landau approach: the asymptotic behavior of the Ginzburg-Landau problem provides well-distributed critical points over the 2-manifold, whose indices are as low as possible. The central idea of this paper is to exploit this theoretical background for crossfield computation on arbitrary surfaces. Such crossfields are instrumental in the generation of meshes with quadrangular elements. The relation between the topological properties of quadrangular meshes and crossfields is hence first recalled. It is then shown that a crossfield on a surface can be represented by a complex function of unit norm with a number of critical points, i.e., a nearly everywhere smooth function taking its values in the unit circle of the complex plane. As maximal smoothness of the crossfield is equivalent to minimal energy, the crossfield problem is equivalent to an optimization problem based on the Ginzburg-Landau functional. A discretization scheme with Crouzeix-Raviart elements is applied, and the correctness of the resulting finite element formulation is validated on the unit disk by comparison with an analytical solution. The method is also applied to the 2-sphere where, surprisingly but rightly, the computed critical points are not located at the vertices of a cube, but at those of an anticube.
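The shape of the energy can be illustrated on a toy 1D periodic grid: gradient descent on a discrete Dirichlet term plus a norm-mismatch penalty drives a complex field toward unit modulus. This is only a caricature of the functional, not the paper's Crouzeix-Raviart surface discretization; all parameter values are illustrative.

```python
import numpy as np

def ginzburg_landau_descent(u0, eps=0.5, lr=0.01, steps=2000):
    """Gradient descent on a 1D periodic discretization of a
    Ginzburg-Landau-type energy for a complex field u:
        E(u) = sum_i |u_{i+1} - u_i|^2 + (1/eps^2) * sum_i (|u_i|^2 - 1)^2.
    The first term is the Dirichlet energy; the second penalizes the
    mismatch between the actual and fixed (unit) norm."""
    u = u0.astype(complex)
    for _ in range(steps):
        lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)  # discrete Laplacian
        penalty = (np.abs(u) ** 2 - 1) * u            # gradient of norm penalty
        u = u + lr * (lap - (2.0 / eps ** 2) * penalty)
    return u
```

Starting from a constant field of modulus larger than one, the Dirichlet term vanishes and the penalty alone pulls the modulus back to the unit circle, mirroring how the functional enforces near-unit norm away from critical points.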

  • Upper bounds for stabbing simplices by a line
    arXiv.cs.CG Pub Date : 2020-01-03
    Inbar Daum-Sadon; Gabriel Nivasch

It is known that for every dimension $d\ge 2$ and every $k<d$ there exists a constant $c_{d,k}>0$ such that for every $n$-point set $X\subset \mathbb R^d$ there exists a $k$-flat that intersects at least $c_{d,k} n^{d+1-k} - o(n^{d+1-k})$ of the $(d-k)$-dimensional simplices spanned by $X$. However, the optimal values of the constants $c_{d,k}$ are mostly unknown. The case $k=0$ (stabbing by a point) has received a great deal of attention. In this paper we focus on the case $k=1$ (stabbing by a line). Specifically, we try to determine the upper bounds yielded by two point sets, known as the "stretched grid" and the "stretched diagonal". Even though the calculations are independent of $n$, they are still very complicated, so we resort to analytical and numerical software methods. Surprisingly, for $d=4,5,6$ the stretched grid yields better bounds than the stretched diagonal (unlike for all cases $k=0$ and for the case $(d,k)=(3,1)$, in which both point sets yield the same bound). The stretched grid yields $c_{4,1}\leq 0.00457936$, $c_{5,1}\leq 0.000405335$, and $c_{6,1}\leq 0.0000291323$.

  • A polynomial-time partitioning algorithm for weighted cactus graphs
    arXiv.cs.CG Pub Date : 2020-01-01
    Maike Buchin; Leonie Selbach

Partitioning problems where the goal is to partition a graph into connected components are known to be NP-hard for some graph classes but polynomial-time solvable for others. We consider different variants of the $(l,u)$-partition problem: the $p$-$(l,u)$-partition problem and the minimum resp. maximum $(l,u)$-partition problem. That is, partitioning a graph into exactly $p$ or a minimal resp. maximal number of clusters such that every cluster fulfills the following weight condition: the weight of each cluster should be at least $l$ and at most $u$. These kinds of partitioning problems are NP-hard for series-parallel graphs but can be solved in polynomial time on trees. In this paper we present partitioning algorithms for cactus graphs, showing that these partition problems are polynomial-time solvable for this graph class as well.
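The tree case that such algorithms build on can be sketched as a bottom-up dynamic program: each vertex keeps the set of achievable pairs (clusters already cut off below it, weight of the still-open component containing it). This is an illustrative sketch of the maximum-$(l,u)$-partition DP on trees, not the paper's cactus algorithm, and all names are made up for the example.

```python
def max_lu_partition_tree(children, weight, root, l, u):
    """Maximum (l,u)-partition of a rooted tree. `children[v]` lists the
    children of v, `weight[v]` is v's weight. For each vertex we track
    pairs (k, w): k clusters completed in the subtree so far, w = weight
    of the open component containing v. Returns the max number of
    clusters, or None if no valid partition exists."""
    def dp(v):
        states = {(0, weight[v])}
        for c in children[v]:
            child_states = dp(c)
            new = set()
            for k, w in states:
                for kc, wc in child_states:
                    if l <= wc <= u:               # cut edge (v, c): close c's part
                        new.add((k + kc + 1, w))
                    new.add((k + kc, w + wc))      # keep c merged into v's part
            states = new
        return states

    valid = [k + 1 for k, w in dp(root) if l <= w <= u]  # root's part must fit too
    return max(valid) if valid else None
```

On a unit-weight path of four vertices with $l = u = 2$, the unique valid answer is two clusters of two vertices each.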

Contents have been reproduced by permission of the publishers.