Current journal: arXiv - CS - Graphics
  • Learning Generative Models of Shape Handles
    arXiv.cs.GR Pub Date : 2020-04-06
    Matheus Gadelha; Giorgio Gori; Duygu Ceylan; Radomir Mech; Nathan Carr; Tamy Boubekeur; Rui Wang; Subhransu Maji

    We present a generative model to synthesize 3D shapes as sets of handles -- lightweight proxies that approximate the original 3D shape -- for applications in interactive editing, shape parsing, and building compact 3D representations. Our model can generate handle sets with varying cardinality and different types of handles (Figure 1). Key to our approach is a deep architecture that predicts both the

  • Iconify: Converting Photographs into Icons
    arXiv.cs.GR Pub Date : 2020-04-07
    Takuro Karamatsu; Gibran Benitez-Garcia; Keiji Yanai; Seiichi Uchida

    In this paper, we tackle a challenging domain conversion task between photo and icon images. Although icons often originate from real object images (i.e., photographs), severe abstractions and simplifications are applied by professional graphic designers to generate icon images. Moreover, there is no one-to-one correspondence between the two domains; for this reason, we cannot use it as the ground-truth

  • Learning to Accelerate Decomposition for Multi-Directional 3D Printing
    arXiv.cs.GR Pub Date : 2020-03-17
    Chenming Wu; Yong-Jin Liu; Charlie C. L. Wang

    As a strong complement to additive manufacturing, multi-directional 3D printing can decrease or eliminate the need for support structures. Recent work proposed a beam-guided search algorithm to find an optimized sequence of plane-clipping, which gives a volume decomposition of a given 3D model. Different printing directions are employed in different regions so that a model can

  • 3D Dynamic Point Cloud Inpainting via Temporal Consistency on Graphs
    arXiv.cs.GR Pub Date : 2019-04-23
    Zeqing Fu; Wei Hu; Zongming Guo

    With the development of 3D laser scanning techniques and depth sensors, 3D dynamic point clouds have attracted increasing attention as a representation of 3D objects in motion, enabling various applications such as 3D immersive tele-presence, gaming and navigation. However, dynamic point clouds usually exhibit holes of missing data, mainly due to the fast motion, the limitation of acquisition and complicated

  • Cross-Shape Graph Convolutional Networks
    arXiv.cs.GR Pub Date : 2020-03-20
    Dmitry Petrov; Evangelos Kalogerakis

    We present a method that processes 3D point clouds by performing graph convolution operations across shapes. In this manner, point descriptors are learned by allowing interaction and propagation of feature representations within a shape collection. To enable this form of non-local, cross-shape graph convolution, our method learns a pairwise point attention mechanism indicating the degree of interaction

  • Predicting Novel Views Using Generative Adversarial Query Network
    arXiv.cs.GR Pub Date : 2019-04-10
    Phong Nguyen-Ha; Lam Huynh; Esa Rahtu; Janne Heikkila

    The problem of predicting a novel view of the scene using an arbitrary number of observations is a challenging problem for computers as well as for humans. This paper introduces the Generative Adversarial Query Network (GAQN), a general learning framework for novel view synthesis that combines Generative Query Network (GQN) and Generative Adversarial Networks (GANs). The conventional GQN encodes input

  • Deformation-Aware 3D Model Embedding and Retrieval
    arXiv.cs.GR Pub Date : 2020-04-02
    Mikaela Angelina Uy; Jingwei Huang; Minhyuk Sung; Tolga Birdal; Leonidas Guibas

    We introduce a new problem of $\textit{retrieving}$ 3D models that are not just similar but are deformable to a given query shape. We then present a novel deep $\textit{deformation-aware}$ embedding to solve this retrieval task. 3D model retrieval is a fundamental operation for recovering a clean and complete 3D model from a noisy and partial 3D scan. However, given a finite collection of 3D shapes

  • Intrinsic Point Cloud Interpolation via Dual Latent Space Navigation
    arXiv.cs.GR Pub Date : 2020-04-03
    Marie-Julie Rakotosaona; Maks Ovsjanikov

    We present a learning-based method for interpolating and manipulating 3D shapes represented as point clouds that is explicitly designed to preserve intrinsic shape properties. Our approach is based on constructing a dual encoding space that enables shape synthesis and, at the same time, provides links to the intrinsic shape information, which is typically not available on point cloud data. Our method

  • Interpreted Programming Language Extension for 3D Render on the Web
    arXiv.cs.GR Pub Date : 2020-04-03
    Amaro Duarte; Esmitt Ramirez

    There are tools that ease 2D/3D graphics development for programmers. However, these are not always directly accessible to all users: they may require commercial licenses, be trial-based, or demand long learning periods before use. In the modern world, the time to release a final program is crucial to a company's success, as well as to saving money. Thus, if programmers can use tools that minimize the development

  • Geometry-Driven Detection, Tracking and Visual Analysis of Viscous and Gravitational Fingers
    arXiv.cs.GR Pub Date : 2019-11-27
    Jiayi Xu; Soumya Dutta; Wenbin He; Joachim Moortgat; Han-Wei Shen

    Viscous and gravitational flow instabilities cause a displacement front to break up into finger-like fluids. The detection and evolutionary analysis of these fingering instabilities are critical in multiple scientific disciplines such as fluid mechanics and hydrogeology. However, previous detection methods of the viscous and gravitational fingers are based on density thresholding, which provides limited

  • Learning a Neural 3D Texture Space from 2D Exemplars
    arXiv.cs.GR Pub Date : 2019-12-09
    Philipp Henzler; Niloy J. Mitra; Tobias Ritschel

    We propose a generative model of 2D and 3D natural textures with diversity, visual fidelity and at high computational efficiency. This is enabled by a family of methods that extend ideas from classic stochastic procedural texturing (Perlin noise) to learned, deep, non-linearities. The key idea is a hard-coded, tunable and differentiable step that feeds multiple transformed random 2D or 3D fields into
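The classic construction this abstract extends can be sketched without any learning: sum bilinearly interpolated random lattices over several octaves (value noise, a close cousin of Perlin noise), then push the field through a fixed nonlinearity standing in for the learned, deep non-linearities. All names here are illustrative, not the paper's.

```python
import numpy as np

def value_noise(shape, grid=8, octaves=4, seed=0):
    """Multi-octave value noise: bilinearly interpolated random lattices
    summed at doubling frequency and halving amplitude."""
    rng = np.random.default_rng(seed)
    h, w = shape
    out = np.zeros(shape)
    amp = 1.0
    for o in range(octaves):
        g = grid * (2 ** o)
        lattice = rng.random((g + 1, g + 1))
        ys = np.linspace(0, g, h, endpoint=False)
        xs = np.linspace(0, g, w, endpoint=False)
        y0 = ys.astype(int); x0 = xs.astype(int)
        fy = (ys - y0)[:, None]; fx = (xs - x0)[None, :]
        a = lattice[np.ix_(y0, x0)]
        b = lattice[np.ix_(y0, x0 + 1)]
        c = lattice[np.ix_(y0 + 1, x0)]
        d = lattice[np.ix_(y0 + 1, x0 + 1)]
        out += amp * ((1-fy)*(1-fx)*a + (1-fy)*fx*b + fy*(1-fx)*c + fy*fx*d)
        amp *= 0.5
    return out

noise = value_noise((64, 64))
tex = np.tanh(3.0 * (noise - noise.mean()))  # fixed stand-in nonlinearity
```

The paper's contribution is to make the per-octave transforms and the nonlinearity learned and differentiable; the fixed `tanh` above only marks where that step sits.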

  • Synchronizing Probability Measures on Rotations via Optimal Transport
    arXiv.cs.GR Pub Date : 2020-04-01
    Tolga Birdal; Michael Arbel; Umut Şimşekli; Leonidas Guibas

    We introduce a new paradigm, $\textit{measure synchronization}$, for synchronizing graphs with measure-valued edges. We formulate this problem as maximization of the cycle-consistency in the space of probability measures over relative rotations. In particular, we aim at estimating marginal distributions of absolute orientations by synchronizing the $\textit{conditional}$ ones, which are defined on

  • PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes
    arXiv.cs.GR Pub Date : 2019-11-25
    Rundi Wu; Yixin Zhuang; Kai Xu; Hao Zhang; Baoquan Chen

    We introduce PQ-NET, a deep neural network which represents and generates 3D shapes via sequential part assembly. The input to our network is a 3D shape segmented into parts, where each part is first encoded into a feature representation using a part autoencoder. The core component of PQ-NET is a sequence-to-sequence or Seq2Seq autoencoder which encodes a sequence of part features into a latent vector

  • Label-Efficient Learning on Point Clouds using Approximate Convex Decompositions
    arXiv.cs.GR Pub Date : 2020-03-30
    Matheus Gadelha; Aruni RoyChowdhury; Gopal Sharma; Evangelos Kalogerakis; Liangliang Cao; Erik Learned-Miller; Rui Wang; Subhransu Maji

    The problems of shape classification and part segmentation from 3D point clouds have garnered increasing attention in the last few years. But both of these problems suffer from relatively small training sets, creating the need for statistically efficient methods to learn 3D shape representations. In this work, we investigate the use of Approximate Convex Decompositions (ACD) as a self-supervisory signal

  • AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild"
    arXiv.cs.GR Pub Date : 2020-03-30
    Alexandros Lattas; Stylianos Moschoglou; Baris Gecer; Stylianos Ploumpis; Vasileios Triantafyllou; Abhijeet Ghosh; Stefanos Zafeiriou

    In recent years, with the advent of Generative Adversarial Networks (GANs), many face analysis tasks have achieved astounding performance, with applications including, but not limited to, face generation and 3D face reconstruction from a single "in-the-wild" image. Nevertheless, to the best of our knowledge, there is no method which can produce high-resolution photorealistic 3D faces from "in-the-wild"

  • Y-net: Multi-scale feature aggregation network with wavelet structure similarity loss function for single image dehazing
    arXiv.cs.GR Pub Date : 2020-03-31
    Hao-Hsiang Yang; Chao-Han Huck Yang; Yi-Chang James Tsai

    Single image dehazing is an ill-posed two-dimensional signal reconstruction problem. Recently, deep convolutional neural networks (CNNs) have been successfully used in many computer vision problems. In this paper, we propose a Y-net that is named for its structure. This network reconstructs clear images by aggregating multi-scale feature maps. Additionally, we propose a Wavelet Structure SIMilarity

  • Deep 3D Capture: Geometry and Reflectance from Sparse Multi-View Images
    arXiv.cs.GR Pub Date : 2020-03-27
    Sai Bi; Zexiang Xu; Kalyan Sunkavalli; David Kriegman; Ravi Ramamoorthi

    We introduce a novel learning-based method to reconstruct the high-quality geometry and complex, spatially-varying BRDF of an arbitrary object from a sparse set of only six images captured by wide-baseline cameras under collocated point lighting. We first estimate per-view depth maps using a deep multi-view stereo network; these depth maps are used to coarsely align the different views. We propose

  • PointGMM: a Neural GMM Network for Point Clouds
    arXiv.cs.GR Pub Date : 2020-03-30
    Amir Hertz; Rana Hanocka; Raja Giryes; Daniel Cohen-Or

    Point clouds are a popular representation for 3D shapes. However, they encode a particular sampling without accounting for shape priors or non-local information. We advocate for the use of a hierarchical Gaussian mixture model (hGMM), which is a compact, adaptive and lightweight representation that probabilistically defines the underlying 3D surface. We present PointGMM, a neural network that learns
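For background, fitting a flat (non-hierarchical) Gaussian mixture to a point set by expectation-maximization looks like the sketch below. PointGMM instead predicts a hierarchical GMM with a network, so this is context for what the representation encodes, not the paper's method.

```python
import numpy as np

def em_step(points, means, covs, weights):
    """One EM iteration for a GMM over a 2D/3D point set.
    E-step: responsibilities resp[n, k]; M-step: re-estimate parameters."""
    n, d = points.shape
    k = means.shape[0]
    resp = np.zeros((n, k))
    for j in range(k):
        diff = points - means[j]
        inv = np.linalg.inv(covs[j])
        mahal = np.einsum('ni,ij,nj->n', diff, inv, diff)
        norm = np.sqrt(((2 * np.pi) ** d) * np.linalg.det(covs[j]))
        resp[:, j] = weights[j] * np.exp(-0.5 * mahal) / norm
    resp /= resp.sum(axis=1, keepdims=True)
    nk = resp.sum(axis=0)
    new_means = (resp.T @ points) / nk[:, None]
    new_covs = []
    for j in range(k):
        diff = points - new_means[j]
        cov = (resp[:, j, None] * diff).T @ diff / nk[j]
        new_covs.append(cov + 1e-6 * np.eye(d))  # regularize for invertibility
    return new_means, np.array(new_covs), nk / n
```

A hierarchical variant recursively splits each component, which is what makes the representation adaptive across scales.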

  • Human Motion Transfer with 3D Constraints and Detail Enhancement
    arXiv.cs.GR Pub Date : 2020-03-30
    Yang-Tian Sun; Qian-Cheng Fu; Yue-Ren Jiang; Zitao Liu; Yu-Kun Lai; Hongbo Fu; Lin Gao

    We propose a new method for realistic human motion transfer using a generative adversarial network (GAN), which generates a motion video of a target character imitating actions of a source character, while maintaining high authenticity of the generated results. We tackle the problem by decoupling and recombining the posture information and appearance information of both the source and target characters

  • BSP-Net: Generating Compact Meshes via Binary Space Partitioning
    arXiv.cs.GR Pub Date : 2019-11-16
    Zhiqin Chen; Andrea Tagliasacchi; Hao Zhang

    Polygonal meshes are ubiquitous in the digital 3D domain, yet they have only played a minor role in the deep learning revolution. Leading methods for learning generative models of shapes rely on implicit functions, and generate meshes only after expensive iso-surfacing routines. To overcome these challenges, we are inspired by a classical spatial data structure from computer graphics, Binary Space

  • DR-KFS: A Differentiable Visual Similarity Metric for 3D Shape Reconstruction
    arXiv.cs.GR Pub Date : 2019-11-20
    Jiongchao Jin; Akshay Gadi Patil; Zhang Xiong; Hao Zhang

    We introduce a differential visual similarity metric to train deep neural networks for 3D reconstruction, aimed at improving reconstruction quality. The metric compares two 3D shapes by measuring distances between multi-view images differentiably rendered from the shapes. Importantly, the image-space distance is also differentiable and measures visual similarity, rather than pixel-wise distortion.

  • Hierarchical Optimization Time Integration for CFL-rate MPM Stepping
    arXiv.cs.GR Pub Date : 2019-11-18
    Xinlei Wang; Minchen Li; Yu Fang; Xinxin Zhang; Ming Gao; Min Tang; Danny M. Kaufman; Chenfanfu Jiang

    We propose Hierarchical Optimization Time Integration (HOT) for efficient implicit time-stepping of the Material Point Method (MPM) irrespective of simulated materials and conditions. HOT is an MPM-specialized hierarchical optimization algorithm that solves nonlinear time step problems for large-scale MPM systems near the CFL-limit. HOT provides convergent simulations "out-of-the-box" across widely

  • A Hybrid Lagrangian/Eulerian Collocated Advection and Projection Method for Fluid Simulation
    arXiv.cs.GR Pub Date : 2020-03-27
    Steven W. Gagniere; David A. B. Hyde; Alan Marquez-Razon; Chenfanfu Jiang; Ziheng Ge; Xuchen Han; Qi Guo; Joseph Teran

    We present a hybrid particle/grid approach for simulating incompressible fluids on collocated velocity grids. We interchangeably use particle and grid representations of transported quantities to balance efficiency and accuracy. A novel Backward Semi-Lagrangian method is derived to improve accuracy of grid based advection. Our approach utilizes the implicit formula associated with solutions of Burgers'
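For context, a plain semi-Lagrangian advection step on a 1D periodic grid is sketched below: each grid point is traced backward along the velocity and the transported quantity is interpolated at the departure point. The paper's Backward Semi-Lagrangian method adds an accuracy correction derived from Burgers' equation, which this toy omits.

```python
import numpy as np

def semi_lagrangian_advect(q, u, dt, dx):
    """Semi-Lagrangian advection of quantity q by velocity u on a
    1D periodic grid: backtrace, then linearly interpolate."""
    n = q.shape[0]
    x = np.arange(n) * dx
    xb = (x - u * dt) % (n * dx)          # backtraced departure points
    i0 = np.floor(xb / dx).astype(int) % n
    i1 = (i0 + 1) % n
    f = (xb / dx) - np.floor(xb / dx)     # fractional cell offset
    return (1 - f) * q[i0] + f * q[i1]
```

With a constant velocity this reproduces a pure shift of the profile, up to linear-interpolation error.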

  • LIMP: Learning Latent Shape Representations with Metric Preservation Priors
    arXiv.cs.GR Pub Date : 2020-03-27
    Luca Cosmo; Antonio Norelli; Oshri Halimi; Ron Kimmel; Emanuele Rodolà

    In this paper, we advocate the adoption of metric preservation as a powerful prior for learning latent representations of deformable 3D shapes. Key to our construction is the introduction of a geometric distortion criterion, defined directly on the decoded shapes, translating the preservation of the metric on the decoding to the formation of linear paths in the underlying latent space. Our rationale

  • Visual Indeterminacy in GAN Art
    arXiv.cs.GR Pub Date : 2019-10-10
    Aaron Hertzmann

    This paper explores visual indeterminacy as a description for artwork created with Generative Adversarial Networks (GANs). Visual indeterminacy describes images which appear to depict real scenes, but, on closer examination, defy coherent spatial interpretation. GAN models seem to be predisposed to producing indeterminate images, and indeterminacy is a key feature of much modern representational art

  • Automatic Modelling of Human Musculoskeletal Ligaments -- Framework Overview and Model Quality Evaluation
    arXiv.cs.GR Pub Date : 2020-03-24
    Noura Hamze; Lukas Nocker; Nikolaus Rauch; Markus Walzthöni; Fabio Carrillo; Philipp Fürnstahl; Matthias Harders

    Accurate segmentation of connective soft tissues is still a challenging task, which hinders the generation of corresponding geometric models for biomechanical computations. Alternatively, one could predict ligament insertion sites and then approximate the shapes, based on anatomical knowledge and morphological studies. Here, we describe a corresponding integrated framework for the automatic modelling

  • Deformable Style Transfer
    arXiv.cs.GR Pub Date : 2020-03-24
    Sunnie S. Y. Kim; Nicholas Kolkin; Jason Salavon; Gregory Shakhnarovich

    Geometry and shape are fundamental aspects of visual style. Existing style transfer methods focus on texture-like components of style, ignoring geometry. We propose deformable style transfer (DST), an optimization-based approach that integrates texture and geometry style transfer. Our method is the first to allow geometry-aware stylization not restricted to any domain and not requiring training sets

  • Global Illumination of non-Euclidean spaces
    arXiv.cs.GR Pub Date : 2020-03-24
    Tiago Novello; Vinicius da Silva; Luiz Velho

    This paper presents a path tracer algorithm to compute the global illumination of non-Euclidean manifolds. We use the 3D torus as an example.
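In the flat 3-torus used as the paper's example, geodesics are straight lines with coordinates taken modulo the fundamental domain, so a ray marcher only needs a wraparound step. A toy sketch (not the paper's path tracer):

```python
import numpy as np

def march_ray_on_torus(origin, direction, step, n_steps, box=1.0):
    """March a ray in the flat 3-torus [0, box)^3: straight-line motion
    with coordinates wrapped modulo the box, so a ray exiting one face
    re-enters through the opposite face (a geodesic of the flat metric)."""
    p = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    path = [p.copy()]
    for _ in range(n_steps):
        p = (p + step * d) % box
        path.append(p.copy())
    return np.array(path)
```

Because rays wrap around, a single light source illuminates the scene through infinitely many "copies" of itself, which is what makes global illumination in such spaces visually striking.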

  • Virtual reality for 3D histology: multi-scale visualization of organs with interactive feature exploration
    arXiv.cs.GR Pub Date : 2020-03-24
    Kaisa Liimatainen; Leena Latonen; Masi Valkonen; Kimmo Kartasalo; Pekka Ruusuvuori

    Virtual reality (VR) enables data visualization in an immersive and engaging manner, and it can be used for creating ways to explore scientific data. Here, we use VR for visualization of 3D histology data, creating a novel interface for digital pathology. Our contribution includes 3D modeling of a whole organ and embedded objects of interest, fusing the models with associated quantitative features

  • DeepDeform: Learning Non-rigid RGB-D Reconstruction with Semi-supervised Data
    arXiv.cs.GR Pub Date : 2019-12-09
    Aljaž Božič; Michael Zollhöfer; Christian Theobalt; Matthias Nießner

    Applying data-driven approaches to non-rigid 3D reconstruction has been difficult, which we believe can be attributed to the lack of a large-scale training corpus. One recent approach proposes self-supervision based on non-rigid reconstruction. Unfortunately, this method fails for important cases such as highly non-rigid deformations. We first address this problem of lack of data by introducing a novel semi-supervised strategy to obtain dense inter-frame correspondences from a

  • Volumetric density-equalizing reference map with applications
    arXiv.cs.GR Pub Date : 2020-03-21
    Gary P. T. Choi; Chris H. Rycroft

    The density-equalizing map, a technique developed for cartogram creation, has been widely applied to data visualization but only for 2D applications. In this work, we propose a novel method called the volumetric density-equalizing reference map (VDERM) for computing density-equalizing map for volumetric domains. Given a prescribed density distribution in a volumetric domain in $\mathbb{R}^3$, the proposed

  • Rig-space Neural Rendering
    arXiv.cs.GR Pub Date : 2020-03-22
    Dominik Borer; Lu Yuhang; Laura Wuelfroth; Jakob Buhmann; Martin Guay

    Movie productions use high resolution 3d characters with complex proprietary rigs to create the highest quality images possible for large displays. Unfortunately, these 3d assets are typically not compatible with real-time graphics engines used for games, mixed reality and real-time pre-visualization. Consequently, the 3d characters need to be re-modeled and re-rigged for these new applications, requiring

  • Universal Differentiable Renderer for Implicit Neural Representations
    arXiv.cs.GR Pub Date : 2020-03-22
    Lior Yariv; Matan Atzmon; Yaron Lipman

    The goal of this work is to learn an implicit 3D shape representation with 2D supervision (i.e., a collection of images). To that end we introduce the Universal Differentiable Renderer (UDR), a neural network architecture that can provably approximate reflected light from an implicit neural representation of a 3D surface, under a wide set of reflectance properties and lighting conditions. Experimenting

  • Neural Contours: Learning to Draw Lines from 3D Shapes
    arXiv.cs.GR Pub Date : 2020-03-23
    Difan Liu; Mohamed Nabail; Aaron Hertzmann; Evangelos Kalogerakis

    This paper introduces a method for learning to generate line drawings from 3D models. Our architecture incorporates a differentiable module operating on geometric features of the 3D model, and an image-based module operating on view-based shape representations. At test time, geometric and view-based reasoning are combined with the help of a neural module to create a line drawing. The model is trained

  • PT2PC: Learning to Generate 3D Point Cloud Shapes from Part Tree Conditions
    arXiv.cs.GR Pub Date : 2020-03-19
    Kaichun Mo; He Wang; Xinchen Yan; Leonidas J. Guibas

    3D generative shape modeling is a fundamental research area in computer vision and interactive computer graphics, with many real-world applications. This paper investigates the novel problem of generating 3D shape point cloud geometry from a symbolic part tree representation. In order to learn such a conditional shape generation procedure in an end-to-end fashion, we propose a conditional GAN "part

  • Latent Space Subdivision: Stable and Controllable Time Predictions for Fluid Flow
    arXiv.cs.GR Pub Date : 2020-03-12
    Steffen Wiewel; Byungsoo Kim; Vinicius C. Azevedo; Barbara Solenthaler; Nils Thuerey

    We propose an end-to-end trained neural network architecture to robustly predict the complex dynamics of fluid flows with high temporal stability. We focus on single-phase smoke simulations in 2D and 3D based on the incompressible Navier-Stokes (NS) equations, which are relevant for a wide range of practical problems. To achieve stable predictions for long-term flow sequences, a convolutional neural

  • A Compact Spectral Descriptor for Shape Deformations
    arXiv.cs.GR Pub Date : 2020-03-10
    Skylar Sible; Rodrigo Iza-Teran; Jochen Garcke; Nikola Aulig; Patricia Wollstadt

    Modern product design in the engineering domain is increasingly driven by computational analysis including finite-element based simulation, computational optimization, and modern data analysis techniques such as machine learning. To apply these methods, suitable data representations for components under development as well as for related design criteria have to be found. While a component's geometry

  • NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
    arXiv.cs.GR Pub Date : 2020-03-19
    Ben Mildenhall; Pratul P. Srinivasan; Matthew Tancik; Jonathan T. Barron; Ravi Ramamoorthi; Ren Ng

    We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\theta
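The rendering side of such volumetric scene functions is standard emission-absorption quadrature: for per-sample densities sigma_i and colors c_i along a ray, alpha_i = 1 - exp(-sigma_i * delta_i), and the pixel color is the transmittance-weighted sum of sample colors. A minimal sketch with toy inputs (the network that produces sigma and color is not shown):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Volume-rendering quadrature: alpha_i = 1 - exp(-sigma_i * delta_i),
    transmittance T_i = prod_{j<i} (1 - alpha_j),
    pixel color = sum_i T_i * alpha_i * c_i."""
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return weights @ colors, weights

# toy ray: one nearly opaque red sample behind two empty samples
sigmas = np.array([0.0, 0.0, 50.0])
colors = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
deltas = np.array([0.1, 0.1, 0.1])
rgb, w = composite_ray(sigmas, colors, deltas)
```

Because every step is differentiable, the same quadrature lets gradients from image-space losses flow back into the scene function during optimization.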

  • Neural Cages for Detail-Preserving 3D Deformations
    arXiv.cs.GR Pub Date : 2019-12-13
    Wang Yifan; Noam Aigerman; Vladimir G. Kim; Siddhartha Chaudhuri; Olga Sorkine-Hornung

    We propose a novel learnable representation for detail-preserving shape deformation. The goal of our method is to warp a source shape to match the general structure of a target shape, while preserving the surface details of the source. Our method extends a traditional cage-based deformation technique, where the source shape is enclosed by a coarse control mesh termed \emph{cage}, and translations prescribed

  • Deep Parametric Shape Predictions using Distance Fields
    arXiv.cs.GR Pub Date : 2019-04-18
    Dmitriy Smirnov; Matthew Fisher; Vladimir G. Kim; Richard Zhang; Justin Solomon

    Many tasks in graphics and vision demand machinery for converting shapes into consistent representations with sparse sets of parameters; these representations facilitate rendering, editing, and storage. When the source data is noisy or ambiguous, however, artists and engineers often manually construct such representations, a tedious and potentially time-consuming process. While advances in deep learning

  • Inferring the Material Properties of Granular Media for Robotic Tasks
    arXiv.cs.GR Pub Date : 2020-03-18
    Carolyn Matl; Yashraj Narang; Ruzena Bajcsy; Fabio Ramos; Dieter Fox

    Granular media (e.g., cereal grains, plastic resin pellets, and pills) are ubiquitous in robotics-integrated industries, such as agriculture, manufacturing, and pharmaceutical development. This prevalence mandates the accurate and efficient simulation of these materials. This work presents a software and hardware framework that automatically calibrates a fast physics simulator to accurately simulate

  • Rotate-and-Render: Unsupervised Photorealistic Face Rotation from Single-View Images
    arXiv.cs.GR Pub Date : 2020-03-18
    Hang Zhou; Jihao Liu; Ziwei Liu; Yu Liu; Xiaogang Wang

    Though face rotation has achieved rapid progress in recent years, the lack of high-quality paired training data remains a great hurdle for existing methods. The current generative models heavily rely on datasets with multi-view images of the same person. Thus, their generated results are restricted by the scale and domain of the data source. To overcome these challenges, we propose a novel unsupervised

  • Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination
    arXiv.cs.GR Pub Date : 2020-03-18
    Pratul P. Srinivasan; Ben Mildenhall; Matthew Tancik; Jonathan T. Barron; Richard Tucker; Noah Snavely

    We present a deep learning solution for estimating the incident illumination at any 3D location within a scene from an input narrow-baseline stereo image pair. Previous approaches for predicting global illumination from images either predict just a single illumination for the entire scene, or separately estimate the illumination at each 3D location without enforcing that the predictions are consistent

  • Real-time Image Smoothing via Iterative Least Squares
    arXiv.cs.GR Pub Date : 2020-03-17
    Wei Liu; Pingping Zhang; Xiaolin Huang; Jie Yang; Chunhua Shen; Ian Reid

    Edge-preserving image smoothing is a fundamental procedure for many computer vision and graphics applications. There is a tradeoff between smoothing quality and processing speed: high smoothing quality usually requires a high computational cost, which leads to low processing speed. In this paper, we propose a new global optimization based method, named iterative least squares (ILS), for
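A toy instance of globally optimized least-squares smoothing in 1D, solved by Jacobi iterations on the normal equations. The paper's ILS operates on full images with edge-preserving weights, which this sketch omits.

```python
import numpy as np

def smooth_ls(f, lam=2.0, iters=200):
    """Iteratively solve min_u sum_i (u_i - f_i)^2 + lam * sum_i (u_{i+1} - u_i)^2
    via Jacobi updates of the normal equations:
    interior: u_i * (1 + 2*lam) = f_i + lam * (u_{i-1} + u_{i+1})."""
    u = f.copy()
    for _ in range(iters):
        left = np.roll(u, 1)
        right = np.roll(u, -1)
        u_new = (f + lam * (left + right)) / (1 + 2 * lam)
        u_new[0] = (f[0] + lam * u[1]) / (1 + lam)      # one-sided boundary
        u_new[-1] = (f[-1] + lam * u[-2]) / (1 + lam)
        u = u_new
    return u
```

The point of the iterative formulation is that each sweep is cheap and parallel, trading a single expensive global solve for many fast local updates.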

  • Deep Vectorization of Technical Drawings
    arXiv.cs.GR Pub Date : 2020-03-11
    Vage Egiazarian; Oleg Voynov; Alexey Artemov; Denis Volkhonskiy; Aleksandr Safin; Maria Taktasheva; Denis Zorin; Evgeny Burnaev

    We present a new method for vectorization of technical line drawings, such as floor plans, architectural drawings, and 2D CAD images. Our method includes (1) a deep learning-based cleaning stage to eliminate the background and imperfections in the image and fill in missing parts, (2) a transformer-based network to estimate vector primitives, and (3) optimization procedure to obtain the final primitive

  • Fusion-Aware Point Convolution for Online Semantic 3D Scene Segmentation
    arXiv.cs.GR Pub Date : 2020-03-13
    Jiazhao Zhang; Chenyang Zhu; Lintao Zheng; Kai Xu

    Online semantic 3D segmentation in company with real-time RGB-D reconstruction poses special challenges such as how to perform 3D convolution directly over the progressively fused 3D geometric data, and how to smartly fuse information from frame to frame. We propose a novel fusion-aware 3D point convolution which operates directly on the geometric surface being reconstructed and exploits effectively

  • Geodesic Distance Field-based Curved Layer Volume Decomposition for Multi-Axis Support-free Printing
    arXiv.cs.GR Pub Date : 2020-03-12
    Yamin Li; Dong He; Xiangyu Wang; Kai Tang

    This paper presents a new curved layer volume decomposition method for multi-axis support-free printing of freeform solid parts. Given a solid model to be printed that is represented as a tetrahedral mesh, we first establish a geodesic distance field embedded on the mesh, whose value at any vertex is the geodesic distance to the base of the model. Next, the model is naturally decomposed into curved
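On a mesh's edge graph, a geodesic distance field from the base can be approximated with multi-source Dijkstra, and curved layers then fall out as iso-intervals of the field. An illustrative discrete sketch (the paper works on a tetrahedral mesh; graph and layer height below are made up):

```python
import heapq

def distance_field(adj, base):
    """Multi-source Dijkstra over a mesh edge graph.
    adj: {vertex: [(neighbor, edge_length), ...]}; base: iterable of
    base vertices (distance 0). Returns {vertex: distance}."""
    dist = {v: float('inf') for v in adj}
    heap = []
    for b in base:
        dist[b] = 0.0
        heapq.heappush(heap, (0.0, b))
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue  # stale heap entry
        for nb, w in adj[v]:
            nd = d + w
            if nd < dist[nb]:
                dist[nb] = nd
                heapq.heappush(heap, (nd, nb))
    return dist

def layer_of(dist, v, layer_height):
    """Assign a vertex to a curved layer by binning its field value."""
    return int(dist[v] // layer_height)
```

Binning the field this way yields layers that follow the model's intrinsic geometry rather than flat planes, which is what enables support-free multi-axis printing.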

  • PoseNet3D: Unsupervised 3D Human Shape and Pose Estimation
    arXiv.cs.GR Pub Date : 2020-03-07
    Shashank Tripathi; Siddhant Ranade; Ambrish Tyagi; Amit Agrawal

    Recovering 3D human pose from 2D joints is a highly unconstrained problem. We propose a novel neural network framework, PoseNet3D, that takes 2D joints as input and outputs 3D skeletons and SMPL body model parameters. By casting our learning approach in a student-teacher framework, we avoid using any 3D data such as paired/unpaired 3D data, motion capture sequences, depth images or multi-view images

  • STD-Net: Structure-preserving and Topology-adaptive Deformation Network for 3D Reconstruction from a Single Image
    arXiv.cs.GR Pub Date : 2020-03-07
    Aihua Mao; Canglan Dai; Lin Gao; Ying He; Yong-jin Liu

    3D reconstruction from a single view image is a long-standing problem in computer vision. Various methods based on different shape representations (such as point cloud or volumetric representations) have been proposed. However, 3D shape reconstruction with fine details and complex structures is still challenging and has not yet been solved. Thanks to the recent advance of deep shape representations

  • Style-compatible Object Recommendation for Multi-room Indoor Scene Synthesis
    arXiv.cs.GR Pub Date : 2020-03-09
    Yu He; Yun Cai; Yuan-Chen Guo; Zheng-Ning Liu; Shao-Kui Zhang; Song-Hai Zhang; Hong-Bo Fu; Sheng-Yong Chen

    Traditional indoor scene synthesis methods often take a two-step approach: object selection and object arrangement. Current state-of-the-art object selection approaches are based on convolutional neural networks (CNNs) and can produce realistic scenes for a single room. However, they cannot be directly extended to synthesize style-compatible scenes for multiple rooms with different

  • DeLTra: Deep Light Transport for Projector-Camera Systems
    arXiv.cs.GR Pub Date : 2020-03-06
    Bingyao Huang; Haibin Ling

    In projector-camera systems, light transport models the propagation from projector-emitted radiance to camera-captured irradiance. In this paper, we propose the first end-to-end trainable solution, named Deep Light Transport (DeLTra), that estimates radiometrically uncalibrated projector-camera light transport. DeLTra is designed to have two modules: DepthToAttribute and ShadingNet. DepthToAttribute

  • Optimizing JPEG Quantization for Classification Networks
    arXiv.cs.GR Pub Date : 2020-03-05
    Zhijing Li; Christopher De Sa; Adrian Sampson

    Deep learning for computer vision depends on lossy image compression: it reduces the storage required for training and test data and lowers transfer costs in deployment. Mainstream datasets and imaging pipelines all rely on standard JPEG compression. In JPEG, the degree of quantization of frequency coefficients controls the lossiness: an 8 by 8 quantization table (Q-table) decides both the quality
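The lossy step the abstract refers to is concrete: each 8x8 block of DCT coefficients is divided elementwise by the Q-table and rounded, so larger table entries discard more information. A minimal sketch using the example luminance table from Annex K of the JPEG standard:

```python
import numpy as np

# Example luminance quantization table from the JPEG standard (Annex K).
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(dct_block, q_table):
    """Round each 8x8 DCT coefficient to the nearest multiple of its
    Q-table entry; this is where JPEG discards information."""
    return np.round(dct_block / q_table).astype(int)

def dequantize(levels, q_table):
    """Recover approximate coefficients from the stored integer levels."""
    return levels * q_table
```

Optimizing the 64 entries of the Q-table for a downstream classifier, rather than for human perception, is the paper's subject; the mechanics above stay the same.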

  • A Hybrid Lagrangian-Eulerian Method for Topology Optimization
    arXiv.cs.GR Pub Date : 2020-03-02
    Yue Li; Xuan Li; Minchen Li; Yixin Zhu; Bo Zhu; Chenfanfu Jiang

    We propose LETO, a new hybrid Lagrangian-Eulerian method for topology optimization. At the heart of LETO lies a hybrid particle-grid Material Point Method (MPM) to solve for elastic force equilibrium. LETO transfers density information from freely movable Lagrangian carrier particles to a fixed set of Eulerian quadrature points. The quadrature points act as MPM particles embedded in a lower-resolution

  • PF-Net: Point Fractal Network for 3D Point Cloud Completion
    arXiv.cs.GR Pub Date : 2020-03-01
    Zitian Huang; Yikuan Yu; Jiawen Xu; Feng Ni; Xinyi Le

    In this paper, we propose a Point Fractal Network (PF-Net), a novel learning-based approach for precise and high-fidelity point cloud completion. Unlike existing point cloud completion networks, which generate the overall shape of the point cloud from the incomplete point cloud and always change existing points and encounter noise and geometrical loss, PF-Net preserves the spatial arrangements of the

  • Characterisation of rational and NURBS developable surfaces in Computer Aided Design
    arXiv.cs.GR Pub Date : 2020-03-02
    Leonardo Fernandez-Jambrina

    In this paper we provide a characterisation of rational developable surfaces in terms of the blossoms of the bounding curves and three rational functions $\Lambda$, $M$, $\nu$. Properties of developable surfaces are revised in this framework. In particular, a closed algebraic formula for the edge of regression of the surface is obtained in terms of the functions $\Lambda$, $M$, $\nu$, which are closely
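For reference, the classical characterisation that the blossom-based conditions refine: a ruled patch $x(t,v)=(1-v)\,a(t)+v\,b(t)$ bounded by curves $a$ and $b$ is developable exactly when the tangent plane is constant along each ruling, i.e.

```latex
\det\bigl(\, b(t)-a(t),\; a'(t),\; b'(t) \,\bigr) = 0 \qquad \text{for all } t .
```

The paper's contribution is to express this condition on rational Bézier/NURBS data through blossoms and the auxiliary functions $\Lambda$, $M$, $\nu$.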

  • Parallelizable global conformal parameterization of simply-connected surfaces via partial welding
    arXiv.cs.GR Pub Date : 2019-03-29
    Gary P. T. Choi; Yusan Leung-Liu; Xianfeng Gu; Lok Ming Lui

    Conformal surface parameterization is useful in graphics, imaging and visualization, with applications to texture mapping, atlas construction, registration, remeshing and so on. With the increasing capability in scanning and storing data, dense 3D surface meshes are common nowadays. While meshes with higher resolution better resemble smooth surfaces, they pose computational difficulties for the existing

  • ICE: An Interactive Configuration Explorer for High Dimensional Categorical Parameter Spaces
    arXiv.cs.GR Pub Date : 2019-07-29
    Anjul Tyagi; Zhen Cao; Tyler Estro; Erez Zadok; Klaus Mueller

    There are many applications where users seek to explore the impact of the settings of several categorical variables with respect to one dependent numerical variable. For example, a computer systems analyst might want to study how the type of file system or storage device affects system performance. A usual choice is the method of Parallel Sets designed to visualize multivariate categorical variables

  • MINA: Convex Mixed-Integer Programming for Non-Rigid Shape Alignment
    arXiv.cs.GR Pub Date : 2020-02-28
    Florian Bernard; Zeeshan Khan Suri; Christian Theobalt

    We present a convex mixed-integer programming formulation for non-rigid shape matching. To this end, we propose a novel shape deformation model based on an efficient low-dimensional discrete model, so that finding a globally optimal solution is tractable in (most) practical cases. Our approach combines several favourable properties: it is independent of the initialisation, it is much more efficient

  • Realtime Simulation of Thin-Shell Deformable Materials using CNN-Based Mesh Embedding
    arXiv.cs.GR Pub Date : 2019-09-26
    Qingyang Tan; Zherong Pan; Lin Gao; Dinesh Manocha

    We address the problem of accelerating thin-shell deformable object simulations by dimension reduction. We present a new algorithm to embed a high-dimensional configuration space of deformable objects in a low-dimensional feature space, where the configurations of objects and feature points have approximate one-to-one mapping. Our key technique is a graph-based convolutional neural network (CNN) defined

  • Learning to Shade Hand-drawn Sketches
    arXiv.cs.GR Pub Date : 2020-02-26
    Qingyuan Zheng; Zhuoru Li; Adam Bargteil

    We present a fully automatic method to generate detailed and accurate artistic shadows from pairs of line drawing sketches and lighting directions. We also contribute a new dataset of one thousand examples of pairs of line drawings and shadows that are tagged with lighting directions. Remarkably, the generated shadows quickly communicate the underlying 3D structure of the sketched scene. Consequently

Contents have been reproduced by permission of the publishers.