Computers & Graphics

Volume 91, October 2020, Pages 95-107

Technical Section
Content-aware texture deformation with dynamic control

https://doi.org/10.1016/j.cag.2020.07.006

Highlights

  • Dynamic textures improve the appearance of 3D virtual objects across time.

  • Textures are deformed in real-time on GPU without heavy simulation.

  • The deformation is controlled by the geometry or by external factors.

  • The deformation is finely defined at texel scale.

  • Complex thin structures are correctly stretched and sampled.

Abstract

Textures improve the appearance of virtual scenes by mapping visual details on the surface of 3D objects. Various scenarios – such as real-time animation, interactive texture modelling, or offline post-production – require textures to be deformed in a controllable and plausible manner. We propose a novel approach to model and control texture deformations, which is easy to implement in a standard graphics pipeline. The deformation is implemented at pixel resolution as a warping in the parametric domain. The warping is controlled locally and dynamically by real-time integration along the streamlines of a pre-computed flow field. We propose a technique to pre-compute the flow field from a simple scalar map representing heterogeneous dynamic behaviors. Moreover, to manage sampling issues arising in over-stretched areas during deformation, we provide a mechanism based on re-sampling and texture synthesis. Warping may alternatively be controlled by deformation of the underlying surface, environment parameters or interactive editing, which demonstrates the versatility of our approach.

Introduction

Textures are a common way to enhance the realism of virtual scenes. They increase user immersion by approximating the complex light-matter interaction caused by micro- and meso-scale surface details, which would otherwise be too costly in memory or computation to simulate explicitly. Multiple texture layers enable representing rich information about surface material and fine geometry, including color (albedo), normal, displacement, shininess, etc. Mapping textures onto the surface of 3D objects yields convincing surface appearance for static scenes.

However, texturing is much more challenging when animation or time-dependent effects are involved. Besides natural phenomena, such as ageing, weathering, or drying effects, animated objects often imply evolving visual patterns. Fluids show weakly structured patterns that move, appear, disappear, or merge. By contrast, solids show complex patterns that are distorted, stretched or sheared. Modelling such dynamic behaviours involves the texture, the (possibly animated) geometry, and the mapping/parameterization. While these three aspects have received a lot of attention as separate topics, their proper interplay remains a largely open problem.

We introduce a new model for real-time texture deformation, represented as a warping of the parametric domain onto itself. The novelty of our approach is to define the warping as the advection of the parametric domain in a flow field. This field is pre-computed and static. Dynamics is introduced by per-pixel integration time-steps, which makes it possible to control the deformation locally. Our model comes with the following benefits:

  • The deformation is content-aware. This is achieved because everything is computed per pixel in the parametric domain, using a vector field derived from the texture content itself. For instance, in Fig. 1, the flowers are less deformed than the stretchable denim.

  • The model is versatile. It spans multiple visual effects, such as non-homogeneous elasticity and feature shrinkage (see Fig. 1). This is possible because we do not rely on a specialized physical simulation.

  • The deformation is controlled locally and dynamically. This enables our model to be used within any interactive animation or editing framework. We show different scenarios, including automatic guidance from the local geometric deformation of the underlying surface, control through environment parameters, or interactive texture editing under the user's control.

  • Implementation within the standard rendering pipeline is straightforward and efficient. At each time step, all texels update their warping in parallel with a trivial integration scheme, while the rendering relies on standard interpolation and MIP-mapping techniques (a minimal sketch of this update follows this list).

  • Unpleasant visual distortions due to extreme local deformations are avoided by combining a re-sampling technique with local detail synthesis, pre-computed and stored in a texture stack. We can thus compute a plausible appearance for thin connected structures, such as cracks or joints in cellular patterns.
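
As a minimal sketch of that parallel per-texel update (a CPU version in NumPy rather than the paper's GPU implementation; the array names `warp`, `flow` and `dmu` are hypothetical, and forward Euler is only one example of a trivial integration scheme):

```python
# Minimal CPU sketch of the per-texel warping update.
# warp : (H, W, 2) current warped coordinates, in texel units, one per texel
# flow : (H, W, 2) pre-computed, static flow field over the parametric domain
# dmu  : (H, W)    per-texel integration time-steps controlling the deformation
import numpy as np

def bilinear_sample(field, uv):
    """Sample an (H, W, 2) field at continuous texel coordinates uv (H, W, 2)."""
    h, w = field.shape[:2]
    x = np.clip(uv[..., 0], 0.0, w - 1.001)
    y = np.clip(uv[..., 1], 0.0, h - 1.001)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = (x - x0)[..., None], (y - y0)[..., None]
    return (field[y0, x0] * (1 - fx) * (1 - fy) + field[y0, x0 + 1] * fx * (1 - fy)
            + field[y0 + 1, x0] * (1 - fx) * fy + field[y0 + 1, x0 + 1] * fx * fy)

def advect_warping(warp, flow, dmu):
    """One explicit Euler step: each texel advances its warped coordinate
    along the static flow field by its own time-step dmu."""
    velocity = bilinear_sample(flow, warp)    # flow evaluated at the current warped position
    return warp + dmu[..., None] * velocity   # per-texel integration, fully parallel
```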

Defining a flow field manually would be tedious and non-intuitive. We provide a technique to derive the field from a scalar map. Such a scalar map, which is easier and more intuitive for an artist to draw, may represent, for instance, the expected texel rigidity in an elastic deformation scenario.

Our texture stack pre-integrates a warping with constant time-steps and re-samples the texture accordingly. Texture details are reintroduced in over-stretched regions by a state-of-the-art synthesis algorithm. At run-time, while the warping evolves, we keep track of the deformation magnitude at each texel so as to determine the most appropriate stack level to fetch.
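
As an illustration of that run-time choice, here is a minimal sketch assuming the stack pre-integrates the warping with a constant step `mu_step`, and that an accumulated per-texel deformation magnitude `accumulated_mu` is tracked (both names are hypothetical and not taken from the paper):

```python
# Hypothetical sketch of run-time stack-level selection.
import numpy as np

def stack_level(accumulated_mu, mu_step, num_levels):
    """Pick the pre-integrated stack level closest to the current per-texel deformation."""
    level = np.rint(accumulated_mu / mu_step).astype(int)
    return np.clip(level, 0, num_levels - 1)
```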

In addition to the usual material texture layers, the simplest version of our model (i.e. without stack) only requires one additional map to store both the flow field and the warping. The final rendering then requires only one texture indirection to determine the actual texture coordinates from the warping, making the whole process easy to integrate into the graphics pipeline.
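
For illustration, the final lookup can be sketched as follows, with `sample` standing for any standard filtered texture fetch and `warp_map`, `color_tex` as hypothetical names for the warping map and a material layer:

```python
# Minimal sketch of the single texture indirection at render time.
def shade(uv, warp_map, color_tex, sample):
    warped_uv = sample(warp_map, uv)      # one indirection: fetch warped coordinates
    return sample(color_tex, warped_uv)   # standard filtered fetch of the material layer
```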

Section snippets

Image retargeting

Image retargeting is an editing approach that aims at producing a re-scaled version of an input image while preserving its most prominent features (which can be either detected automatically or specified by the user) by removing, duplicating or displacing some existing content [1], [2], [3]. It has been used, among other things, for the synthesis of architectural scenes [4], [5], where features of facades can be easily identified and extracted for editing purposes.

The goal of retargeting is

Motivation

Let $S$ be a surface, and $X$ its embedding into 3D, defined as: $X : p \mapsto x$, where $p \in S$ and $x \in \mathbb{R}^3$ is a 3D position. When $S$ is animated, its 3D geometry $X_t(S)$ depends on the time $t$.

Standard 2D texture mapping is defined through a composition: $p \mapsto u = U(p) \mapsto \mathrm{Color}(p) = C(u)$, where $U$ is the parameterization, $u$ is a 2D parametric coordinate, and $C$ is the texture, stored as an image, which maps the parametric domain to a color. Usually, $U$ and $C$ are pre-computed according to $X$. $U$ is thus most often time invariant. The texture then
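
Based on the introduction's description of the deformation as a warping of the parametric domain onto itself, the time-dependent lookup can plausibly be written as follows, where the warping $W_t$ and the parametric domain $\Omega$ are notation introduced here only for illustration:

$$\mathrm{Color}_t(p) = C\big(W_t(U(p))\big), \qquad W_t : \Omega \to \Omega.$$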

Flow field generation

Designing an appropriate flow field $v$ is a delicate task. In this section, we propose a technique that spares the user from painstakingly drawing a vector field by hand. Instead, the user provides a scalar map $R$ from which the flow field is computed automatically by solving the equation: $\mathrm{div}(v) = R$.

The intuition is driven by an analogy with fluids. Regions with positive divergence (e.g. sources) repel the parameterization, so that the texture shrinks when advected. Conversely, a negative divergence (e.g.
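
The solver used by the paper is not shown in this snippet; one standard way to obtain a field satisfying $\mathrm{div}(v) = R$ is to take $v = \nabla\phi$ with $\Delta\phi = R$, as in the following sketch, which assumes a periodic parametric domain so the Poisson equation can be solved by FFT:

```python
# Hedged sketch: build a flow field v with div(v) = R by solving a Poisson equation.
# Assumes a periodic parametric domain; the paper's actual solver and boundary
# handling are not given in this snippet.
import numpy as np

def flow_from_scalar_map(R):
    """Return a flow field v of shape (H, W, 2) whose divergence matches the scalar map R."""
    h, w = R.shape
    R = R - R.mean()                      # zero-mean right-hand side: solvable on a torus
    ky = 2.0 * np.pi * np.fft.fftfreq(h)
    kx = 2.0 * np.pi * np.fft.fftfreq(w)
    KX, KY = np.meshgrid(kx, ky)          # (H, W) angular frequencies
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                        # avoid division by zero for the DC term
    phi_hat = -np.fft.fft2(R) / k2        # laplacian(phi) = R  <=>  -|k|^2 phi_hat = R_hat
    phi_hat[0, 0] = 0.0
    vx = np.real(np.fft.ifft2(1j * KX * phi_hat))   # v = grad(phi), computed spectrally
    vy = np.real(np.fft.ifft2(1j * KY * phi_hat))
    return np.stack([vx, vy], axis=-1)
```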

Warping control

The advection – and thus the deformation – is controlled by the integration time-steps $d\mu$, which are defined per pixel. This allows for local and dynamic control. We propose several ways to control texture deformation, motivated by various applications, and discuss results. In an animation scenario, we compute the time-steps from the underlying geometry. In an editing scenario, we define the time-steps by a simple brush. Lastly, we illustrate control by environment parameters.
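
The exact formula used in the animation scenario is not reproduced in this snippet; as one plausible illustration, the per-texel time-step could be driven by the change of local surface stretch between consecutive frames (the maps `stretch_prev` and `stretch_curr`, rasterized into the parametric domain, are hypothetical):

```python
# Illustrative sketch (not the paper's exact formula): per-texel time-steps from
# the frame-to-frame change of the local surface stretch of the animated geometry.
import numpy as np

def time_steps_from_geometry(stretch_prev, stretch_curr, gain=1.0, eps=1e-6):
    """Larger steps where the surface stretch changes more; the sign convention
    depends on how the flow field was oriented (sources vs. sinks)."""
    ratio = stretch_curr / np.maximum(stretch_prev, eps)
    return gain * np.log(ratio)
```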

Re-sampling and detail synthesis

The model presented so far behaves well for smooth deformations, which requires both $v$ and $d\mu$ to be smooth over the domain. This is acceptable for data such as the flowers in Fig. 2. However, we want to treat more challenging data, such as the mortar between the bricks in Fig. 6 or the cracks in Fig. 7. In these examples, thin features undergo extreme local deformations, which causes two problems: the sampling is irregular and the details are over-stretched. Our goal is to get the fourth row in

Results

We have shown many results throughout the paper involving color maps (Figs. 1–10). Fig. 12 shows two other textures mapped on an animated bouncing cube (see also the accompanying video). As can be seen, the texture deforms in a consistent manner, elastic texels being shrunk on geometry compression and stretched on extension. Our method is oblivious to the type of data stored in the texture: in Fig. 6 we present a texture with transparency; in Fig. 13 the deformation is also applied to a

Conclusion

We presented a novel model for content-aware texture deformation at texel resolution, based on the advection of the parametric domain in a pre-computed flow field. The deformation can be controlled locally and dynamically using various criteria, such as geometry, interactive editing, or environment parameters. Integration in the graphics pipeline is easy, and both memory and computation loads are kept low. We showed various examples, including challenging thin structures with strong

CRediT authorship contribution statement

Geoffrey Guingo: Software, Conceptualization, Methodology, Writing - review & editing. Frédéric Larue: Software, Writing - review & editing, Methodology, Validation. Basile Sauvage: Software, Conceptualization, Methodology. Nicolas Lutz: Software. Jean-Michel Dischler: Writing - review & editing, Funding acquisition, Supervision. Marie-Paule Cani: Writing - review & editing, Funding acquisition, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work has been partially funded by the project HDWorlds from the Agence Nationale de la Recherche (ANR-16-CE33-0001) and by the advanced grant 291184 EXPRESSIVE from the European Research Council (ERC-2011-ADG 20110209).

References (25)

  • S. Lefebvre et al. Parallel controllable texture synthesis. ACM Trans Graph (2005).
  • X. Pennec et al. A Riemannian framework for tensor computing. Int J Comput Vis (2006).
  • S. Avidan et al. Seam carving for content-aware image resizing. ACM Trans Graph (2007).
  • T.S. Cho et al. The patch transform and its applications to image editing. IEEE conference on computer vision and pattern recognition (2008).
  • Y. Pritch et al. Shift-map image editing. Proceedings of the IEEE international conference on computer vision (2009).
  • S. Lefebvre et al. By-example synthesis of architectural textures. ACM transactions on graphics (SIGGRAPH conference proceedings) (2010).
  • M. Cabral et al. Structure-preserving reshape for textured architectural scenes. Comput Graph Forum (2009).
  • A.W. Bargteil et al. A texture synthesis method for liquid animations. Proceedings of the 2006 ACM SIGGRAPH/Eurographics symposium on computer animation (2006).
  • V. Kwatra et al. Texturing fluids. IEEE Trans Vis Comput Graph (2007).
  • J. Gagnon et al. Distribution update of deformable patches for texture synthesis on the free surface of fluids. Comput Graph Forum (2019).
  • V. Kwatra et al. Texture optimization for example-based synthesis. ACM Trans Graph (2005).
  • Q. Yu et al. Lagrangian texture advection: preserving both spectrum and velocity field. IEEE Trans Vis Comput Graph (2011).