Computers & Graphics

Volume 90, August 2020, Pages 135-144
Technical Section
Predicting ready-made garment dressing fit for individuals based on highly reliable examples

https://doi.org/10.1016/j.cag.2020.06.002

Highlights

  • Highly reliable body-garment paired data are generated by digitizing the garment on a shape-changing robotic mannequin, physically simulating its virtual twin under the guidance of vision.

  • The paired data of a template garment are generalized to garments with similar styles by scaling the template body-garment ease allowance, i.e., the extra measurement added to the body measurement, for human bodies of different sizes.

  • The deformation characteristics of a garment template are transferred to everyday garments to predict their dressing effects on different persons by training a CNN-based network, which yields plausible results.

Abstract

Predicting the dressing fit of a ready-made garment for different individuals is a challenging problem in online garment sales. At present, the main solution is physics-based virtual garment simulation, which suffers from insufficient realism and high computational costs. In this study, we developed a novel example-based method that guarantees realistic and efficient garment dressing fit prediction in two ways. First, highly reliable examples were captured with the assistance of a robotic mannequin, thereby ensuring the authenticity of the sample data. Second, a mapping was established between the body-garment ease allowance and garment deformations using a convolutional neural network (CNN)-based network in order to address the problem of dressing fit prediction for ready-made garments on different individuals. The method can also be extended to predicting the dressing fit of ready-made garments with similar styles. Experiments showed that our virtual clothes try-on system achieved acceptable precision with a good sense of realism, and it can potentially be used in many applications such as three-dimensional garment design and online clothes retail.

Introduction

In general, garments are classified as ready-made or custom-made, and ready-made garments still dominate the clothing market. Before buying a ready-made garment, people usually need to try it on to check whether it fits well. Due to the rapid development of clothing e-commerce, online sales now account for almost two-thirds of ready-made garment sales [1]. However, because customers lack effective ways to perceive how a garment will fit them, the return rate is high, reaching up to 25% of online sales. Therefore, predicting the dressing fit of ready-made garments for individuals is an important but challenging problem in online garment sales.

One possible solution to this problem is physics-based virtual garment simulation [2], [3], which simulates the deformation of three-dimensional (3D) virtual garments on 3D avatars of customers. Great progress has been made since the first clothing simulation method was proposed in the late 1980s, but clothing simulation is still insufficiently realistic [4], [5]. This is mainly because the physical models of clothing used in simulation, such as the mass-spring model [6], position-based model [7], and finite element methods [8], [9], are simplifications of real clothing materials, thereby leading to differences between the micro- and macro-mechanical characteristics of virtual clothing and real clothing. Thus, the virtual simulation results usually deviate considerably from the real clothing deformation and cannot reflect the real dressing fit well.
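
To make that simplification concrete, the following is a minimal sketch of the spring force computation at the heart of a mass-spring cloth model, the first of the simplified physical models cited above; the stiffness constant, rest length, and two-particle setup are illustrative assumptions rather than material parameters for any real fabric.

```python
# Hooke's-law spring forces, the core of a mass-spring cloth model.
# Stiffness k and the toy geometry below are illustrative assumptions.
import numpy as np

def spring_forces(positions, springs, rest_lengths, k=500.0):
    """Accumulate the spring force acting on each particle."""
    forces = np.zeros_like(positions)
    for (i, j), rest in zip(springs, rest_lengths):
        d = positions[j] - positions[i]
        length = np.linalg.norm(d)
        if length < 1e-9:          # degenerate spring; skip
            continue
        f = k * (length - rest) * (d / length)  # pulls back toward rest length
        forces[i] += f
        forces[j] -= f
    return forces

# Two particles stretched past their rest length attract each other.
pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
print(spring_forces(pos, springs=[(0, 1)], rest_lengths=[1.0]))
```

A handful of per-spring scalars like these stand in for the full micro-mechanical behavior of woven fabric, which is exactly the source of the deviation described above.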

To improve the realism of clothing simulations, many studies have aimed to reconstruct the dynamic deformation of garments based on computer vision methods [10], [11]. These studies mainly focused on reconstructing the deformation of a garment on a moving human body. Using these data, the deformation of a garment caused by changes in human body posture can be modeled in a data-driven manner [12], [13].

Theoretically, it is possible to extend previous methods to reconstruct the dressing fit of a garment on various human bodies, but it is impractical to ask a huge number of people to try on a garment and scan them because of the high costs incurred.

Thus, in the present study, we used a shape-changing robotic mannequin [14] to imitate the shapes of various human bodies. Consequently, the dressing fit of a garment on various human bodies could be simulated and reconstructed by deforming the garment with the robotic mannequin. The mapping between the body-garment ease allowance and the deformations of a garment was trained using the reconstructed data. Therefore, even without physically trying on a garment, its dressing fit on a specific person can still be estimated. However, it is still uneconomical to reconstruct every garment on sale. To address this issue, we developed a new strategy for estimating the dressing fit of a class of garments based on the data captured from a template garment using the methods proposed in this study, as shown in Fig. 1. The main contributions of this study are summarized as follows:

  • A highly reliable body-garment paired data acquisition approach is proposed. By dressing a shape-changing robotic mannequin that simulates the shapes of various human bodies, the garment deformations are reconstructed based on physical simulations, where constraints on marker positions are computed by stereo vision.

  • Instead of mapping directly between the garment deformations and human body shapes, we obtain the mapping indirectly by training the relationship between the garment deformations and the body-garment ease allowance, thereby allowing good training results to be obtained with less data. The mapping is nonlinear rather than linear, as in most previously proposed approaches, in order to better fit the intrinsically nonlinear relationship between body shape and garment deformation (a minimal sketch of such a mapping follows this list).
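
As a concrete illustration of the second contribution, the sketch below regresses garment deformation from ease-allowance measurements. The paper trains a CNN-based network; here a small fully connected network in PyTorch stands in purely to show the nonlinear mapping, and the input/output sizes and random placeholder data are assumptions, not the paper's configuration.

```python
# Illustrative nonlinear regressor: ease allowance -> per-vertex displacement.
# A small MLP stands in for the paper's CNN-based network; sizes are assumed.
import torch
import torch.nn as nn

N_EASE = 8        # assumed number of ease-allowance measurements
N_VERTS = 5000    # assumed garment vertex count

model = nn.Sequential(
    nn.Linear(N_EASE, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_VERTS * 3),   # one 3D displacement per garment vertex
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

ease = torch.randn(64, N_EASE)          # placeholder training batch
target = torch.randn(64, N_VERTS * 3)   # placeholder captured deformations

for _ in range(10):                     # illustrative training loop
    opt.zero_grad()
    loss = loss_fn(model(ease), target)
    loss.backward()
    opt.step()
```

The ReLU layers are what give the regressor its nonlinearity; dropping them would reduce it to exactly the linear mapping the contribution argues against.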

The remainder of this paper is organized as follows. In Section 2, we review previous research into virtual try-on methods, 3D human modeling methods, and robotic mannequins. In Section 3, we explain the pipeline used in our study. After describing the position-constrained physical deformation of the template garment in Section 4, the method for mapping from body-garment ease allowance to garment states is explained in Section 5. The experimental results are presented in Section 6. We give our conclusions in Section 7.

Section snippets

Related work

Previous research can be roughly divided into three areas: virtual try-on systems using deformation-, example-, and image-based approaches; robotic mannequins for simulating the shapes of various human bodies; and 3D human modeling for reconstructing 3D human bodies from images or depth data.

Overview

In this study, we developed a highly reliable example-based dressing fit prediction method, which guarantees realistic and efficient garment dressing fit prediction by using highly reliable examples as well as an accurate mapping between the body-garment ease allowance and garment deformations. The pipeline employed in our work is shown in Fig. 1. Intuitively, we utilized the RGB [52] or depth [50] images of real human bodies as inputs to generate corresponding textured and detailed clothing meshes.
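
Because the pipeline hinges on the body-garment ease allowance, a short worked example of computing it may help; the girth names and centimeter values below are illustrative assumptions, not measurements from the paper.

```python
# Ease allowance: the extra measurement a garment adds over the body at
# each girth. Names and values are illustrative assumptions.
import numpy as np

girths = ["bust", "waist", "hip"]
body = np.array([88.0, 70.0, 94.0])      # body girths in cm (assumed)
garment = np.array([96.0, 78.0, 100.0])  # garment girths in cm (assumed)

ease = garment - body                    # body-garment ease allowance
for name, e in zip(girths, ease):
    print(f"{name}: {e:+.1f} cm of ease")
```

A positive value means the garment is larger than the body at that girth; this per-girth vector is the quantity the learned mapping consumes.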

Template garment modeling

In general, the performance of physics-based simulators is sensitive to the parameters, which are simplifications and approximations of fabric properties. Therefore, the simulation results may deviate from the real state in some cases. To address this problem, we obtain the real spatial positions of some feature points on the clothing in order to constrain the deformation of the physical simulator. The experimental results showed that our method could obtain a better representation of the real state of the garment.
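
The following is a minimal sketch of the general idea of constraining a simulator with captured feature-point positions: after a solver step, vertices that carry markers are blended toward their stereo-reconstructed targets. The blending weight, data, and simple blend formulation are illustrative assumptions, not the paper's actual constraint scheme.

```python
# Soft position constraint: blend simulated marker vertices toward their
# captured spatial positions. Blend weight alpha is an assumed parameter.
import numpy as np

def apply_marker_constraints(verts, marker_idx, marker_pos, alpha=0.8):
    """Blend marked vertices toward their captured positions."""
    verts = verts.copy()
    verts[marker_idx] = (1 - alpha) * verts[marker_idx] + alpha * marker_pos
    return verts

# One simulated vertex has drifted from its captured marker; the
# constraint pulls it most of the way back.
sim = np.array([[0.00, 0.00, 0.00], [0.50, 0.20, 0.10]])
captured = np.array([[0.52, 0.18, 0.12]])
print(apply_marker_constraints(sim, marker_idx=[1], marker_pos=captured))
```

Applying such a correction after each solver step keeps the simulation anchored to reality even when the material parameters are imperfect.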

Template garment simulation

The quality and quantity of training data play important roles in enhancing the performance of models. To obtain a more reliable mapping from body-garment ease allowance to the garment dressing fit, it was necessary to expand the size of the human body database, thereby making it more representative. Therefore, we performed data augmentation for 3D human bodies with a manifold structure. In addition, virtual clothes and 3D human bodies are usually represented by thousands or tens of thousands of vertices.
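
As one common way to realize such augmentation when all bodies share the same mesh topology, the sketch below interpolates vertex positions between two registered scans to synthesize intermediate shapes; this stand-in, and the placeholder vertex count, are assumptions rather than the paper's exact scheme.

```python
# Synthesize intermediate body shapes by interpolating registered meshes
# (same vertex order). Illustrative stand-in for the augmentation step.
import numpy as np

def interpolate_bodies(body_a, body_b, t):
    """Blend two registered body meshes; 0 <= t <= 1."""
    return (1 - t) * body_a + t * body_b

rng = np.random.default_rng(0)
body_a = rng.normal(size=(6890, 3))   # placeholder vertex arrays
body_b = rng.normal(size=(6890, 3))
new_bodies = [interpolate_bodies(body_a, body_b, t) for t in (0.25, 0.5, 0.75)]
print(len(new_bodies), new_bodies[0].shape)
```

Because the interpolation stays between two valid shapes, the synthesized bodies remain plausible while densifying the training distribution.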

Experiments

To evaluate the performance of our method, three types of experiments were conducted. First, we compared the performance of our CNN-based method with traditional linear and nonlinear methods. The second experiment compared the performance of ARCSim and our method on virtual human bodies. Finally, the third experiment was conducted with real people to demonstrate the practicality and effectiveness of our approach.
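
Comparisons like these presuppose a geometric error measure; the sketch below uses the mean per-vertex Euclidean distance between a predicted and a reference garment mesh of the same topology, which is our assumption of a typical metric rather than the paper's reported measure.

```python
# Mean per-vertex error between corresponding garment meshes.
# The metric choice is an assumption for illustration.
import numpy as np

def mean_vertex_error(pred_verts, ref_verts):
    """Average Euclidean distance between corresponding vertices."""
    return np.linalg.norm(pred_verts - ref_verts, axis=1).mean()

rng = np.random.default_rng(1)
ref = rng.normal(size=(1000, 3))
pred = ref + rng.normal(scale=0.01, size=ref.shape)  # small perturbation
print(f"mean per-vertex error: {mean_vertex_error(pred, ref):.4f}")
```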

Conclusions

In this study, we developed a novel ready-made garment dressing fit prediction method based on highly reliable examples. A robotic mannequin wearing a specially designed template garment was used to mimic different human body shapes and their corresponding dressing fits. The spatial positions of markers were captured by a stereoscopic vision system and then used to constrain and optimize the simulation results produced with ARCSim. Using the database collected by our system, the mappings from body-garment ease allowance to garment deformations were trained.

CRediT authorship contribution statement

Haocan Xu: Methodology, Software, Validation, Data curation, Writing - original draft. Jituo Li: Conceptualization, Writing - original draft. Guodong Lu: Writing - review & editing. Dongliang Zhang: Writing - review & editing. Juncai Long: Writing - review & editing.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was supported in part by the National Key Research and Development Program of China (2018YFB1700700), the National Natural Science Foundation of China (61732015, 61972340), Zhejiang Provincial Natural Science Foundation of China (LY18F020004), the Fundamental Research Funds for the Central Universities (2019QNA4001), and Research Funding of Zhejiang University Robotics Institute.

References (59)

  • Y. Kita et al., A deformable model driven method for handling clothes, Proceedings of the 17th International Conference on Pattern Recognition (ICPR 2004), 2004.
  • R. Narain et al., Adaptive anisotropic remeshing for cloth simulation, ACM Transactions on Graphics (TOG), 2012.
  • R. Narain et al., Folding and crumpling adaptive sheets, ACM Transactions on Graphics (TOG), 2013.
  • R. White et al., Capturing and animating occluded cloth, ACM Transactions on Graphics (TOG), 2007.
  • T. Yu et al., SimulCap: single-view human performance capture with cloth simulation, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
  • W. Xu et al., Sensitivity-optimized rigging for example-based real-time clothing synthesis, ACM Transactions on Graphics (TOG), 2014.
  • H. Wang et al., Example-based wrinkle synthesis for clothing animation, ACM Transactions on Graphics (TOG), 2010.
  • G. Pons-Moll et al., ClothCap: seamless 4D clothing capture and retargeting, ACM Transactions on Graphics (TOG), 2017.
  • R. Brouet et al., Design preserving garment transfer, ACM Transactions on Graphics, 2012.
  • J. Christensen et al., Automatic motion synthesis for 3D mass-spring models, The Visual Computer, 1997.
  • L. Kavan et al., Physics-inspired upsampling for cloth simulation in games, ACM Transactions on Graphics (TOG), 2011.
  • D. Kim et al., Near-exhaustive precomputation of secondary cloth effects, ACM Transactions on Graphics (TOG), 2013.
  • R. Gillette et al., Real-time dynamic wrinkling of coarse animated cloth, Proceedings of the 14th ACM SIGGRAPH/Eurographics Symposium on Computer Animation, 2015.
  • P. Guan et al., DRAPE: dressing any person, ACM Transactions on Graphics, 2012.
  • D. Anguelov et al., SCAPE: shape completion and animation of people, ACM Transactions on Graphics (TOG), 2005.
  • I. Santesteban et al., Learning-based animation of clothing for virtual try-on, Computer Graphics Forum, 2019.
  • E. Gundogdu et al., GarNet: a two-stream network for fast and accurate 3D cloth draping, Proceedings of the IEEE International Conference on Computer Vision, 2019.
  • B. Zhou et al., Garment modeling from a single image, Computer Graphics Forum, 2013.
  • S. Hauswiesner et al., Virtual try-on through image-based rendering, IEEE Transactions on Visualization and Computer Graphics, 2013.
This article was recommended for publication by K. Xu.