Abstract
We introduce edge-based procedural textures (EBPT), a procedural model for semi-stochastic texture generation. EBPT quickly generates large textures from a small input image. It focuses on edges as the visually salient features of the input, extracts them, and organizes them into groups with clearly established spatial properties. Users can design new textures interactively or automatically by manipulating the edge groups. The output texture can be significantly larger than the input, and EBPT does not need multiple input textures to mimic the exemplar. EBPT-based texture synthesis consists of two major steps: input analysis and texture synthesis. The input analysis stage extracts edges, builds the edge groups, and stores their procedural properties. The texture synthesis stage distributes the edge groups with affine transformations, either interactively or automatically via the procedural model, and then generates the output using edge group-based seamless image cloning. We demonstrate our method on various semi-stochastic inputs. With just a few input parameters defining the final structure, our method analyzes a \(512\times {512}\) input in 0.7 s and synthesizes a \(2048\times {2048}\)-pixel output texture in 0.5 s.
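The two-stage pipeline described above (analysis: edge extraction and grouping; synthesis: transformed placement of groups into a larger canvas) can be sketched in miniature. This is a hedged illustration, not the paper's implementation: it substitutes a simple gradient-threshold edge detector for the paper's structured-forest edges, uses connected components as "edge groups," and applies only random translations rather than full affine transforms and seamless cloning. All function names (`extract_edges`, `edge_groups`, `synthesize`) are illustrative.

```python
import numpy as np

def extract_edges(img, thresh=0.2):
    """Gradient-magnitude edge map (stand-in for the paper's edge detector)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    if mag.max() == 0:
        return np.zeros(img.shape, dtype=bool)
    return mag > thresh * mag.max()

def edge_groups(edges):
    """Analysis stage: group edge pixels into 4-connected components."""
    labels = -np.ones(edges.shape, dtype=int)
    groups = []
    for start in zip(*np.nonzero(edges)):
        if labels[start] >= 0:
            continue
        stack = [start]
        labels[start] = len(groups)
        comp = []
        while stack:
            y, x = stack.pop()
            comp.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < edges.shape[0] and 0 <= nx < edges.shape[1]
                        and edges[ny, nx] and labels[ny, nx] < 0):
                    labels[ny, nx] = len(groups)
                    stack.append((ny, nx))
        groups.append(np.array(comp))
    return groups

def synthesize(groups, out_shape, n_copies=50, seed=0):
    """Synthesis stage: scatter translated copies of edge groups
    over a larger canvas (translation only; the paper distributes
    groups with affine transforms and seamless image cloning)."""
    rng = np.random.default_rng(seed)
    out = np.zeros(out_shape, dtype=bool)
    for _ in range(n_copies):
        g = groups[rng.integers(len(groups))]
        g0 = g - g.min(axis=0)              # move group to the origin
        h, w = g0.max(axis=0) + 1           # group bounding box
        dy = rng.integers(0, out_shape[0] - h)
        dx = rng.integers(0, out_shape[1] - w)
        out[g0[:, 0] + dy, g0[:, 1] + dx] = True
    return out
```

A usage example: analyze a small exemplar with a single vertical step edge, then synthesize a canvas four times larger from the extracted groups.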
Acknowledgements
This research was funded in part by National Science Foundation Grant No. 10001387, Functional Proceduralization of 3D Geometric Models, and National Science Foundation Grant No. 1608762, Inverse Procedural Material Modeling for Battery Design. We thank Dr. Darrell Schulze for his unconditional support and help throughout this project.
Ethics declarations
Conflict of interest
All authors declare that they have no conflict of interest.
Cite this article
Kim, H., Dischler, JM., Rushmeier, H. et al. Edge-based procedural textures. Vis Comput 37, 2595–2606 (2021). https://doi.org/10.1007/s00371-021-02212-4