Semantic segmentation sample augmentation based on simulated scene generation—case study on dock extraction from high spatial resolution imagery
International Journal of Remote Sensing (IF 3.4), Pub Date: 2021-04-05, DOI: 10.1080/01431161.2021.1907866
Yalan Zheng 1, 2, 3, 4 , Qian Shen 1, 2, 3, 4 , Min Wang 1, 2, 3, 4 , Mengyuan Yang 1, 2, 3, 4 , Jiru Huang 1, 2, 3, 4 , Chenyang Feng 1, 2, 3, 4
ABSTRACT

Deep learning-based semantic segmentation methods, such as fully convolutional networks (FCNs), are state-of-the-art techniques for object extraction from high spatial resolution images. However, collecting the massive scene-formed training samples typically required by FCNs is time-consuming and labour-intensive. This study proposes a suite of automatic sample augmentation schemes based on simulated scene generation to reduce the manual workload. The proposed schemes include style transfer, target embedding, and mixed modes, utilizing techniques such as texture transfer, image inpainting, and a region-line primitive association framework to automatically expand the sample set from a small number of real samples. Dock extraction experiments using UNet are conducted on China’s GaoFen-2 imagery with the expanded sample sets. Results show that the proposed schemes can generate sufficient simulated samples, increase sample diversity, and thereby improve semantic segmentation accuracy. Compared with results using the original real sample set, the F1-score (F1) and intersection over union (IoU) of dock extraction improve by up to 20.53% and 23.01%, respectively, after sample augmentation.
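The reported gains are expressed as F1-score and intersection over union (IoU). As a minimal sketch of how these two metrics are conventionally computed for binary segmentation masks (the function and variable names below are illustrative and not taken from the paper), one possible implementation is:

```python
import numpy as np

def f1_and_iou(pred: np.ndarray, truth: np.ndarray) -> tuple:
    """Standard F1 and IoU for binary masks (1 = dock, 0 = background).

    A minimal sketch of the conventional definitions; not the authors' code.
    `pred` and `truth` must be arrays of identical shape.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)

    tp = np.logical_and(pred, truth).sum()    # true positives
    fp = np.logical_and(pred, ~truth).sum()   # false positives
    fn = np.logical_and(~pred, truth).sum()   # false negatives

    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    return f1, iou

if __name__ == "__main__":
    # Toy masks for illustration only
    pred = np.array([[1, 1, 0], [0, 1, 0]])
    truth = np.array([[1, 0, 0], [0, 1, 1]])
    print(f1_and_iou(pred, truth))  # F1 ≈ 0.667, IoU = 0.5
```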

Updated: 2021-05-09