Data augmentation with Möbius transformations
Machine Learning: Science and Technology (IF 6.013) Pub Date: 2021-03-02, DOI: 10.1088/2632-2153/abd615
Sharon Zhou 1, Jiequan Zhang 1, Hang Jiang 1, Torbjörn Lundh 2, Andrew Y Ng 1

Data augmentation has led to substantial improvements in the performance and generalization of deep models, and remains a highly adaptable method to evolving model architectures and varying amounts of data—in particular, extremely scarce amounts of available training data. In this paper, we present a novel method of applying Möbius transformations to augment input images during training. Möbius transformations are bijective conformal maps that generalize image translation to operate over complex inversion in pixel space. As a result, Möbius transformations can operate on the sample level and preserve data labels. We show that the inclusion of Möbius transformations during training enables improved generalization over prior sample-level data augmentation techniques such as cutout and standard crop-and-flip transformations, most notably in low data regimes.
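To make the abstract's description concrete: a Möbius transformation of the complex plane has the form f(z) = (az + b)/(cz + d) with ad − bc ≠ 0, and an image can be warped by treating pixel coordinates (x, y) as complex numbers z = x + iy. The sketch below is a minimal illustration of that idea, not the paper's implementation: the function name, the nearest-neighbour inverse warping, and the zero padding outside the source frame are our own choices for brevity (the paper itself does not specify this particular sampling scheme here).

```python
import numpy as np

def mobius_augment(img, a, b, c, d):
    """Warp an image with the Mobius map f(z) = (a*z + b) / (c*z + d).

    Uses inverse warping: each output pixel at complex coordinate w pulls
    its value from the source coordinate z = f^{-1}(w) = (d*w - b) / (-c*w + a),
    with nearest-neighbour sampling and zero padding outside the frame.
    a, b, c, d are complex scalars with a*d - b*c != 0 (bijectivity);
    pixels whose inverse image lands on the pole of f^{-1} are left as padding.
    """
    height, width = img.shape[:2]
    ys, xs = np.mgrid[0:height, 0:width]
    w_grid = xs + 1j * ys                      # output pixel grid as complex numbers
    denom = -c * w_grid + a
    denom = np.where(denom == 0, np.inf, denom)  # avoid division by zero at the pole
    z_src = (d * w_grid - b) / denom           # inverse Mobius map
    xi = np.rint(z_src.real).astype(int)       # nearest-neighbour source column
    yi = np.rint(z_src.imag).astype(int)       # nearest-neighbour source row
    valid = (0 <= xi) & (xi < width) & (0 <= yi) & (yi < height)
    out = np.zeros_like(img)
    out[valid] = img[yi[valid], xi[valid]]
    return out
```

With a = d = 1 and b = c = 0 the map is the identity, and with c = 0 it reduces to an affine translation/scaling, which illustrates the abstract's point that Möbius transformations generalize image translation; nonzero c introduces the complex inversion.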




Updated: 2021-03-02