Domain Adaptation With Foreground/Background Cues and Gated Discriminators
IEEE Multimedia (IF 2.3) Pub Date: 2020-07-10, DOI: 10.1109/mmul.2020.3008529
Yong-Xiang Lin, Daniel Stanley Tan, Yung-Yao Chen, Ching-Chun Huang, Kai-Lung Hua

Self-driving cars rely on semantic segmentation to understand an urban scene. However, it is costly to collect segmentation labels; thus, synthetic datasets are used to train segmentation models. Unfortunately, the synthetic-to-real domain shift causes these models to perform poorly. Prior works use adversarial training to align the features of synthetic and real-world images. We observe that background objects tend to be similar across domains, while foreground objects tend to have more variation. Using this insight, we propose an adaptation method that uses foreground and background cues and adapts them separately. We also propose a mask-aware gated discriminator that learns soft masks from the input foreground and background masks, instead of naively performing binary masking, which immediately removes information outside the predicted masks. We evaluate our method on two different datasets and show that it outperforms several state-of-the-art baselines, which verifies the effectiveness of our approach.
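
To make the gating idea concrete, below is a minimal sketch, in PyTorch, of a mask-aware gated discriminator in the spirit the abstract describes: rather than hard-masking features with binary foreground/background masks, a small gating branch learns soft, per-pixel gates from those masks and modulates the discriminator features. This is not the authors' implementation; the module name, layer sizes, and mask encoding are illustrative assumptions.

```python
import torch
import torch.nn as nn


class MaskAwareGatedDiscriminator(nn.Module):
    """Sketch of a patch discriminator whose features are softly gated
    by learned masks derived from foreground/background cues."""

    def __init__(self, in_channels, base_channels=64):
        super().__init__()
        # Feature branch: standard strided convolutions over segmentation outputs.
        self.feat = nn.Sequential(
            nn.Conv2d(in_channels, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
        )
        # Gating branch: maps the 2-channel foreground/background mask to soft
        # per-pixel gates in [0, 1], matching the feature resolution.
        self.gate = nn.Sequential(
            nn.Conv2d(2, base_channels, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base_channels, base_channels * 2, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )
        # Domain classifier head: per-location real vs. synthetic score.
        self.head = nn.Conv2d(base_channels * 2, 1, 4, stride=2, padding=1)

    def forward(self, seg_logits, fg_bg_mask):
        # seg_logits:  (B, C, H, W) segmentation outputs to be aligned across domains.
        # fg_bg_mask:  (B, 2, H, W) soft foreground/background cues.
        f = self.feat(seg_logits)
        g = self.gate(fg_bg_mask)
        # Soft gating keeps (attenuated) information outside the predicted mask,
        # unlike binary masking, which discards it outright.
        return self.head(f * g)


if __name__ == "__main__":
    # Usage sketch: score per-patch "real vs. synthetic" for adversarial alignment.
    disc = MaskAwareGatedDiscriminator(in_channels=19)  # e.g., 19 urban-scene classes
    logits = torch.randn(2, 19, 128, 256)
    masks = torch.rand(2, 2, 128, 256)
    print(disc(logits, masks).shape)  # (2, 1, 16, 32) patch-level domain scores
```

In such a setup, separate adaptation of foreground and background could be realized by feeding distinct foreground and background cue channels (or two such discriminators) so that each region is aligned on its own terms.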

Updated: 2020-09-05