Objectness-Aware One-Shot Semantic Segmentation
arXiv - CS - Computer Vision and Pattern Recognition. Pub Date: 2020-04-06, DOI: arxiv-2004.02945
Yinan Zhao, Brian Price, Scott Cohen, Danna Gurari

While deep convolutional neural networks have led to great progress in image semantic segmentation, they typically require collecting a large number of densely-annotated images for training. Moreover, once trained, the model can only make predictions for a pre-defined set of categories. Few-shot image semantic segmentation has therefore been explored to learn to segment from only a few annotated examples. In this paper, we tackle the challenging one-shot semantic segmentation problem by taking advantage of objectness. To capture prior knowledge of object and background, we first train an objectness segmentation module that generalizes well to unseen categories. We then use the objectness module to predict the objects present in the query image, and train an objectness-aware few-shot segmentation model that exploits both this object information and the limited annotations of the unseen category to segment the query image. Our method achieves mIoU scores of 57.9% on PASCAL-5i and 22.6% on COCO-20i given only one annotated example of an unseen category, outperforming related baselines overall.
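To make the two-stage pipeline in the abstract concrete, below is a minimal PyTorch sketch: a class-agnostic objectness module trained separately (stage 1), and a one-shot segmentation model that fuses the query features with a support prototype and the objectness prior (stage 2). All module names, the toy encoders, and the masked-average-pooling fusion are illustrative assumptions, not the authors' actual architecture; the paper only specifies that an objectness module is trained first and its prediction on the query image conditions the few-shot segmentation model.

```python
# Hedged sketch of an objectness-aware one-shot segmentation pipeline.
# Architectures and fusion strategy are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ObjectnessModule(nn.Module):
    """Stage 1: class-agnostic object/background segmentation.

    Trained on seen categories with binary object masks so that it
    generalizes to unseen categories at test time.
    """

    def __init__(self, in_ch: int = 3, feat_ch: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.head = nn.Conv2d(feat_ch, 1, 1)  # 1-channel objectness logit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(x))  # (B, 1, H, W) logits


class ObjectnessAwareFewShotSeg(nn.Module):
    """Stage 2: one-shot segmentation conditioned on the objectness prior.

    Combines (a) a prototype pooled from the support image under its
    annotated mask with (b) the objectness map of the query image.
    """

    def __init__(self, in_ch: int = 3, feat_ch: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Fuses query features, the broadcast support prototype, and the
        # objectness map into a binary foreground/background logit map.
        self.fuse = nn.Sequential(
            nn.Conv2d(feat_ch * 2 + 1, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 1, 1),
        )

    def forward(self, query, support, support_mask, objectness):
        q = self.encoder(query)    # (B, C, H, W)
        s = self.encoder(support)  # (B, C, H, W)
        m = F.interpolate(support_mask, size=s.shape[-2:], mode="nearest")
        # Masked average pooling -> one prototype vector per episode.
        proto = (s * m).sum(dim=(2, 3)) / m.sum(dim=(2, 3)).clamp(min=1e-6)
        proto = proto[:, :, None, None].expand_as(q)
        obj = F.interpolate(objectness, size=q.shape[-2:],
                            mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([q, proto, obj], dim=1))


if __name__ == "__main__":
    # One episode: a support image with its mask, plus a query image.
    query = torch.randn(1, 3, 128, 128)
    support = torch.randn(1, 3, 128, 128)
    support_mask = torch.rand(1, 1, 128, 128).round()

    obj_net = ObjectnessModule()
    seg_net = ObjectnessAwareFewShotSeg()

    objectness = torch.sigmoid(obj_net(query))  # stage-1 prior on the query
    logits = seg_net(query, support, support_mask, objectness)
    print(logits.shape)  # torch.Size([1, 1, 128, 128])
```

Masked average pooling over the support features is a common prototype-extraction choice in few-shot segmentation and is used here only as a stand-in; the key idea the sketch illustrates is that the objectness prediction enters the query branch as an extra input channel rather than the model relying on the single support annotation alone.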

Last updated: 2020-04-08