RefineNet: Multi-Path Refinement Networks for Dense Prediction.
IEEE Transactions on Pattern Analysis and Machine Intelligence ( IF 23.6 ) Pub Date : 2019-01-18 , DOI: 10.1109/tpami.2019.2893630
Guosheng Lin , Fayao Liu , Anton Milan , Chunhua Shen , Ian Reid

Recently, very deep convolutional neural networks (CNNs) have shown outstanding performance in object recognition and have also become the first choice for dense prediction problems such as semantic segmentation and depth estimation. However, repeated subsampling operations such as pooling or convolution striding in deep CNNs lead to a significant decrease in the initial image resolution. Here, we present RefineNet, a generic multi-path refinement network that explicitly exploits all the information available along the down-sampling process to enable high-resolution prediction using long-range residual connections. In this way, the deeper layers that capture high-level semantic features can be directly refined using fine-grained features from earlier convolutions. The individual components of RefineNet employ residual connections following the identity-mapping principle, which allows for effective end-to-end training. Further, we introduce chained residual pooling, which captures rich background context in an efficient manner. We carry out comprehensive experiments on semantic segmentation, a dense classification problem, and achieve good performance on seven public datasets. We further apply our method to depth estimation and demonstrate its effectiveness on dense regression problems.
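The chained residual pooling described above can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the learned convolution that follows each pooling step in the paper is replaced here by an identity weight, so the sketch only shows the structural idea — stride-1 pooling blocks chained in sequence, with each block's output added back to the running sum via residual connections, so resolution is preserved while context from progressively larger windows is accumulated.

```python
import numpy as np

def max_pool_same(x, k=5):
    """Stride-1 max pooling with 'same' padding on a 2-D feature map.

    Stride 1 keeps the spatial resolution unchanged, which is what lets
    the pooled output be summed with the input in a residual fashion.
    """
    pad = k // 2
    xp = np.pad(x, pad, mode="constant", constant_values=-np.inf)
    h, w = x.shape
    out = np.empty_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].max()
    return out

def chained_residual_pooling(x, n_blocks=2):
    """Chain of pooling blocks with residual sums (conv weights omitted).

    Each block pools the previous block's output, so later blocks see an
    increasingly large effective receptive field; every block's output is
    added to the accumulated result through a residual connection.
    """
    out = x.copy()
    path = x
    for _ in range(n_blocks):
        path = max_pool_same(path)  # reuse previous block's pooled output
        out = out + path            # residual sum fuses context into result
    return out
```

Note that the output has the same shape as the input; in the real network the summed map then flows onward through the multi-path refinement stages at full resolution.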

Updated: 2020-04-22