Learning Layer-Skippable Inference Network.
IEEE Transactions on Image Processing (IF 10.8), Pub Date: 2020-08-28, DOI: 10.1109/tip.2020.3018269
Yu-Gang Jiang , Changmao Cheng , Hangyu Lin , Yanwei Fu

Learning good representations for machine learning tasks can be computationally expensive. Typically, a model learned on the training set is used to infer the labels of test data. Interestingly, this learning-and-inference paradigm differs markedly from the typical inference scheme of the human biological visual system. Neuroscience studies have shown that the right hemisphere of the human brain predominantly performs fast processing of low-frequency spatial signals, while the left hemisphere focuses more on analyzing high-frequency information at a slower pace; the low-pass analysis facilitates the high-pass analysis via feedback. Inspired by this biological vision mechanism, this article explores the possibility of learning a layer-skippable inference network. Specifically, we propose a layer-skippable network that dynamically carries out coarse-to-fine object categorization. The network has two branches that jointly handle coarse- and fine-grained classification tasks. The layer-skipping mechanism learns a gating network that generates dynamic inference graphs, reducing computational cost by routing the inference path around some layers. This adaptive path-inference strategy endows deep networks with dynamic structures, giving them greater flexibility and larger capacity. To train the gating network efficiently, a novel ranking-based loss function is adopted. Furthermore, the learned representations are enhanced by two complementary components: a top-down feedback mechanism and a feature-wise affine transformation. The former employs features of the coarse branch to aid fine-grained object recognition, while the latter encodes the selected path to enhance the final feature representations. Extensive experiments on several widely used coarse-to-fine object categorization benchmarks show that the proposed model achieves promising results.
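The gating idea described above can be illustrated with a minimal sketch: a block whose gate looks at the incoming features and decides whether to execute the layer or detour around it via an identity path. All names, shapes, and the sigmoid gate here are illustrative assumptions for exposition, not the paper's actual architecture.

```python
import numpy as np

def gate_score(x, w_gate):
    """Toy gating network: squash a pooled summary of the features to (0, 1).
    (Assumed form; the paper's gate is a learned network, not this scalar rule.)"""
    return 1.0 / (1.0 + np.exp(-float(x.mean()) * w_gate))

def skippable_block(x, w_layer, w_gate, threshold=0.5):
    """Run the layer only when the gate fires; otherwise detour around it.

    Returns the output features and whether the layer was executed.
    """
    if gate_score(x, w_gate) >= threshold:
        return np.maximum(x @ w_layer, 0.0), True   # layer executed (ReLU linear layer)
    return x, False                                  # layer skipped: identity path

# A positive gate weight keeps the layer on this input...
x = np.ones(4)
y_on, executed = skippable_block(x, np.eye(4), w_gate=2.0)
# ...while a negative gate weight routes the same input around the layer.
y_off, also_executed = skippable_block(x, np.eye(4), w_gate=-2.0)
```

At inference time, the sequence of such binary decisions across blocks is what forms the dynamic inference graph; the hard threshold above stands in for whatever differentiable relaxation the gating network is trained with.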
Quite surprisingly, the layer-skipping mechanism also improves the network's robustness to adversarial attacks.
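The feature-wise affine transformation mentioned in the abstract, which encodes the selected path to enhance the final representations, can be sketched as a FiLM-style conditioning step: a binary path code (which layers ran) is mapped to per-channel scale and shift parameters. The function names, shapes, and linear heads below are assumptions for illustration only.

```python
import numpy as np

def path_affine(features, path_code, w_gamma, w_beta):
    """Feature-wise affine transform conditioned on the inference path.

    `path_code` is a binary vector marking which layers were executed; small
    linear heads map it to a per-channel scale (gamma) and shift (beta).
    This is a hypothetical sketch, not the paper's implementation.
    """
    gamma = path_code @ w_gamma   # per-channel scale
    beta = path_code @ w_beta     # per-channel shift
    return gamma * features + beta

# Example: 3 skippable layers, 4 feature channels.
path = np.array([1.0, 0.0, 1.0])       # layers 1 and 3 ran, layer 2 was skipped
w_gamma = np.full((3, 4), 0.5)          # assumed weights, for demonstration
w_beta = np.zeros((3, 4))
feats = np.ones(4)
out = path_affine(feats, path, w_gamma, w_beta)
```

Conditioning the final features on the path code in this way lets the classifier account for which computation was actually performed on a given input.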

Updated: 2020-09-08