Deep Feature Space: A Geometrical Perspective
arXiv - CS - Computational Geometry Pub Date : 2020-06-30 , DOI: arxiv-2007.00062
Ioannis Kansizoglou, Loukas Bampis, Antonios Gasteratos

One of the most prominent attributes of Neural Networks (NNs) is their capability of learning to extract robust and descriptive features from high-dimensional data, like images. This ability makes their exploitation as feature extractors particularly frequent in an abundance of modern reasoning systems. Their application scope mainly includes complex cascade tasks, like multi-modal recognition and deep Reinforcement Learning (RL). However, NNs induce implicit biases that are difficult to avoid or to deal with and are not encountered in traditional image descriptors. Moreover, the lack of knowledge for describing the intra-layer properties -- and thus their general behavior -- restricts the further applicability of the extracted features. In this paper, a novel way of visualizing and understanding the vector space before the NNs' output layer is presented, aiming to shed light on the deep feature vectors' properties under classification tasks. Main attention is paid to the nature of overfitting in the feature space and its adverse effect on further exploitation. We present the findings that can be derived from our model's formulation, and we evaluate them on realistic recognition scenarios, proving its prominence by improving the obtained results.
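The abstract refers to inspecting the vector space just before the output layer. Below is a minimal sketch, assuming a standard PyTorch classifier, of one way such an inspection could look: it extracts the penultimate-layer (deep feature) vectors and measures how each one aligns, in angle, with the output-layer weight vector of its predicted class. The model, layer sizes, and this particular angle-based diagnostic are illustrative assumptions, not the authors' exact formulation.

```python
# Illustrative sketch (assumption): extract pre-output-layer features from a
# classifier and inspect their angular relation to the output-layer weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallClassifier(nn.Module):
    def __init__(self, in_dim=784, feat_dim=64, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, feat_dim), nn.ReLU(),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = self.backbone(x)          # deep feature vectors (before the output layer)
        logits = self.classifier(feats)   # output-layer class scores
        return logits, feats

model = SmallClassifier()
x = torch.randn(32, 784)                  # dummy batch standing in for real inputs
logits, feats = model(x)
preds = logits.argmax(dim=1)

# Angle between each feature vector and the weight vector of its predicted
# class: a simple geometric view of how features align with class directions.
w = model.classifier.weight               # shape: (num_classes, feat_dim)
cos = F.cosine_similarity(feats, w[preds], dim=1)
angles = torch.rad2deg(torch.acos(cos.clamp(-1.0, 1.0)))
print(angles.mean().item(), angles.std().item())
```

Tracking how such angles evolve during training, and whether they behave differently on training and held-out data, is one way a geometric view of the feature space can surface effects like the overfitting discussed in the abstract.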

Updated: 2020-07-02