Leveraging Prior Concept Learning Improves Generalization From Few Examples in Computational Models of Human Object Recognition
Frontiers in Computational Neuroscience (IF 2.1), Pub Date: 2021-01-12, DOI: 10.3389/fncom.2020.586671
Joshua S. Rule, Maximilian Riesenhuber

Humans quickly and accurately learn new visual concepts from sparse data, sometimes from just a single example. The impressive performance of artificial neural networks that hierarchically pool afferents across scales and positions suggests that the hierarchical organization of the human visual system is critical to its accuracy. These approaches, however, require orders of magnitude more examples than human learners. We used a benchmark deep learning model to show that the hierarchy can also be leveraged to vastly improve the speed of learning. We specifically show how previously learned but broadly tuned conceptual representations can be used to learn visual concepts from as few as two positive examples; reusing visual representations from earlier in the visual hierarchy, as in prior approaches, requires significantly more examples to perform comparably. These results suggest techniques for learning even more efficiently and provide a biologically plausible way to learn new visual concepts from few examples.
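The core intuition — that broadly tuned concept-layer representations separate novel classes better than early-layer features, so a classifier built on them needs far fewer positive examples — can be sketched with a toy simulation. This is an illustration only, not the paper's actual model or data: the feature vectors are synthetic stand-ins for network activations, and all names, dimensions, and separations below are invented assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # hypothetical feature dimensionality


def make_features(n, center, spread):
    """Draw n synthetic feature vectors clustered around a class center."""
    return center + spread * rng.standard_normal((n, DIM))


def prototype_accuracy(center_a, center_b, spread=1.0, n_train=2, n_test=200):
    """Nearest-prototype classification from n_train positive examples per class.

    Each class prototype is the mean of its few training examples; test
    items are assigned to the nearer prototype (Euclidean distance).
    """
    proto_a = make_features(n_train, center_a, spread).mean(axis=0)
    proto_b = make_features(n_train, center_b, spread).mean(axis=0)
    test_a = make_features(n_test, center_a, spread)
    test_b = make_features(n_test, center_b, spread)
    correct_a = (np.linalg.norm(test_a - proto_a, axis=1)
                 < np.linalg.norm(test_a - proto_b, axis=1))
    correct_b = (np.linalg.norm(test_b - proto_b, axis=1)
                 < np.linalg.norm(test_b - proto_a, axis=1))
    return (correct_a.mean() + correct_b.mean()) / 2


# "Concept layer": broadly tuned, classes well separated relative to noise.
acc_concept = prototype_accuracy(np.zeros(DIM), np.ones(DIM))
# "Early layer": low-level features, class centers overlap heavily.
acc_early = prototype_accuracy(np.zeros(DIM), 0.2 * np.ones(DIM))
```

Under these assumptions, two positive examples per class already yield near-ceiling accuracy on the well-separated "concept" features, while the same two examples on the overlapping "early" features do not — mirroring the abstract's claim that earlier-layer representations need many more examples to perform comparably.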

Updated: 2021-01-12