A Path Toward Explainable AI and Autonomous Adaptive Intelligence: Deep Learning, Adaptive Resonance, and Models of Perception, Emotion, and Action.
Frontiers in Neurorobotics (IF 3.1), Pub Date: 2020-06-25, DOI: 10.3389/fnbot.2020.00036
Stephen Grossberg

Biological neural network models of how brains make minds help to explain autonomous adaptive intelligence. This article summarizes why the dynamics and emergent properties of such models for perception, cognition, emotion, and action are explainable, and thus amenable to being confidently implemented in large-scale applications. Key to their explainability is how these models combine fast activations, or short-term memory (STM) traces, with learned weights, or long-term memory (LTM) traces. Visual and auditory perceptual models have explainable conscious STM representations of visual surfaces and auditory streams in surface-shroud resonances and stream-shroud resonances, respectively. Deep Learning is often used to classify data. However, Deep Learning can experience catastrophic forgetting: at any stage of learning, an unpredictable part of its memory can collapse. Even if it makes some accurate classifications, they are not explainable and thus cannot be used with confidence. Deep Learning shares these problems with the back propagation algorithm, whose computational problems due to non-local weight transport during mismatch learning were described in the 1980s. Deep Learning became popular after very fast computers and huge online databases became available, enabling new applications despite these problems. Adaptive Resonance Theory, or ART, algorithms overcome the computational problems of back propagation and Deep Learning. ART is a self-organizing production system that incrementally learns, using arbitrary combinations of unsupervised and supervised learning and only locally computable quantities, to rapidly classify large non-stationary databases without experiencing catastrophic forgetting. ART classifications and predictions are explainable using the attended critical feature patterns in STM on which they build. The LTM adaptive weights of the fuzzy ARTMAP algorithm induce fuzzy IF-THEN rules that explain which feature combinations predict successful outcomes. ART has been successfully used in multiple large-scale real-world applications, including remote sensing, medical database prediction, and social media data clustering. Also explainable are the MOTIVATOR model of reinforcement learning and cognitive-emotional interactions, and the VITE, DIRECT, DIVA, and SOVEREIGN models for reaching, speech production, spatial navigation, and autonomous adaptive intelligence. These biological models exemplify complementary computing, and use local laws for match learning and mismatch learning that avoid the problems of Deep Learning.
