A Brain-Media Deep Framework Towards Seeing Imaginations Inside Brains
IEEE Transactions on Multimedia (IF 7.3) Pub Date: 2020-06-01, DOI: 10.1109/tmm.2020.2999183
Jianmin Jiang, Ahmed Fares, Sheng-hua Zhong

While current research on multimedia essentially deals with information derived from our observations of the world, internal activities inside human brains, such as imaginations and memories of past events, could become a brand-new category of multimedia, which we coin "brain-media". In this paper, we pioneer this idea by applying natural images directly to stimulate human brains and then collecting the corresponding electroencephalogram (EEG) sequences to drive a deep framework that learns and visualizes the corresponding brain activities. By examining the relevance between the visualized image and the stimulation image, we assess the performance of the proposed deep framework in terms of not only the quality of the visualization but also the feasibility of introducing the new concept of "brain-media". To ensure that our explorative research is meaningful, we introduce a dually conditioned learning mechanism in the proposed deep framework. One condition analyzes EEG sequences through deep learning to extract more compact and class-dependent brain features by exploiting unique characteristics of human brains, such as hemispheric lateralization and the myelination of biological neurons (neuron importance); the other analyzes the content of images via computational approaches and extracts representative visual features, exploiting artificial intelligence to assist our automated analysis of brain activities and their visualization. By combining the brain feature space with the associated visual feature space of the candidate stimulus images, we generate a combined-conditional space to support the proposed dual-conditioned and lateralization-supported GAN framework. Extensive experiments illustrate that our proposed deep framework significantly outperforms the existing relevant work, indicating that it provides good potential for further research on the introduced concept of "brain-media", a new member of the big family of multimedia. To encourage more research along this direction, we make our source code publicly available for download on GitHub.1

https://github.com/aneeg/LS-GAN
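
The following is a minimal sketch, in PyTorch, of the dual-conditioning idea described in the abstract: EEG-derived brain features and image-derived visual features are concatenated into a combined-conditional vector that, together with noise, drives a GAN generator. All module names, feature dimensions, and the concatenation scheme here are illustrative assumptions, not the authors' exact LS-GAN implementation (see the linked repository for that).

import torch
import torch.nn as nn

BRAIN_DIM, VISUAL_DIM, NOISE_DIM = 128, 256, 100  # assumed feature sizes

class DualConditionedGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        cond_dim = BRAIN_DIM + VISUAL_DIM
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + cond_dim, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 3 * 64 * 64),  # a 64x64 RGB visualization (assumed resolution)
            nn.Tanh(),
        )

    def forward(self, noise, brain_feat, visual_feat):
        # Combined-conditional space: brain features (from an EEG encoder)
        # joined with visual features (from an image encoder) jointly
        # condition the generation of the visualized image.
        cond = torch.cat([brain_feat, visual_feat], dim=1)
        out = self.net(torch.cat([noise, cond], dim=1))
        return out.view(-1, 3, 64, 64)

# Example forward pass with random stand-in features:
g = DualConditionedGenerator()
img = g(torch.randn(4, NOISE_DIM),
        torch.randn(4, BRAIN_DIM),
        torch.randn(4, VISUAL_DIM))
print(img.shape)  # torch.Size([4, 3, 64, 64])

In this sketch the two conditions enter the generator symmetrically through concatenation; the paper's lateralization-supported design additionally shapes the EEG branch around hemispheric asymmetries before the features reach the GAN.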




Updated: 2020-06-01