Cross-Modal Search for Social Networks via Adversarial Learning.
Computational Intelligence and Neuroscience Pub Date : 2020-07-11 , DOI: 10.1155/2020/7834953
Nan Zhou, Junping Du, Zhe Xue, Chong Liu, Jinxuan Li

Cross-modal search has become a research hotspot in recent years. In contrast to traditional cross-modal search, cross-modal information search on social networks is constrained by data quality, such as arbitrary text and low-resolution visual features. In addition, the semantic sparseness of cross-modal data from social networks causes the text and visual modalities to mislead each other. In this paper, we propose a cross-modal search method for social network data that capitalizes on adversarial learning (cross-modal search with adversarial learning: CMSAL). We adopt self-attention-based neural networks to generate modality-oriented representations for further intermodal correlation learning. A search module is implemented based on adversarial learning, in which the discriminator is designed to measure the distribution of generated features from both intramodal and intermodal perspectives. Experiments on real-world datasets from Sina Weibo and Wikipedia, which have properties similar to social networks, show that the proposed method outperforms state-of-the-art cross-modal search methods.
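To make the adversarial setup concrete, below is a minimal PyTorch sketch of the architecture the abstract outlines: self-attention encoders map text and image features into a shared space, and a discriminator tries to identify the source modality of each representation while the encoders are trained to fool it. All module names, dimensions, and loss weightings here are illustrative assumptions, not the authors' CMSAL implementation.

```python
# Illustrative sketch of adversarial cross-modal representation learning.
# Hypothetical names and dimensions; not the paper's actual CMSAL code.
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Self-attention encoder producing a modality-oriented representation."""
    def __init__(self, input_dim: int, embed_dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.proj = nn.Linear(input_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.out = nn.Linear(embed_dim, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, input_dim) -> pooled vector (batch, embed_dim)
        h = self.proj(x)
        h, _ = self.attn(h, h, h)       # self-attention over the sequence
        return self.out(h.mean(dim=1))  # mean-pool to one vector per item

class ModalityDiscriminator(nn.Module):
    """Predicts whether a shared-space vector came from text or image."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)  # logit: positive ~ "text", negative ~ "image"

# One adversarial step on a toy batch of 8 text/image pairs.
text_enc, img_enc = ModalityEncoder(300), ModalityEncoder(2048)
disc = ModalityDiscriminator()
bce = nn.BCEWithLogitsLoss()

text_feats = torch.randn(8, 20, 300)   # e.g. word embeddings of a post
img_feats = torch.randn(8, 49, 2048)   # e.g. CNN region features of an image
z_t, z_i = text_enc(text_feats), img_enc(img_feats)

# Discriminator loss: classify the true modality of each representation.
d_loss = bce(disc(z_t), torch.ones(8, 1)) + bce(disc(z_i), torch.zeros(8, 1))
# Encoder loss: flip the labels so the encoders confuse the discriminator.
g_loss = bce(disc(z_t), torch.zeros(8, 1)) + bce(disc(z_i), torch.ones(8, 1))
```

In practice the two losses are optimized in alternation (a discriminator step, then an encoder step), so minimizing the encoder loss pushes the text and image feature distributions toward each other in the shared space, which is what enables matching across modalities at search time.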

Updated: 2020-07-13