Attention-Aware Encoder-Decoder Neural Networks for Heterogeneous Graphs of Things
IEEE Transactions on Industrial Informatics (IF 11.7), Pub Date: 2020-09-22, DOI: 10.1109/tii.2020.3025592
Yangfan Li , Cen Chen , Mingxing Duan , Zeng Zeng , Kenli Li

A recent trend is to use heterogeneous graphs of things (HGoT) to represent things and their relations in the Internet of Things, thereby facilitating the application of advanced learning frameworks such as deep learning (DL). This is a challenging task, however, because existing DL models struggle to accurately express the complex semantics and attributes of the heterogeneous nodes and links in HGoT. To address this issue, we develop attention-aware encoder-decoder graph neural networks for HGoT, termed HGAED. Specifically, we use an attention-based separate-and-merge method to improve accuracy, and leverage an encoder-decoder architecture for implementation. At the heart of HGAED, the separate-and-merge processes are encapsulated into encoding and decoding blocks. These blocks are then stacked into an encoder-decoder architecture that jointly and hierarchically fuses the heterogeneous structures and contents of nodes. Extensive experiments on three real-world datasets demonstrate the superior performance of HGAED over state-of-the-art baselines.
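The "separate-and-merge" idea described above can be illustrated with a minimal sketch (not the authors' code; all function names and the mean-pooling choice are illustrative assumptions): neighbors are first aggregated separately within each relation type, and the per-type summaries are then merged with learned attention weights.

```python
# Hedged sketch of attention-based separate-and-merge aggregation over
# a heterogeneous neighborhood. Parameter shapes are assumptions for
# illustration, not the paper's exact formulation.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def separate_and_merge(node_feat, neighbors_by_type, attn_vecs):
    """node_feat: (d,) features of the target node.
    neighbors_by_type: dict mapping relation type -> (n_t, d) neighbor features.
    attn_vecs: dict mapping relation type -> (2d,) attention parameters."""
    # "Separate": aggregate neighbors within each relation type (mean pooling here).
    per_type = {t: feats.mean(axis=0) for t, feats in neighbors_by_type.items()}
    types = list(per_type)
    # Score each type-level summary against the target node's features.
    scores = np.array([
        attn_vecs[t] @ np.concatenate([node_feat, per_type[t]]) for t in types
    ])
    weights = softmax(scores)
    # "Merge": attention-weighted combination of the type-level summaries.
    return sum(w * per_type[t] for w, t in zip(weights, types))
```

Because the attention weights are non-negative and sum to one, the merged representation is a convex combination of the per-type summaries, so no single relation type can dominate unless attention assigns it nearly all the weight.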

Updated: 2024-08-22