Attention-Aware Encoder–Decoder Neural Networks for Heterogeneous Graphs of Things
IEEE Transactions on Industrial Informatics (IF 12.3), Pub Date: 2020-09-22, DOI: 10.1109/tii.2020.3025592
Yangfan Li , Cen Chen , Mingxing Duan , Zeng Zeng , Kenli Li

A recent trend is to use heterogeneous graphs of things (HGoT) to represent things and their relations in the Internet of Things, thereby facilitating the application of advanced learning frameworks such as deep learning (DL). Nevertheless, this is a challenging task, since existing DL models struggle to accurately express the complex semantics and attributes of the heterogeneous nodes and links in an HGoT. To address this issue, we develop attention-aware encoder–decoder graph neural networks for HGoT, termed HGAED. Specifically, we use an attention-based separate-and-merge method to improve accuracy and leverage an encoder–decoder architecture for its implementation. At the heart of HGAED, the separate-and-merge processes are encapsulated into encoding and decoding blocks. These blocks are then stacked to construct an encoder–decoder architecture that jointly and hierarchically fuses the heterogeneous structures and node contents. Extensive experiments on three real-world datasets demonstrate the superior performance of HGAED over state-of-the-art baselines.
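The abstract's "separate-and-merge" idea can be illustrated with a minimal, hypothetical sketch: node-level attention is computed separately for each relation type, and the resulting relation-specific embeddings are merged with a semantic-level attention. This is not the authors' implementation; all names (HeteroAttentionBlock, rel_adjs, etc.) and design details are illustrative assumptions. Stacking such blocks with shrinking and then expanding dimensions would give the encoder–decoder arrangement the abstract describes.

```python
# Hypothetical sketch of an attention-based "separate-and-merge" block for a
# heterogeneous graph; illustrative only, not the HGAED code from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HeteroAttentionBlock(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, num_relations: int):
        super().__init__()
        # one projection and one attention vector per relation ("separate")
        self.proj = nn.ModuleList([nn.Linear(in_dim, out_dim) for _ in range(num_relations)])
        self.attn = nn.ParameterList([nn.Parameter(torch.randn(2 * out_dim)) for _ in range(num_relations)])
        # semantic-level attention that fuses the relation-specific embeddings ("merge")
        self.sem = nn.Sequential(nn.Linear(out_dim, out_dim), nn.Tanh(), nn.Linear(out_dim, 1, bias=False))

    def forward(self, x: torch.Tensor, rel_adjs) -> torch.Tensor:
        # x: [N, in_dim] node features; rel_adjs: list of [N, N] 0/1 adjacency matrices,
        # one per relation type of the heterogeneous graph
        rel_embs = []
        for r, adj in enumerate(rel_adjs):
            h = self.proj[r](x)                                          # [N, d]
            d = h.size(1)
            src = (h * self.attn[r][:d]).sum(-1, keepdim=True)           # [N, 1]
            dst = (h * self.attn[r][d:]).sum(-1, keepdim=True)           # [N, 1]
            e = F.leaky_relu(src + dst.t())                              # pairwise logits [N, N]
            e = e.masked_fill(adj == 0, float("-inf"))                   # keep existing edges only
            alpha = torch.softmax(e, dim=-1).nan_to_num()                # node-level attention
            rel_embs.append(alpha @ h)                                   # aggregate per relation
        z = torch.stack(rel_embs, dim=1)                                 # [N, R, d]
        beta = torch.softmax(self.sem(z).mean(0), dim=0)                 # relation weights [R, 1]
        return (z * beta.unsqueeze(0)).sum(1)                            # merged embeddings [N, d]


if __name__ == "__main__":
    # toy usage: 5 nodes, 8-dim features, 2 relation types
    x = torch.randn(5, 8)
    adjs = [torch.randint(0, 2, (5, 5)).float() for _ in range(2)]
    block = HeteroAttentionBlock(in_dim=8, out_dim=4, num_relations=2)
    print(block(x, adjs).shape)  # torch.Size([5, 4])
```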

Updated: 2020-09-22