
Modeling of complex internal logic for knowledge base completion

Published in Applied Intelligence.

Abstract

Knowledge base completion is an active research topic for knowledge graphs. However, existing methods have limited learning and generalization abilities, and they neglect the rich internal logic between entities and relations. To address these problems, this paper proposes modeling complex internal logic for knowledge base completion. The method first integrates semantic information into the knowledge representation model and uses the semantic gap to strengthen the credibility-score margin between positive and negative triples, which not only makes the model converge faster but also yields knowledge representations that fuse semantic information. We then introduce the concept of a knowledge subgraph: through a memory network with a multi-hop attention mechanism, the knowledge in the subgraph is merged with the triple to be completed. Training differs from that of the classical memory network in that we add reinforcement learning, taking the reciprocal of the correct reasoning knowledge in the model output as the reward value; the trained model then completes the missing triple information. The method thereby exploits the high computational efficiency of knowledge representation together with the strong learning and generalization abilities of the memory network and the multi-hop attention mechanism. Experimental results on the FB15k and WN18 datasets show that the proposed method performs well on knowledge base completion and effectively improves Hits@10 and MRR. We also verified its practicality in a recommendation system and a question answering system based on knowledge bases, with good results.
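The multi-hop attention read over a knowledge subgraph described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embedding dimension, the number of hops, and the encoding of subgraph triples as memory slots are all assumptions, and the query-refinement step is one common variant of memory-network addressing.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def multi_hop_read(query, memory, hops=3):
    """Attend over memory slots (embeddings of subgraph triples)
    for several hops, refining the query with the attention-weighted
    read at each hop, as in multi-hop memory networks."""
    q = query
    for _ in range(hops):
        scores = memory @ q      # similarity of the query to each slot
        attn = softmax(scores)   # attention distribution over slots
        read = attn @ memory     # attention-weighted sum of slots
        q = q + read             # fuse retrieved evidence into the query
    return q, attn

# Toy example: 5 subgraph triples embedded in 8 dimensions, plus a
# query built from the to-be-completed triple (random stand-ins here).
rng = np.random.default_rng(0)
memory = rng.normal(size=(5, 8))
query = rng.normal(size=8)
out, attn = multi_hop_read(query, memory)
```

In this sketch the final `out` vector would be scored against candidate entities, and `attn` indicates which subgraph triples the model attended to on the last hop.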



Acknowledgements

This work was supported by the National Key R&D Program of China (2018YFB1402900) and the National Natural Science Foundation of China (61966020).

Author information

Corresponding author

Correspondence to Zhengtao Yu.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Wang, H., Jiang, S. & Yu, Z. Modeling of complex internal logic for knowledge base completion. Appl Intell 50, 3336–3349 (2020). https://doi.org/10.1007/s10489-020-01734-z
