A cache-based method to improve query performance of linked Open Data cloud
Computing (IF 3.3), Pub Date: 2020-05-14, DOI: 10.1007/s00607-020-00814-9
Usman Akhtar , Anita Sant’Anna , Chang-Ho Jihn , Muhammad Asif Razzaq , Jaehun Bang , Sungyoung Lee

The proliferation of semantic big data has resulted in a large amount of content published over the Linked Open Data (LOD) cloud. Semantic Web applications consume these data by issuing SPARQL queries. Owing to the inherently distributed nature of LOD, querying the LOD cloud suffers from high search latency and a lack of tools for connecting SPARQL endpoints. In this paper, we propose an Adaptive Cache Replacement (ACR) strategy that aims to accelerate the overall query processing of the LOD cloud. ACR alleviates the burden on SPARQL endpoints by identifying subsequent queries learned from clients' historical query patterns and caching the results of these queries. For cache replacement, we propose an exponential smoothing forecasting method that evicts the less valuable cache content. In the experimental study, we evaluate the performance of the proposed approach in terms of hit rates, query time and overhead. The proposed approach outperforms existing state-of-the-art approaches, increasing hit rates by 5.46% and reducing query times by 6.34%.
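The sketch below is not the paper's implementation; it only illustrates the general idea of a query-result cache whose eviction policy ranks entries by an exponentially smoothed access score, S_t = α·x_t + (1 − α)·S_{t−1}. The class names, parameters, and α value are assumptions made for illustration.

```python
# Minimal sketch (assumed, not from the paper): a SPARQL query-result cache
# that evicts the entry with the lowest exponentially smoothed access score.
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class CacheEntry:
    result: object          # cached SPARQL result set
    score: float = 0.0      # smoothed estimate of future access frequency


class SmoothedQueryCache:
    def __init__(self, capacity: int = 100, alpha: float = 0.3):
        self.capacity = capacity
        self.alpha = alpha                     # smoothing factor in (0, 1]
        self.entries: Dict[str, CacheEntry] = {}

    def _smooth(self, accessed_key: Optional[str]) -> None:
        # One smoothing step per lookup: the accessed entry observes x_t = 1,
        # every other entry observes x_t = 0, so rarely used results fade away.
        for key, entry in self.entries.items():
            x_t = 1.0 if key == accessed_key else 0.0
            entry.score = self.alpha * x_t + (1 - self.alpha) * entry.score

    def get(self, query: str):
        entry = self.entries.get(query)
        self._smooth(query if entry else None)
        return entry.result if entry else None

    def put(self, query: str, result: object) -> None:
        if query not in self.entries and len(self.entries) >= self.capacity:
            # Replace the least valuable cache content: the lowest score.
            victim = min(self.entries, key=lambda k: self.entries[k].score)
            del self.entries[victim]
        self.entries[query] = CacheEntry(result=result, score=self.alpha)
```

In such a setup a client would call get(query) before dispatching the query to a SPARQL endpoint and put(query, result) on a miss; the prefetching of subsequent queries learned from historical query patterns described in the abstract is not modeled here.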
