An Improved Method to Deploy Cache Servers in Software Defined Network-based Information Centric Networking for Big Data
Journal of Grid Computing (IF 3.6), Pub Date: 2019-02-09, DOI: 10.1007/s10723-019-09477-z
Jan Badshah, Muhammad Kamran, Nadir Shah, Shahbaz Akhtar Abid

Big data involves generating, storing, transferring, and analyzing large volumes of data to extract meaningful information. Information-centric networking (ICN) is an infrastructure that transfers big data between nodes and provides in-network caching. For the software defined network (SDN)-based ICN approach, a recently proposed centralized cache-server architecture deploys a single cache server based on the path-stretch value. Despite the advantages of a centralized cache in ICN, a single cache server has scalability issues in a large network. Moreover, that approach considers only the path-stretch ratio for cache-server deployment, so traffic cannot be reduced optimally. To resolve these issues, we propose deploying multiple cache servers based on the joint optimization of four parameters: (i) closeness centrality; (ii) betweenness centrality; (iii) path-stretch values; and (iv) load balancing in the network. Our approach first computes the number of cache servers and their locations offline from the network topology information, and the cache servers are then placed at the corresponding locations in the network. Next, the controller installs flow rules at the switches so that each switch forwards content requests to one of its nearest cache servers. When a content request arrives, if the requested content is stored at the cache server, the content is delivered to the requesting node; otherwise, the request is forwarded to the controller. The controller then computes a path such that the content provider first sends the content to the cache server, and a copy of the content is forwarded from there to the requesting node. Simulation results confirm that the proposed approach outperforms an existing state-of-the-art approach in terms of traffic overhead and average end-to-end delay.
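The abstract does not give the authors' joint-optimization formula, but the two graph metrics it names, closeness centrality and betweenness centrality, are standard and can be computed from the topology alone. The sketch below is a minimal illustration, not the paper's algorithm: it ranks candidate cache-server locations by a simple weighted sum of the two (normalized) centralities on an unweighted, connected topology; the function names and the `alpha` weight are hypothetical.

```python
from collections import deque

def bfs_counts(graph, src):
    """BFS from src on an unweighted graph (dict: node -> neighbor list).
    Returns (dist, sigma): hop distance and number of shortest paths to each node."""
    dist, sigma = {src: 0}, {src: 1}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in graph[u]:
            if w not in dist:           # first time w is discovered
                dist[w] = dist[u] + 1
                sigma[w] = 0
                queue.append(w)
            if dist[w] == dist[u] + 1:  # u is a predecessor on a shortest path
                sigma[w] += sigma[u]
    return dist, sigma

def rank_cache_locations(graph, k, alpha=0.5):
    """Rank nodes by alpha*closeness + (1-alpha)*betweenness (both normalized)
    and return the top-k candidates for cache-server placement.
    Assumes a connected, undirected topology; 'alpha' is an illustrative weight."""
    nodes = list(graph)
    n = len(nodes)
    info = {v: bfs_counts(graph, v) for v in nodes}  # all-pairs BFS
    scores = {}
    for v in nodes:
        dist_v, sigma_v = info[v]
        # Closeness: (n-1) / sum of distances to all other nodes.
        closeness = (n - 1) / sum(dist_v[u] for u in nodes if u != v)
        # Betweenness: fraction of shortest s-t paths passing through v.
        bet = 0.0
        for s in nodes:
            if s == v:
                continue
            dist_s, sigma_s = info[s]
            for t in nodes:
                if t in (v, s):
                    continue
                if dist_s[v] + dist_v[t] == dist_s[t]:  # v lies on a shortest s-t path
                    bet += sigma_s[v] * sigma_v[t] / sigma_s[t]
        bet /= (n - 1) * (n - 2)  # normalize over ordered (s, t) pairs
        scores[v] = alpha * closeness + (1 - alpha) * bet
    return sorted(nodes, key=lambda v: scores[v], reverse=True)[:k]
```

On a five-node path topology a-b-c-d-e, for example, the middle switch c scores highest on both metrics, so `rank_cache_locations(g, 1)` selects it. The paper's actual optimization additionally folds in path-stretch values and load balancing, which this sketch omits.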

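The request-handling flow described in the abstract (cache hit delivers directly; cache miss goes to the controller, which routes the content from the provider through the cache server, where a copy is kept) can be sketched as a toy simulation. The class names and structure here are hypothetical, assumed for illustration; a real deployment would implement this with SDN flow rules rather than in-process calls.

```python
class Controller:
    """Hypothetical SDN controller: on a cache miss it computes the path
    so the provider sends the content via the cache server."""
    def __init__(self, provider_store):
        self.provider_store = provider_store  # content held at the provider

    def fetch_from_provider(self, content_id):
        return self.provider_store[content_id]

class CacheServer:
    """Hypothetical in-network cache server."""
    def __init__(self):
        self.store = {}

    def handle(self, content_id, controller):
        # Hit: the request matches stored content; deliver it directly.
        if content_id in self.store:
            return self.store[content_id], "cache"
        # Miss: forward to the controller; the content travels from the
        # provider through this cache server, which keeps a copy before
        # forwarding it on to the requesting node.
        content = controller.fetch_from_provider(content_id)
        self.store[content_id] = content
        return content, "provider"
```

With `ctrl = Controller({"chunk1": "DATA"})` and `cs = CacheServer()`, the first `cs.handle("chunk1", ctrl)` is served via the provider and the second via the cache, mirroring the hit/miss behavior the abstract describes.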