Current journal: IEEE Transactions on Cloud Computing
  • A Logic-Based Benders Decomposition Approach for the VNF Assignment Problem
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-06-02
    Sara Ayoubi; Samir Sebbah; Chadi Assi

    Middleboxes have gained popularity due to the significant value-added services these network elements provide to traffic flows, in terms of enhanced performance and security. Policy-aware traffic flows usually need to traverse multiple middleboxes in a predefined order to satisfy their associated policy, a requirement known as Service Function Chaining. Typically, middleboxes run on specialized hardware, which makes them highly inflexible in handling the unpredictable and fluctuating nature of traffic, and incurs significant capital and operational expenditures (CapEx and OpEx) to provision, accommodate, and maintain them. Network Function Virtualization (NFV) is a promising technology with the potential to tackle the aforementioned limitations of hardware middleboxes. Yet, NFV is still in its infancy, and there exist several technical challenges that need to be addressed, among which the Virtual Network Function (VNF) assignment problem tops the list. The VNF assignment problem stems from the newly gained flexibility to instantiate VNFs on demand anywhere in the network. Network providers must therefore decide on the optimal placement of VNF instances that maximizes the number of admitted policy-aware traffic flows across their network. Existing work consists of Integer Linear Program (ILP) models, which scale poorly, or heuristic-based approaches with no guarantee on the quality of the obtained solutions. This work proposes a novel Logic-Based Benders Decomposition (LBBD) approach to solve the VNF assignment problem. It decomposes the problem into a master problem and a subproblem; at every iteration, constructive Benders cuts are introduced to the master to tighten its search space. We compared the LBBD approach against the ILP and a heuristic method, and show that our approach reaches the optimal solution (unlike heuristic-based methods) up to 700 times faster than the ILP.
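    As a toy illustration of the LBBD loop described above, the sketch below admits flows subject to node capacities, adding a no-good Benders cut whenever the subproblem finds the master's admitted set unpackable. All instance numbers are hypothetical, and brute-force search stands in for the paper's ILP master.

```python
from itertools import combinations, permutations

# Toy instance: two NFV node capacities and four flow demands (hypothetical).
CAPACITIES = [5, 4]
DEMANDS = [3, 3, 2, 4]

def subproblem_feasible(admitted):
    """Brute-force feasibility check: first-fit over every ordering of the
    admitted flows (adequate at this toy scale)."""
    for order in permutations(admitted):
        free = CAPACITIES[:]
        ok = True
        for f in order:
            # place the flow on the first node with enough residual capacity
            for i, c in enumerate(free):
                if DEMANDS[f] <= c:
                    free[i] -= DEMANDS[f]
                    break
            else:
                ok = False
                break
        if ok:
            return True
    return False

def lbbd_solve():
    """Logic-based Benders loop: the master proposes a maximum-cardinality
    admitted set violating no cut; the subproblem either certifies
    feasibility or yields a no-good cut excluding that set."""
    cuts = []  # each cut: a set of flows that cannot all be admitted together
    flows = range(len(DEMANDS))
    while True:
        best = ()
        for r in range(len(DEMANDS), -1, -1):
            for cand in combinations(flows, r):
                if not any(cut <= set(cand) for cut in cuts):
                    best = cand
                    break
            if best:
                break
        if subproblem_feasible(best):
            return set(best), len(cuts)
        cuts.append(set(best))  # Benders cut: never propose this set again

solution, n_cuts = lbbd_solve()
```

    On this instance the first master proposal (all four flows) is cut off, and the loop terminates with three admitted flows after a single cut.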

    Updated: 2020-01-04
  • An Adaptive and Fuzzy Resource Management Approach in Cloud Computing
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-08-03
    Parinaz Haratian; Faramarz Safi-Esfahani; Leili Salimian; Akbar Nabiollahi

    Resource management plays a key role in the cloud-computing environment, in which applications face dynamically changing workloads. Such dynamic and unpredictable workloads can lead to performance degradation of applications, especially when demands for resources increase. To meet Quality of Service (QoS) requirements based on Service Level Agreements (SLAs), resource management strategies must be taken into account. The question addressed in this research is how to reduce the number of SLA violations by optimizing the resources allocated to users, applying an autonomous control cycle and a fuzzy knowledge management system. In this paper, an adaptive and fuzzy resource management framework (AFRM) is proposed in which the latest resource values of each virtual machine are gathered through environment sensors and sent to a fuzzy controller. AFRM then analyzes the received information to decide how to reallocate the resources in each iteration of a self-adaptive control cycle. All the membership functions and rules are dynamically updated based on workload changes to satisfy QoS requirements. Two sets of experiments were conducted on the storage resource to compare AFRM with rule-based and static-fuzzy approaches in terms of RAE, utility, number of SLA violations, and cost, under HIGH, MEDIUM, MEDIUM-HIGH, and LOW workloads. The results reveal that AFRM outperforms the rule-based and static-fuzzy approaches in several respects.
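    The fuzzy-controller step can be sketched as follows. The membership breakpoints, rule base, and scaling actions below are illustrative assumptions, not the paper's tuned (or dynamically updated) values.

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets over utilization (percent) -- illustrative breakpoints.
LOW    = lambda u: tri(u, -1, 0, 50)
MEDIUM = lambda u: tri(u, 30, 55, 80)
HIGH   = lambda u: tri(u, 60, 100, 101)

# Rule base: utilization level -> fractional allocation change (assumed).
ACTIONS = {"shrink": -0.2, "keep": 0.0, "grow": +0.3}

def fuzzy_rescale(utilization):
    """Mamdani-style inference with weighted-average defuzzification:
    returns the fractional change to apply to the VM's allocation."""
    firing = {
        "shrink": LOW(utilization),
        "keep":   MEDIUM(utilization),
        "grow":   HIGH(utilization),
    }
    total = sum(firing.values())
    if total == 0:
        return 0.0
    return sum(w * ACTIONS[a] for a, w in firing.items()) / total
```

    For example, a nearly idle VM (utilization 0) gets the full shrink action, a heavily loaded one (90) the full grow action, and mid-range utilizations blend the rules.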

  • Application-Aware Big Data Deduplication in Cloud Environment
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-05-31
    Yinjin Fu; Nong Xiao; Hong Jiang; Guyu Hu; Weiwei Chen

    Deduplication has become a widely deployed technology in cloud data centers to improve the efficiency of IT resources. However, traditional techniques face a great challenge in big data deduplication: striking a sensible tradeoff between the conflicting goals of scalable deduplication throughput and a high duplicate elimination ratio. We propose AppDedupe, an application-aware, scalable, inline distributed deduplication framework for cloud environments, to meet this challenge by exploiting application awareness, data similarity, and locality to optimize distributed deduplication with inter-node two-tiered data routing and intra-node application-aware deduplication. It first dispenses application data at the file level with application-aware routing to preserve application locality, then assigns similar application data to the same storage node at the super-chunk granularity using a handprinting-based stateful data routing scheme to maintain high global deduplication efficiency, meanwhile balancing the workload across nodes. AppDedupe builds application-aware similarity indices with super-chunk handprints to speed up the intra-node deduplication process. Our experimental evaluation of AppDedupe against the state of the art, driven by real-world datasets, demonstrates that AppDedupe achieves the highest global deduplication efficiency: higher deduplication effectiveness than the high-overhead, poorly scalable traditional scheme, at an overhead only slightly higher than that of the scalable but low-duplicate-elimination-ratio approaches.
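    A minimal sketch of handprinting-based stateful routing, assuming fixed-size chunking, SHA-1 fingerprints, and a handprint of the three smallest fingerprints -- all simplifications of the paper's design (which uses content-defined chunking and a two-tiered scheme):

```python
import hashlib

NUM_NODES = 4
HANDPRINT_SIZE = 3  # the k smallest chunk fingerprints represent a super-chunk

def chunk_fingerprints(super_chunk, chunk_size=4):
    """Fixed-size chunking + SHA-1 fingerprints (content-defined chunking
    would be used in practice)."""
    return [hashlib.sha1(super_chunk[i:i + chunk_size]).hexdigest()
            for i in range(0, len(super_chunk), chunk_size)]

def handprint(super_chunk):
    """The handprint is the k smallest fingerprints -- a min-hash style
    sample that similar super-chunks are likely to share."""
    return sorted(chunk_fingerprints(super_chunk))[:HANDPRINT_SIZE]

def route(super_chunk, node_indices):
    """Stateful routing: send the super-chunk to the node whose similarity
    index overlaps its handprint the most; fall back to hashing the
    representative fingerprint to spread previously unseen data."""
    hp = set(handprint(super_chunk))
    overlap = [len(hp & node_indices[n]) for n in range(NUM_NODES)]
    best = max(overlap)
    if best > 0:
        target = overlap.index(best)
    else:
        target = int(min(hp), 16) % NUM_NODES   # stateless fallback
    node_indices[target] |= hp  # update that node's similarity index
    return target

indices = [set() for _ in range(NUM_NODES)]
a = route(b"the quick brown fox jumps", indices)
b = route(b"the quick brown fox jumped", indices)  # similar data
```

    Because the two payloads share most chunks, their handprints overlap and the second super-chunk follows the first to the same node, which is what preserves deduplication locality.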

  • Cloud Resource Management for Analyzing Big Real-Time Visual Data from Network Cameras
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-06-27
    Ahmed S. Kaseb; Anup Mohan; Youngsol Koh; Yung-Hsiang Lu

    Thousands of network cameras stream real-time visual data for different environments, such as streets, shopping malls, and natural scenes. The big visual data from these cameras can be useful for many applications, but analyzing the large quantities of data requires significant amounts of resources. These resources can be obtained from cloud vendors offering cloud instances (referred to as instances in this paper) with different capabilities and hourly costs. Managing cloud resources to reduce the cost of analyzing the big real-time visual data from network cameras while meeting the performance requirements is a challenging problem, because it is affected by many factors related to the analysis programs, the cameras, and the instances. This paper proposes a cloud resource manager (referred to as the manager in this paper) that aims at solving this problem. The manager estimates the resource requirements of analyzing the data stream from each camera, formulates the resource allocation problem as a 2D vector bin packing problem, and solves it using a heuristic algorithm. The resource manager monitors the allocated instances; it allocates more instances if needed and deallocates existing instances to reduce the cost if possible. The experiments show that the resource manager is able to reduce the overall cost by up to 60 percent. The experiments use multiple analysis programs, such as moving-object detection, feature tracking, and human detection. One experiment analyzes more than 97 million images (3.3 TB of data) from 5,310 cameras simultaneously over 24 hours using 15 Amazon EC2 instances costing $188.
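    The 2D vector bin packing step can be illustrated with a first-fit-decreasing heuristic -- one common choice; the paper's own heuristic may differ, and the demand vectors below are hypothetical:

```python
# Each camera stream has a 2-D resource demand vector (CPU, memory),
# normalized so one instance has capacity (1.0, 1.0).  Numbers illustrative.
DEMANDS = [(0.5, 0.2), (0.4, 0.7), (0.3, 0.3), (0.6, 0.1), (0.2, 0.6)]

def pack_streams(demands):
    """First-fit decreasing for 2-D vector bin packing: sort streams by
    total demand, place each on the first instance where BOTH dimensions
    still fit, otherwise open (pay for) a new instance."""
    order = sorted(range(len(demands)), key=lambda i: -sum(demands[i]))
    instances = []   # residual capacity per allocated instance
    assignment = {}  # stream index -> instance index
    for i in order:
        cpu, mem = demands[i]
        for j, (rc, rm) in enumerate(instances):
            if cpu <= rc and mem <= rm:
                instances[j] = (rc - cpu, rm - mem)
                assignment[i] = j
                break
        else:
            instances.append((1.0 - cpu, 1.0 - mem))
            assignment[i] = len(instances) - 1
    return assignment, len(instances)

assignment, n_instances = pack_streams(DEMANDS)
```

    Here five streams fit on three instances; the CPU column alone sums to 2.0, so two instances is a lower bound and the heuristic is one instance above it.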

  • Cloudy Knapsack Algorithm for Offloading Tasks from Large Scale Distributed Applications
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-07-04
    Harisankar Haridas; Sriram Kailasam; Janakiram Dharanipragada

    Offloading tasks to the cloud is one approach to improving the performance of distributed applications. When monetary constraints are present, selecting which tasks to offload becomes important to ensure efficient use of the available cloud resources. This is a challenge for large-scale distributed applications, as offloading decisions have to be made locally at the nodes without an exact global view of the system. In our earlier work, we modeled this challenge as a new class of formal problems, termed the cloudy knapsack problem, and derived theoretical bounds on the solution space for worst-case task sequences. In many real-world applications, the task sequences have inherent patterns which can be exploited to improve offloading. In this work, we propose a cloud offloading algorithm that exploits these patterns through offline and online learning. Experimental evaluation using realistic datasets for a cloud-assisted peer-to-peer search case study reveals that the proposed solution performs close to a hypothetical omniscient offloading algorithm with a complete view of the system. The proposed cloud-assisted peer-to-peer search engine provides a cost-effective approach to addressing the scalability bottleneck in peer-to-peer search engines.
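    A sketch of threshold-based online offloading in this setting: the exponential threshold is the standard online-knapsack rule, standing in here for the paper's learning-based algorithm, and the value-density bounds are assumed.

```python
import math

class CloudyKnapsackOffloader:
    """Threshold-based online offloading: a task is offloaded only if its
    value density beats a threshold that grows exponentially as the
    monetary budget fills up (the classic online-knapsack trick; the
    paper's algorithm additionally learns from workload patterns)."""

    def __init__(self, budget, density_lo=1.0, density_hi=8.0):
        self.budget = budget
        self.spent = 0.0
        self.lo, self.hi = density_lo, density_hi  # assumed density range

    def threshold(self):
        # Rises from lo/e (budget empty) to hi (budget exhausted).
        z = self.spent / self.budget
        return (self.lo / math.e) * (self.hi * math.e / self.lo) ** z

    def offer(self, value, cost):
        """Local decision for one arriving task; True means offload it."""
        if cost <= 0 or self.spent + cost > self.budget:
            return False
        if value / cost >= self.threshold():
            self.spent += cost
            return True
        return False

off = CloudyKnapsackOffloader(budget=10.0)
decisions = [off.offer(v, c) for v, c in [(8, 1), (1, 4), (12, 3), (2, 4)]]
```

    High-density tasks are accepted early while low-density ones are rejected, and the bar keeps rising as the budget is consumed.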

  • Dynamic Resource Provisioning for Energy Efficient Cloud Radio Access Networks
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-06-15
    Nuo Yu; Zhaohui Song; Hongwei Du; Hejiao Huang; Xiaohua Jia

    Energy saving is critical for cloud radio access networks (C-RANs), which are composed of many radio access units (RAUs) and energy-intensive computing units (CUs) that host numerous virtual machines (VMs). We attempt to minimize the energy consumption of C-RANs by leveraging RAU sleep scheduling and VM consolidation strategies. We formulate the energy saving problem in C-RANs as a joint resource provisioning (JRP) problem over the RAUs and CUs. Since active RAU selection is coupled with VM consolidation, the JRP problem resembles a special bin-packing problem in which the number of items and the sizes of items are correlated and both adjustable. No existing method solves this problem directly. Therefore, we propose an efficient low-complexity algorithm along with a context-aware strategy to dynamically select active RAUs and consolidate VMs onto CUs. In this way, we can significantly reduce the energy consumption of C-RANs without incurring excessive overhead from VM migrations. Our proposed scheme is practical for large networks, and its effectiveness is demonstrated by simulation results.

  • Error Concealment for Cloud-Based and Scalable Video Coding of HD Videos
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-08-01
    Muhammad Usman; Xiangjian He; Kin-Man Lam; Min Xu; Syed Mohsin Matloob Bokhari; Jinjun Chen; Mian Ahmad Jan

    The encoding of HD videos faces two challenges: the need for strong processing power and for large storage space. One time-efficient solution to these challenges is to use a cloud platform together with a scalable video coding technique to generate multiple video streams with varying bit-rates. Packet loss is very common during the transmission of these video streams over the Internet and becomes another challenge. One solution is to retransmit lost video packets, but this creates end-to-end delay. It is therefore desirable to deal with packet loss at the user's side. In this paper, we present a novel system that encodes and stores videos using the Amazon cloud computing platform, and recovers lost video frames on the user side using a new Error Concealment (EC) technique. To efficiently utilize the computation power of a user's mobile device, the EC is performed as a multi-threaded, parallel process. The simulation results clearly show that, on average, our proposed EC technique outperforms the traditional Block Matching Algorithm (BMA) and Frame Copy (FC) techniques.

  • Facilitating Secure and Efficient Spatial Query Processing on the Cloud
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-07-11
    Ayesha M. Talha; Ibrahim Kamel; Zaher Al Aghbari

    Database outsourcing is a common cloud computing paradigm that allows data owners to take advantage of on-demand storage and computational resources. The main challenge is maintaining data confidentiality with respect to untrusted parties, i.e., the cloud service provider, while providing relevant query results in real time to authenticated users. Existing approaches either compromise the confidentiality of the data or suffer from high communication cost between the server and the user. To overcome this problem, we propose a dual transformation and encryption scheme for spatial data, where encrypted queries are executed entirely at the service provider on the encrypted database and encrypted results are returned to the user. The user issues encrypted spatial range queries to the service provider and then uses the encryption key to decrypt the query response. This strikes a balance between data security and efficient query response, as the queries are processed on encrypted data at the cloud server. Moreover, we compare our scheme with existing approaches on large datasets and show that it reduces the average query communication cost between the authorized user and the service provider, as only a single round of communication is required.

  • Fast Phrase Search for Encrypted Cloud Storage
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-05-29
    Hoi Ting Poon; Ali Miri

    Cloud computing has generated much interest in the research community in recent years for its many advantages, but has also raised security and privacy concerns. The storage and access of confidential documents have been identified as one of the central problems in the area. In particular, many researchers have investigated solutions for searching over encrypted documents stored on remote cloud servers. While many schemes have been proposed to perform conjunctive keyword search, less attention has been paid to more specialized search techniques. In this paper, we present a phrase search technique based on Bloom filters that is significantly faster than existing solutions, with similar or better storage and communication cost. Our technique uses a series of n-gram filters to support the functionality. The scheme exhibits a trade-off between storage and false positive rate, and can be adapted to defend against inclusion-relation attacks. A design approach based on an application's target false positive rate is also described.
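    A minimal sketch of the n-gram Bloom-filter idea; the parameters m, k, n and the hashing scheme are illustrative, and the paper's construction additionally protects the index cryptographically:

```python
import hashlib

class NGramBloomFilter:
    """Per-document Bloom filter over word n-grams: phrase search then
    needs only membership tests on the index, never the plaintext."""

    def __init__(self, n=2, m=1024, k=3):
        self.n, self.m, self.k = n, m, k
        self.bits = bytearray(m // 8)

    def _positions(self, gram):
        # k independent bit positions derived from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}|{gram}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def _grams(self, words):
        return [" ".join(words[i:i + self.n])
                for i in range(len(words) - self.n + 1)]

    def index(self, text):
        for gram in self._grams(text.split()):
            for p in self._positions(gram):
                self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, phrase):
        """True if every n-gram of the phrase is present: false positives
        are possible (tunable via m and k), false negatives are not."""
        return all(
            all((self.bits[p // 8] >> (p % 8)) & 1 for p in self._positions(g))
            for g in self._grams(phrase.split()))

bf = NGramBloomFilter()
bf.index("encrypted cloud storage supports fast phrase search")
```

    Any phrase actually in the document is always reported present; the target false positive rate drives the choice of m and k, mirroring the design approach the abstract mentions.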

  • Fault Tolerant Stencil Computation on Cloud-Based GPU Spot Instances
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-05-31
    Jun Zhou; Yan Zhang; Weng-Fai Wong

    This paper describes a fault-tolerant framework for distributed stencil computation on cloud-based GPU clusters. It uses pipelining to overlap data movement with computation in the halo region, and parallelizes data movement within the GPUs. Instead of running stencil codes on traditional clusters and supercomputers, the computation is performed on the Amazon Web Services GPU cloud, utilizing its spot instances to improve cost-efficiency. The implementation is based on a low-cost fault-tolerance mechanism that handles the possible termination of spot instances. Coupled with a price bidding module, our stencil framework optimizes not only for performance but also for cost. Experimental results show that our framework outperforms state-of-the-art solutions, achieving a peak of 25 TFLOPS for 2-D decomposition running on 512 nodes. We also show that the use of spot instances yields good cost-efficiency, increasing the average TFLOPS/USD from 132 to 360.

  • Game-Theory Based Power and Spectrum Virtualization for Optimizing Spectrum Efficiency in Mobile Cloud-Computing Wireless Networks
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-07-13
    Xi Zhang; Qixuan Zhu

    Mobile cloud computing is a wireless network environment that focuses on sharing publicly available wireless resources. Wireless network virtualization provides an efficient technique to implement mobile cloud computing by enabling multiple virtual wireless networks to be mapped onto one physical substrate wireless network. One of the most important challenges of this technique lies in how to efficiently allocate the resources of physical wireless networks to multiple virtual wireless network users. To overcome these difficulties, in this paper we propose a set of novel game-theory based schemes to solve the wireless resource allocation problem in terms of transmit power and wireless spectrum. We formulate this problem as a gaming process in which each mobile user bids for the limited wireless resources of physical substrate wireless networks, competing with the other mobile-user players bidding for the same resources. Under our proposed game-theory framework, we develop three types of wireless resource request strategies, a price-based strategy, a correlation-based strategy, and a water-filling-based strategy, to allocate wireless resources under three different gaming mechanisms. Extensive simulation results validate and evaluate our proposed schemes.

  • Hierarchical Stochastic Models for Performance, Availability, and Power Consumption Analysis of IaaS Clouds
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-10-09
    Ehsan Ataie; Reza Entezari-Maleki; Leila Rashidi; Kishor S. Trivedi; Danilo Ardagna; Ali Movaghar

    Infrastructure as a Service (IaaS) is one of the most significant and fastest growing fields in cloud computing. To efficiently use the resources of an IaaS cloud, several important factors such as performance, availability, and power consumption need to be considered and evaluated carefully. Evaluation of these metrics is essential for cost-benefit prediction and quantification of different strategies which can be applied to cloud management. In this paper, analytical models based on Stochastic Reward Nets (SRNs) are proposed to model and evaluate an IaaS cloud system at different levels. To achieve this, an SRN is initially presented to model a group of physical machines which are controlled by a management layer. Afterwards, the SRN models presented for the groups of physical machines in the first stage are combined to capture a monolithic model representing an entire IaaS cloud. Since the monolithic model does not scale well for large cloud systems, two approximate SRN models using folding and fixed-point iteration techniques are proposed to evaluate the performance, availability, and power consumption of the IaaS cloud. The existence of a solution for the fixed-point approximate model is proved using Brouwer's fixed-point theorem. A validation of the proposed monolithic and approximate models against both an ad-hoc discrete-event simulator developed in Java and the CloudSim framework is presented. The analytic-numeric results obtained from applying the proposed models to sample cloud systems show that the errors introduced by approximate models are insignificant while an improvement of several orders of magnitude in the state space reduction of the monolithic model is obtained.
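    The fixed-point technique used to couple the approximate submodels can be sketched generically. The coupled map below is a made-up two-dimensional example, not the paper's SRN; it is chosen only because it maps [0,1]^2 into itself, so Brouwer's theorem guarantees a fixed point, mirroring the existence argument above.

```python
def fixed_point(f, x0, tol=1e-9, max_iter=1000):
    """Generic fixed-point iteration x <- f(x), as used to couple
    interacting submodels: each submodel is solved with the other's
    output held fixed, until the exchanged quantities stop changing."""
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if all(abs(a - b) < tol for a, b in zip(nxt, x)):
            return nxt
        x = nxt
    raise RuntimeError("no convergence")

# Toy coupling (illustrative only): submodel A's output depends on B's
# and vice versa; f maps [0,1]^2 into itself.
def coupled(x):
    u_a, u_b = x
    return (0.8 / (1 + u_b), 0.6 / (1 + u_a))

u = fixed_point(coupled, (0.0, 0.0))
```

    At convergence the returned pair satisfies both coupling equations simultaneously, which is exactly the consistency condition the approximate SRN model iterates toward.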

  • Learn to Play Maximum Revenue Auction
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-06-05
    Xiaotie Deng; Tao Xiao; Keyu Zhu

    Auctions for allocating resources and determining prices have become widely applied to services over the Internet, cloud computing, and the Internet of Things in recent years. Very often, such auctions are conducted multiple times. They may be expected to gradually reveal participants' true value distributions, which would eventually make it possible to fully apply Myerson's celebrated optimal auction to extract the maximum revenue over all truthful protocols. There is, however, a subtlety in this reasoning, as we face a problem of exploration versus exploitation: the task of learning the distribution and the task of applying the learned knowledge to revenue maximization. In this work, we take a first step toward understanding which economic settings make this double task possible, exactly or approximately. The question opens up greater challenges in the wider areas where auctions are conducted repeatedly with a possibility of improved revenue in the dynamic process, most interestingly in auctioning cloud resources.
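    The exploitation half of the double task can be sketched as empirical reserve-price selection: estimate the value distribution from past bids, then pick the reserve r maximizing r(1 - F(r)), which for a single bidder is Myerson's optimal posted price. This is a simplification of the repeated-auction setting, and the bid values are hypothetical.

```python
def best_reserve(observed_values):
    """Pick the reserve price maximizing empirical expected revenue
    r * Pr[value >= r], searching over the observed values themselves
    (one of them is always an optimizer of this step function)."""
    n = len(observed_values)

    def revenue(r):
        sell_prob = sum(v >= r for v in observed_values) / n
        return r * sell_prob

    return max(sorted(set(observed_values)), key=revenue)

# Values learned (hypothetically) from past auction rounds:
vals = [0.2, 0.9, 0.4, 0.8, 0.5, 0.95, 0.3, 0.7]
r = best_reserve(vals)
```

    The exploration side, deciding how many rounds to spend learning the distribution before committing to such a reserve, is precisely the tension the abstract describes.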

  • Robust Performance-Based Resource Provisioning Using a Steady-State Model for Multi-Objective Stochastic Programming
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2016-09-12
    Kyle M. Tarplee; Anthony A. Maciejewski; Howard Jay Siegel

    Cloud computing has enabled entirely new business models for high-performance computing. Having a dedicated local high-performance computer is still an option for some, but more are turning to cloud computing resources to fulfill their high-performance computing needs. With cloud computing it is possible to tailor your computing infrastructure to perform best for your particular type of workload by selecting the correct number of machines of each type. This paper presents an efficient algorithm to find the best set of computing resources to allocate to the workload. This research is applicable to users provisioning cloud computing resources and to data center owners making purchasing decisions about physical hardware. Studies have shown that cloud computing machines have measurable variability in their performance. Some of the causes of performance variability include small changes in architecture, location within the datacenter, and neighboring applications consuming shared network resources. The proposed algorithm models the uncertainty in the computing resources and the variability in the tasks in a many-task computing environment to find a robust number of machines of each type necessary to process the workload. In addition, reward rate, cost, failure rate, and power consumption can be optimized, as desired, to compute Pareto fronts.
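    The final Pareto-front step can be sketched directly. The candidate machine mixes and the two objectives below (cost, makespan) are hypothetical stand-ins for the paper's reward rate, cost, failure rate, and power consumption.

```python
def pareto_front(points):
    """Return the non-dominated subset, minimizing every objective:
    a point is dominated if some other point is no worse in all
    objectives and strictly better in at least one."""
    front = []
    for p in points:
        dominated = any(
            all(qi <= pi for qi, pi in zip(q, p)) and
            any(qi < pi for qi, pi in zip(q, p))
            for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (cost $/h, makespan h) for candidate machine mixes.
CANDIDATES = [(10, 5.0), (12, 3.5), (15, 3.6), (18, 2.0), (11, 5.0)]
front = pareto_front(CANDIDATES)
```

    Presenting the whole front, rather than a single optimum, lets the provisioner trade one objective against another explicitly, which is the point of the multi-objective formulation above.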

  • Service Chaining for Hybrid Network Function
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-06-29
    Huawei Huang; Song Guo; Jinsong Wu; Jie Li

    In Service-Function-Chaining (SFC) enabled networks, various sophisticated policy-aware network functions, such as intrusion detection, access control, and unified threat management, can be realized in either physical middleboxes or virtualized network function (VNF) appliances. In this paper, we study service chaining in hybrid SFC clouds, where physical appliances and VNF appliances provide services collaboratively. In such hybrid SFC networks, the challenge is how to efficiently steer the service chains of traffic demands while concurrently matching their individual policy chains, such that a utility associated with the total admitted traffic rate and the induced overheads is maximized. We find that this problem has not been well solved so far. To this end, we devise a Markov Approximation (MA) based algorithm and prove its approximation property. Extensive evaluation results show that the proposed MA algorithm yields near-optimal solutions and significantly outperforms other benchmark algorithms.

  • Shaving Data Center Power Demand Peaks Through Energy Storage and Workload Shifting Control
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-08-25
    Mehiar Dabbagh; Bechir Hamdaoui; Ammar Rayes; Mohsen Guizani

    This paper proposes efficient strategies that shave Data Centers' (DCs') monthly peak power demand with the aim of reducing the DCs' monthly expenses. Specifically, the proposed strategies decide: i) when and how much of the DC's workload should be delayed, given that the workload is made up of multiple classes where each class has a certain delay tolerance and delay cost, and ii) when and how much energy should be charged into or discharged from the DCs' batteries. We first consider the case where the DC's power demands throughout the whole billing cycle are known and present an optimal peak shaving control strategy for it. We then relax this assumption and propose an efficient control strategy for the case where (accurate or noisy) predictions of the DC's power demands are known only for short durations into the future. Several comparative studies based on real traces from a Google DC are conducted to validate the proposed techniques.
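    For the known-demand case, a battery-only peak cap can be found by binary search over a battery simulation -- a simple stand-in for the paper's optimal control strategy, which also shifts workload; the demand trace and battery parameters are illustrative.

```python
def feasible(demand, cap, capacity, max_rate):
    """Simulate the battery against a candidate peak cap: discharge
    whenever grid demand would exceed the cap, recharge (staying under
    the cap) otherwise.  Starts with a full battery."""
    soc = capacity
    for d in demand:
        if d > cap:
            need = d - cap
            if need > max_rate or need > soc:
                return False
            soc -= need
        else:
            soc += min(cap - d, max_rate, capacity - soc)
    return True

def min_peak(demand, capacity, max_rate, tol=1e-3):
    """Binary-search the smallest monthly peak the battery can enforce,
    assuming the whole billing cycle's demand is known in advance
    (the paper's first setting)."""
    lo, hi = 0.0, max(demand)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(demand, mid, capacity, max_rate):
            hi = mid
        else:
            lo = mid
    return hi

demand = [50, 60, 95, 40, 90, 55]  # kW per slot (illustrative)
peak = min_peak(demand, capacity=30, max_rate=25)
```

    On this trace the raw peak of 95 kW is shaved to 70 kW: the battery's 25 kW discharge limit binds on the 95 kW slot, and the cheap slot in between lets it recharge before the second spike.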

  • Stackelberg Game for Energy-Aware Resource Allocation to Sustain Data Centers Using RES
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-06-15
    Gagangeet Singh Aujla; Mukesh Singh; Neeraj Kumar; Albert Y. Zomaya

    The Smart Grid (SG) has emerged as one of the most powerful technologies of the modern era for efficient energy management, integrating information and communication technologies (ICT) into the existing infrastructure. Among various ICT, cloud computing (CC) has emerged as a leading service paradigm, using geo-distributed data centers (DCs) to serve the requests of users in the SG. In recent times, with an increase in service requests by end users for various resources, there has been an exponential increase in the number of servers deployed at various DCs. With this increase in size, the energy consumption of DCs has grown manyfold, which increases their overall operational cost. Efficient resource allocation among these geo-distributed DCs can play a vital role in reducing their energy consumption. Moreover, with the increase in harmful emissions, the use of renewable energy sources (RES) can benefit DCs, the SG, and society at large. With these points in focus, this paper proposes an energy-aware resource allocation scheme based on a Stackelberg game for energy management in cloud-based DCs. A cloud controller receives the requests of users and distributes them among geo-distributed DCs in such a way that the energy consumption of DCs is sustained by RES. If the energy consumption of a DC cannot be sustained by RES, the energy is drawn from the grid, and the requests of users are routed to the DC that is offered the lowest energy tariff by the grid. For this purpose, a Stackelberg game for energy trading is also proposed to select the grid offering the lowest energy tariff to DCs. The proposed scheme is evaluated on various performance metrics using Google workload traces. The results obtained show the effectiveness of the proposed scheme.

  • Towards Declarative and Data-Centric Virtual Machine Image Management in IaaS Clouds
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-07-17
    Haikun Liu; Bingsheng He; Xiaofei Liao; Hai Jin

    Virtual machine image (VMI) management has become one of the key infrastructure components in Infrastructure as a Service (IaaS) cloud systems. Any "good" VMI management system should offer flexible and efficient VMI services to cloud users, and scalable, easy-to-maintain, and efficient VMI management to cloud providers. While there have been a number of systems and optimizations for VMI management, this paper investigates a declarative and data-centric approach that serves both cloud users and providers. Specifically, viewing VMI management as a data-intensive application, we propose Hemera, a novel VMI management system prototype built on relational database systems. Hemera adopts a data-centric design in which a VMI is modeled as structured data, so the key operations of VMI management can be recast naturally as SQL programs. Moreover, Hemera embraces a series of automatic optimization opportunities rooted in database technology. We have developed a system prototype based on MySQL. Our experimental results show the efficiency and feasibility of our declarative and data-centric approach to VMI management.
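    A hypothetical slice of the data-centric design, using SQLite in place of MySQL: VMIs and their installed packages become relations, so a management task such as "which images need an openssl patch?" is a plain SQL query. The schema and data are invented for illustration.

```python
import sqlite3

# Model VMI metadata as structured data: one relation for images, one for
# their installed packages (a hypothetical subset of such a schema).
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE vmi (id INTEGER PRIMARY KEY, name TEXT, os TEXT, size_gb REAL);
CREATE TABLE vmi_package (vmi_id INTEGER REFERENCES vmi(id), pkg TEXT, ver TEXT);
""")
db.executemany("INSERT INTO vmi VALUES (?, ?, ?, ?)", [
    (1, "web-base", "ubuntu-16.04", 2.1),
    (2, "db-base",  "ubuntu-16.04", 3.4),
    (3, "hpc-node", "centos-7",     4.0),
])
db.executemany("INSERT INTO vmi_package VALUES (?, ?, ?)", [
    (1, "nginx", "1.10"), (1, "openssl", "1.0.2"),
    (2, "mysql", "5.7"),  (2, "openssl", "1.0.2"),
    (3, "openmpi", "2.0"),
])

# A management operation becomes declarative SQL: which images carry an
# openssl older than 1.0.3 and must be patched?
rows = db.execute("""
    SELECT v.name FROM vmi v
    JOIN vmi_package p ON p.vmi_id = v.id
    WHERE p.pkg = 'openssl' AND p.ver < '1.0.3'
    ORDER BY v.name
""").fetchall()
```

    The query optimizer, indexing, and transactional maintenance then come for free from the database engine, which is the "automatic optimization opportunities" point above.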

  • Two-Aggregator Topology Optimization Using Multiple Paths in Data Center Networks
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-06-06
    Soham Das; Sartaj Sahni

    In this paper we focus on the problem of data aggregation using two aggregators in a data center network, where the source racks are allowed to split their data and send it to the aggregators over multiple paths. We show that the problem of finding a topology that minimizes aggregation time is NP-hard for k = 2, 3, 4, where k is the maximum degree of each ToR switch (the number of uplinks in a top-of-rack switch) in the data center. We also show that the problem becomes solvable in polynomial time for k = 5 and 6, and conjecture the same for k > 6. Experimental results show that, for k = 6, our topology optimization algorithm reduces the aggregation time by as much as 83.32 percent and total network traffic by as much as 99.5 percent relative to the torus heuristic proposed in [1], demonstrating the significant performance improvement achieved by the proposed algorithm.

  • Video Stream Analysis in Clouds: An Object Detection and Classification Framework for High Performance Video Analytics
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2016-01-13
    Ashiq Anjum; Tariq Abdullah; M. Fahim Tariq; Yusuf Baltaci; Nick Antonopoulos

    Object detection and classification are the basic tasks in video analytics and the starting point for other complex applications. Traditional video analytics approaches are manual and time consuming, and subjective due to the involvement of the human factor. We present a cloud-based video analytics framework for scalable and robust analysis of video streams. The framework empowers an operator by automating the object detection and classification process on recorded video streams. An operator only specifies the analysis criteria and the duration of the video streams to analyse. The streams are then fetched from cloud storage, decoded, and analysed on the cloud. The framework offloads compute-intensive parts of the analysis to GPU-powered servers in the cloud. Vehicle and face detection are presented as two case studies for evaluating the framework, with one month of data and a 15-node cloud. The framework reliably performed object detection and classification on the data, comprising 21,600 video streams and 175 GB in size, in 6.52 hours. The GPU-enabled deployment of the framework took 3 hours to analyse the same number of video streams, making it at least twice as fast as the cloud deployment without GPUs.

  • Virtual Machine Migration Planning in Software-Defined Networks
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-05-31
    Huandong Wang; Yong Li; Ying Zhang; Depeng Jin

    Live migration is a key technique for virtual machine (VM) management in data center networks: it enables flexibility in resource optimization, fault tolerance, and load balancing. Despite its usefulness, live migration introduces performance degradation during the migration process, so there have been continuous efforts to reduce migration time in order to minimize the impact. From the network's perspective, the migration time is determined by the amount of data to be migrated and the available bandwidth for the transfer. In this paper, we examine how to schedule migrations and allocate network resources when multiple VMs need to be migrated at the same time. We consider the problem in the Software-Defined Network (SDN) context since it provides flexible control over routing. More specifically, we propose a method that computes the optimal migration sequence and the network bandwidth used for each migration. We formulate this problem as a mixed integer program, which is NP-hard. To make it computationally feasible for large-scale data centers, we propose an approximation scheme via linear approximation plus a fully polynomial time approximation, and obtain its theoretical performance bound and computational complexity. Through extensive simulations, we demonstrate that our fully polynomial time approximation (FPTA) algorithm performs well compared with the optimal solution of the primary programming problem and with two state-of-the-art algorithms: it approaches the optimal solution to within 10 percent with much less computation time, and reduces the total migration time and service downtime by up to 40 and 20 percent, respectively, compared with the state-of-the-art algorithms.

  • When I/O Interrupt Becomes System Bottleneck: Efficiency and Scalability Enhancement for SR-IOV Network Virtualization
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2017-06-06
    Jian Li; Shuai Xue; Wang Zhang; Ruhui Ma; Zhengwei Qi; Haibing Guan

    High-performance network interface cards (NICs) have become essential networking devices in commercial cloud computing environments, so efficient and scalable I/O virtualization is one of the primary challenges on virtualized cloud computing platforms. Single Root I/O Virtualization (SR-IOV) is a network interface technology that eliminates the overhead of redundant data copies and virtual network switches through direct I/O, in order to achieve near-native I/O performance. However, SR-IOV still suffers from serious problems due to the high overhead of processing excessive network interrupts as well as the unpredictable and bursty traffic load of high-speed networking connections. In this paper, the defects of SR-IOV with 10 Gigabit Ethernet networking are studied first and two major challenges are identified: an excessive interrupt rate and a single-threaded virtual network driver. Second, two interrupt rate control optimization schemes, called coarse-grained interrupt rate (CGR) control and adaptive interrupt rate (AIR) control, are proposed. The proposed control schemes significantly reduce the overhead and enhance SR-IOV performance compared with the traditional driver with a fixed interrupt throttle rate (FIR). In addition, a multi-threaded VF driver (MTVD) is proposed that allows SR-IOV VFs to leverage multi-core resources in order to achieve high scalability. Finally, these optimizations are implemented and detailed performance evaluations are conducted. The results show that CGR and AIR improve throughput by 2.26x and 2.97x while saving 1.23 and 1.44 CPU cores, respectively. The MTVD achieves 2.03x performance with an additional 1.46 cores of CPU consumption for VMs using the SR-IOV driver.
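    The AIR idea can be sketched as tiered interrupt-throttle selection with hysteresis; the tier table and margin below are invented for illustration, not the driver's actual values.

```python
# Hypothetical throttle tiers: (packet-rate threshold pkts/s, interrupts/s).
TIERS = [(0, 1000), (50_000, 4000), (200_000, 8000), (500_000, 20000)]

def air_next_rate(observed_pps, current_rate, margin=0.2):
    """Adaptive interrupt-rate control sketch: pick the highest tier whose
    threshold the observed packet rate meets; step DOWN only when traffic
    is clearly below the current tier (hysteresis against bursty load,
    which would otherwise make the rate oscillate)."""
    target = max(rate for thresh, rate in TIERS if observed_pps >= thresh)
    if target < current_rate:
        cur_thresh = max(t for t, r in TIERS if r == current_rate)
        if observed_pps >= (1 - margin) * cur_thresh:
            return current_rate  # near the boundary: hold the tier
    return target

next_rate = air_next_rate(190_000, current_rate=8000)
```

    A brief dip to 190k pps keeps the 8000/s tier, a sustained drop to 100k pps steps down to 4000/s, and a surge past 500k pps steps up immediately -- trading interrupt overhead against latency per window.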

  • 2019 Reviewers List*
    IEEE Trans. Cloud Comput. (IF 5.967) Pub Date : 2019-12-04

    Presents the list of reviewers who contributed to this publication in 2019.

Contents have been reproduced by permission of the publishers.