Channel Precoding Based Message Authentication in Wireless Networks: Challenges and Solutions IEEE Netw. (IF 7.23) Pub Date : 2018-05-17 Dajiang Chen; Ning Zhang; Rongxing Lu; Nan Cheng; Kuan Zhang; Zhiguang Qin
Due to the broadcast nature of the wireless medium, message impersonation and substitution attacks can be launched at low cost by an adversary in wireless communication networks. As an ingenious solution, physical-layer-based message authentication can achieve perfect security by leveraging channel precoding techniques to meet high-level security requirements. In this article, we focus on channel-precoding-based message authentication (CPC-based authentication) over a binary-input wiretap channel (BIWC). Specifically, message authentication with physical layer techniques is first reviewed. Then, a CPC-based authentication framework and its security requirements are presented. Based on the proposed framework, an authentication scheme with polar codes over a binary symmetric wiretap channel (BSWC) is developed. Moreover, a case study is provided as an example of message authentication with polar codes over a BSWC. Finally, open research topics essential to CPC-based authentication are discussed.
Virtual Local-Hub: A Service Platform on the Edge of Networks for Wearable Devices IEEE Netw. (IF 7.23) Pub Date : 2018-05-17 Hsin-Peng Lin; Yuan-Yao Shih; Ai-Chun Pang; Chun-Ting Chou
With the rapid development of sensing and communication capabilities, wearable technology, one of the most significant trends in the mobile computing evolution, has been changing our daily life. Wearable devices generally require a powerful local hub to replenish computing capacity for advanced features, but carrying the local hub is inconvenient in many situations. Although more and more wearable devices are equipped with a WiFi/cellular interface that lets them exchange data with the local hub through the Internet, this results in long response times and functional limitations. To overcome the restrictions of a physical local hub, we propose a virtual local-hub (VLH) solution, which utilizes nearby network equipment (e.g., a WiFi hotspot or cellular base station) as the local hub. In this article, we first describe the operating mechanism of a local hub and give an overview of the VLH system. Then we describe the system design of VLH, including the container-based virtualization and the modified microservice architecture that enables remote function module sharing in the fog-computing environment. We then propose an algorithm to deal with function module allocation and sharing decisions. Finally, we demonstrate and verify the effectiveness and practicality of VLH via both simulations under a large-scale network setting and a real-world prototype implementation.
Socially-Motivated Cooperative Mobile Edge Computing IEEE Netw. (IF 7.23) Pub Date : 2018-05-17 Xu Chen; Zhi Zhou; Weigang Wu; Di Wu; Junshan Zhang
In this article, we propose a novel paradigm of socially-motivated cooperative mobile edge computing, where the social tie structure among mobile and wearable device users is leveraged to achieve effective and trustworthy cooperation for collaborative computation task execution. We envision that a combination of local device computation and networked resource sharing empowers the devices with multiple flexible task execution approaches, including local mobile execution, D2D offloaded execution, direct cloud offloaded execution, and D2D-assisted cloud offloaded execution. Specifically, we propose a system model for cooperative mobile edge computing in which a device social graph model captures the social relationships among devices. We then devise a socially-aware, bipartite-matching-based cooperative task offloading algorithm by integrating the social tie structure into the device computation and network resource sharing process. We evaluate the performance of socially-motivated cooperative mobile edge computing using both Erdos-Renyi and real-trace-based social graphs, which corroborates the superior performance of the proposed socially-aware mechanism.
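The socially-aware matching step described in the abstract can be sketched as a minimum-cost assignment of devices to helpers, where a social tie discounts the offloading cost. The cost figures, tie strengths, and the linear discount model below are illustrative assumptions, not the paper's actual formulation:

```python
from itertools import permutations

# Hypothetical per-device offloading costs (rows: devices, cols: helpers)
# and pairwise social tie strengths in [0, 1]; stronger tie -> more trust.
base_cost = [
    [5.0, 3.0, 4.0],
    [2.0, 6.0, 3.0],
    [4.0, 2.0, 5.0],
]
social_tie = [
    [0.9, 0.1, 0.4],
    [0.2, 0.8, 0.3],
    [0.5, 0.6, 0.9],
]

def socially_aware_cost(d, h):
    # Simple assumed model: discount offloading cost by the social tie.
    return base_cost[d][h] * (1.0 - 0.5 * social_tie[d][h])

def best_matching(n):
    # Brute-force minimum-cost perfect matching (fine for tiny n; the paper
    # would use a proper bipartite matching algorithm at scale).
    best_cost, best_assign = float("inf"), None
    for perm in permutations(range(n)):
        total = sum(socially_aware_cost(d, perm[d]) for d in range(n))
        if total < best_cost:
            best_cost, best_assign = total, perm
    return best_assign, best_cost

assign, cost = best_matching(3)
print(assign, round(cost, 2))   # -> (2, 0, 1) 6.4
```

Note how device 0 is matched to helper 2 rather than its cheapest raw option, because the tie-adjusted costs of the other devices make that assignment globally cheaper.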
Measuring Instability of Mobility Management in Cellular Networks IEEE Netw. (IF 7.23) Pub Date : 2018-05-17 Xiaohui Zhao; Hanyang Ma; Yuan Jin; Jianguo Yao
Communication in cellular networks is based on serving cells that provide the basic network service. In the real world, serving cells overlap, which means the number of serving cells covering one position is usually more than one. Recently, the instability of mobility management in cellular networks has been studied to monitor and analyze the handoff process in mobile devices. However, the handoff process is actually driven by base stations rather than mobile devices. Hence, it is of great importance to measure the handoff process of mobility management from the base station side. In this article, we present a series of experiments performed using data obtained from mobile network operators. The contributions of this study are three-fold. We reproduce the handoff process and handoff loop at both the mobile device level and the base station level, and confirm the existence of handoff loops through measurements from the base station side. Through large-scale measurements, we discover that only a small fraction of serving cells is involved in the handoff process, and in most cases the number of candidate serving cells is much smaller than the number of cells covering a given position; in particular, when a handoff loop occurs, the number of candidate serving cells is quite small, contrary to our initial assumption. We confirm that handoff loops often occur indoors or when the mobile device communicates frequently with the base station. Finally, we present several comprehensive facts about the handoff process and handoff loop, and provide suggestions that can be used to improve the quality of service of cellular networks.
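As a toy illustration of the handoff-loop phenomenon measured in the article, a loop can be spotted in a time-ordered log of serving-cell IDs by looking for a short pattern repeated back-to-back. The cell IDs and the detection heuristic are hypothetical, not the authors' measurement method:

```python
def find_handoff_loop(cell_log, min_repeats=2):
    # Look for the shortest cell-ID pattern repeated back-to-back at least
    # min_repeats times; such ping-ponging among a small set of cells is
    # the signature of a handoff loop.
    n = len(cell_log)
    for size in range(2, n // min_repeats + 1):
        for start in range(n - size * min_repeats + 1):
            pattern = cell_log[start:start + size]
            window = cell_log[start:start + size * min_repeats]
            if window == pattern * min_repeats:
                return pattern
    return None

# Illustrative log: the device ping-pongs between cells 17 and 23.
log = [101, 102, 17, 23, 17, 23, 17, 23, 55]
print(find_handoff_loop(log))   # -> [17, 23]
```

Consistent with the article's finding, the loop here involves only a small set of candidate cells even though many cells may cover the position.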
A Blockchain-Based Privacy-Preserving Payment Mechanism for Vehicle-to-Grid Networks IEEE Netw. (IF 7.23) Pub Date : 2018-04-16 Feng Gao; Liehuang Zhu; Meng Shen; Kashif Sharif; Zhiguo Wan; Kui Ren
As an integral part of V2G networks, EVs receive electricity from not only the grid but also other EVs and may frequently feed the power back to the grid. Payment records in V2G networks are useful for extracting user behaviors and facilitating decision-making for optimized power supply, scheduling, pricing, and consumption. Sharing payment and user information, however, raises serious privacy concerns in addition to the existing challenge of secure and reliable transaction processing. In this article, we propose a blockchain-based privacy preserving payment mechanism for V2G networks, which enables data sharing while securing sensitive user information. The mechanism introduces a registration and data maintenance process that is based on a blockchain technique, which ensures the anonymity of user payment data while enabling payment auditing by privileged users. Our design is implemented based on Hyperledger to carefully evaluate its feasibility and effectiveness.
Hierarchical CORD for NFV Datacenters: Resource Allocation with Cost-Latency Tradeoff IEEE Netw. (IF 7.23) Pub Date : 2018-04-13 Ying-Dar Lin; Chih-Chiang Wang; Chien-Ying Huang; Yuan-Cheng Lai
Network Function Virtualization (NFV) allows datacenters to consolidate network appliance functions onto commodity servers and devices. Currently, telecommunication carriers are re-architecting their central offices as NFV datacenters that, along with SDN, help network service providers speed up deployment and reduce cost. However, it is still unclear how a carrier network should organize its NFV datacenter resources into a coherent service architecture to support global network functional demands. This work proposes a hierarchical NFV/SDN-integrated architecture in which datacenters are organized into a multi-tree overlay network to collaboratively process user traffic flows. The proposed architecture steers traffic to a nearby datacenter to optimize user-perceived service response time. Our experimental results reveal that the 3-tier architecture is favored over others, as it strikes a good balance between centralized processing and edge computing, and that resource allocation should be decided based on the traffic's source-destination attributes. Our results indicate that when most traffic flows within the same edge datacenter, the strategy whereby resources are concentrated at the carrier's bottom-tier datacenters is preferred, but when most traffic flows across a carrier network or across different carrier networks, a uniform distribution over the datacenters or over the tiers, respectively, stands out.
REMT: A Real-Time End-to-End Media Data Transmission Mechanism in UAV-Aided Networks IEEE Netw. (IF 7.23) Pub Date : 2018-04-13 Jiajie Zhang; Jian Weng; Weiqi Luo; Jia-Nan Liu; Anjia Yang; Jiancheng Lin; Zhijun Zhang; Hailiang Li
In recent years, UAVs have received much attention in both the military and civilian fields for monitoring, emergency relief, and search tasks. UAVs are considered a new technology for obtaining data at high altitudes when equipped with sensors. This technology is vital to the success of next-generation monitoring systems, which are expected to be reliable, real-time, efficient, and secure. However, due to the bandwidth limitations of UAV-aided networks, the size of the transmitted data is a crucial factor for real-time media data transmission requirements, especially for national defense. To address this issue, in this article we propose a real-time end-to-end media data transmission mechanism based on an unsupervised deep neural network. The proposed mechanism transforms the media data captured by UAVs into latent codes of a predefined constant size and transmits the codes to the ground console station (GCS) for reconstruction. We use a real-world dataset containing millions of samples to evaluate the proposed mechanism, which achieves a high transmission ratio, low resource usage, and good visual quality.
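The constant-size latent-code idea can be sketched with a linear encoder/decoder pair standing in for the paper's trained unsupervised deep network; the dimensions, the random projection encoder, and the least-squares reconstruction are illustrative assumptions only:

```python
import numpy as np

# UAV side: encode a frame into a fixed-size latent code; GCS side:
# reconstruct a least-squares approximation from that code alone.
rng = np.random.default_rng(0)
frame = rng.random(64)                     # flattened frame (illustrative)

latent_dim = 8                             # predefined constant code size
W = rng.standard_normal((latent_dim, frame.size)) / np.sqrt(frame.size)

code = W @ frame                           # transmitted over the UAV link
recon = np.linalg.pinv(W) @ code           # reconstruction at the GCS

print(code.shape, frame.size // code.size) # constant code size, 8x smaller
```

Whatever the frame content, only `latent_dim` values cross the bandwidth-limited link, which is the property the mechanism relies on for real-time transmission.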
Distributed and Efficient Object Detection in Edge Computing: Challenges and Solutions IEEE Netw. (IF 7.23) Pub Date : 2018-04-13 Ju Ren; Yundi Guo; Deyu Zhang; Qingqing Liu; Yaoxue Zhang
In the past decade, it has been a significant trend for surveillance applications to send huge amounts of real-time media data to the cloud via dedicated high-speed fiber networks. However, with the explosion of mobile devices and services in the era of the Internet-of-Things, it becomes more promising to undertake real-time data processing at the edge of the network in a distributed way. Moreover, to reduce the investment in network deployment, media communication in surveillance applications is gradually shifting to wireless. This consequently poses great challenges in detecting objects at the edge in a distributed and communication-efficient way. In this article, we propose an edge-computing-based object detection architecture to achieve distributed and efficient object detection via wireless communications for real-time surveillance applications. We first introduce the proposed architecture as well as its potential benefits, and identify the associated challenges in its implementation. Then, a case study is presented to show our preliminary solution, followed by performance evaluation results. Finally, future research directions are pointed out for further studies.
Achieving Ultra-Reliable Low-Latency Communications: Challenges and Envisioned System Enhancements IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Guillermo Pocovi; Hamidreza Shariatmadari; Gilberto Berardinelli; Klaus Pedersen; Jens Steiner; Zexian Li
URLLC has the potential to enable a new range of applications and services, from wireless control and automation in industrial environments to self-driving vehicles. 5G wireless systems face several challenges in supporting URLLC. Some of these challenges, particularly in the downlink direction, relate to the reliability requirements for both data and control channels, the need for accurate and flexible link adaptation, reducing the processing time of data retransmissions, and the multiplexing of URLLC with other services. This article considers these challenges and proposes state-of-the-art solutions covering different aspects of the radio interface. In addition, system-level simulation results are presented, showing how the proposed techniques can work in harmony to fulfill the ambitious latency and reliability requirements of upcoming URLLC applications.
Wireless Access for Ultra-Reliable Low-Latency Communication: Principles and Building Blocks IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Petar Popovski; Jimmy J. Nielsen; Cedomir Stefanovic; Elisabeth de Carvalho; Erik Strom; Kasper F. Trillingsgaard; Alexandru-Sabin Bana; Dong Min Kim; Radoslaw Kotaba; Jihong Park; Rene B. Sorensen
URLLC is an important new feature brought by 5G, with the potential to support a vast set of applications that rely on mission-critical links. In this article, we first discuss the principles for supporting URLLC from the perspective of the traditional assumptions and models applied in communication/information theory. We then discuss how these principles are applied in various elements of system design, such as the use of various diversity sources, the design of packets, and access protocols. The important message is that there is a need to optimize the transmission of signaling information, as well as a need for lean use of the various sources of diversity.
5G Radio Network Design for Ultra-Reliable Low-Latency Communication IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Joachim Sachs; Gustav Wikstrom; Torsten Dudda; Robert Baldemair; Kittipong Kittichokechai
5G is currently being standardized and addresses, among other things, new URLLC services. These are characterized by the need to support reliable communication, where successful data transmission can be guaranteed within low latency bounds, like 1 ms, at a low failure rate. This article describes the functionality of both the NR and LTE radio interfaces to provide URLLC services. Achievable latency bounds are evaluated, and the expected spectral efficiency is demonstrated. It is shown that both NR and LTE can fulfill the ITU 5G requirements on URLLC; however, this comes at the cost of reduced spectral efficiency compared to mobile broadband services without latency or reliability constraints. Still, the impact on the overall network performance is expected to be moderate.
Packet Duplication for URLLC in 5G: Architectural Enhancements and Performance Analysis IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Jaya Rao; Sophie Vrzic
URLLC use cases demand a new paradigm in cellular networks to contend with extreme requirements and complex trade-offs. In general, it is exceptionally challenging and, in terms of resource usage, prohibitively expensive to satisfy the URLLC requirements using the existing approaches in LTE. To address these challenges, 3GPP has recently agreed to adopt packet duplication (PD) of both user plane (UP) and control plane (CP) packets as a fundamental technique in 5G NR. This article investigates the theoretical framework behind PD and provides a primer on the recent enhancements applied in the NR RAN architecture for supporting URLLC. It is shown that PD enables jointly satisfying the latency and reliability requirements without increasing the complexity of the RAN. With dynamic control capability, PD can be used not only for URLLC but also to increase transmission robustness during mobility and against radio link failures. The article also provides numerical results comparing the performance of PD in various deployment scenarios. These results reveal that in certain scenarios, performing PD over multiple links requires fewer radio resources than using a single highly reliable link. It is also found that, to improve radio resource utilization while satisfying URLLC requirements, enabling PD is crucial in scenarios such as the cell edge, where the average SNR of the best (primary) link and the variation in SNR between all accessible links are typically low. In essence, the PD technique provides a cost-effective solution for satisfying the URLLC requirements without requiring major modifications to RAN deployments.
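The basic reliability argument behind packet duplication is that, over independent links, a packet is lost only if every duplicated copy fails. The link reliability figures below are illustrative, not taken from the article's simulations:

```python
# Combined reliability of packet duplication over independent links.
def combined_reliability(link_reliabilities):
    loss = 1.0
    for r in link_reliabilities:
        loss *= (1.0 - r)   # all duplicated copies must fail for a loss
    return 1.0 - loss

single_strong = combined_reliability([0.99999])      # one five-nines link
two_moderate  = combined_reliability([0.999, 0.99])  # PD over two weaker links

print(single_strong, two_moderate)
```

With these illustrative numbers, duplicating over two moderately reliable links reaches the same 1 - 10^-5 level as one highly reliable link, mirroring the article's observation that PD over multiple links can be cheaper in radio resources than provisioning a single ultra-reliable link.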
Handover Mechanism in NR for Ultra-Reliable Low-Latency Communications IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Hyun-Seo Park; Yuro Lee; Tae-Joong Kim; Byung-Chul Kim; Jae-Yong Lee
For many URLLC services, mobility is a key requirement together with latency and reliability. 3GPP has defined the target mobility interruption time (MIT) as 0 ms, and a general URLLC reliability requirement of 1 - 10^-5 within a latency of 1 ms for 5G. In this article, we analyze the impact of MIT and the handover failure (HOF) rate on reliability performance. The analysis shows that at 120 km/h, with an MIT of 0 ms, the HOF rate required to achieve 1 - 10^-5 reliability is only 0.52 percent. Therefore, to achieve the reliability required for URLLC, we need to drive not only the MIT but also the HOF rate as close to zero as possible. Hence, we propose conditional make-before-break handover to target zero MIT and zero HOF rate simultaneously. The solution achieves zero MIT by not releasing the connection to the source cell until the first (or some subsequent) downlink reception from the target cell. It achieves a zero HOF rate by receiving the HO Command message while the radio link to the source cell is still stable, and by executing the handover when the connection to the target cell is preferable. Simulation results show that our proposed solution can achieve an almost zero HOF rate even at 120 km/h.
Zero-Zero Mobility: Intra-Frequency Handovers with Zero Interruption and Zero Failures IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Ingo Viering; Henrik Martikainen; Andreas Lobinger; Bernhard Wegmann
Today's intra-frequency hard handovers in LTE suffer from interruption even when they are successful, as well as from the risk of failures. The next generation, New Radio, will introduce stricter requirements that cannot be fulfilled with the traditional hard handover concept: a handover with 0 ms interruption is mandated, and extreme reliability (ultra-reliable low-latency communication services) will not tolerate any mobility failures. Consequently, softer handover concepts in which the UE is multi-connected to a source cell and one or more target cells are already under discussion. This article investigates such a method using the well-known dual connectivity principle and evaluates its performance in terms of robustness/reliability and signaling costs.
Energy Efficiency and Delay in 5G Ultra-Reliable Low-Latency Communications System Architectures IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Amitav Mukherjee
Emerging 5G URLLC wireless systems are characterized by minimal over-the-air latency and stringent decoding error requirements. The low latency requirements can conflict with 5G energy efficiency (EE) design targets. Therefore, this work provides a perspective on various trade-offs between energy efficiency and user plane delay for upcoming URLLC systems. For network infrastructure EE, we propose solutions that optimize base station on-off switching and distributed access network architectures. For URLLC devices, we advocate solutions that optimize the EE of discontinuous reception (DRX), mobility measurements, and the handover process, without compromising on delay.
Relaying-Enabled Ultra-Reliable Low-Latency Communications in 5G IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Yulin Hu; M. Cenk Gursoy; Anke Schmeink
Supporting URLLC has become one of the major considerations in the design of 5G systems. In the literature, cooperative relaying has been shown to be an efficient strategy to improve the reliability of transmissions, support higher rates, and lower latency. However, prior studies have generally demonstrated the performance advantages of relaying under the ideal assumption of arbitrarily reliable communication at Shannon's channel capacity, which is not an accurate performance indicator for relaying in URLLC networks, where transmission must be completed within a strict time span and coding schemes with relatively short blocklengths need to be employed. In this article, we address the performance modeling and optimization of relaying-enabled URLLC networks. We first discuss the accurate performance modeling of relay-enabled 5G networks. In particular, we provide a comprehensive summary of the performance advantage of applying relaying in 5G URLLC transmissions in comparison to direct transmission (without relaying). Both noise-limited and interference-limited scenarios are discussed. Then we present tools for performance optimization utilizing knowledge of either perfect or average channel side information. Finally, we summarize the proposed optimization schemes and discuss potential future research directions.
Enabling Ultra-Reliable and Low-Latency Communications through Unlicensed Spectrum IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Gordon J. Sutton; Jie Zeng; Ren Ping Liu; Wei Ni; Diep N. Nguyen; Beeshanga A. Jayawickrama; Xiaojing Huang; Mehran Abolhasan; Zhang Zhang
In this article, we aim to address the question of how to exploit the unlicensed spectrum to achieve URLLC. Potential URLLC PHY mechanisms are reviewed and then compared via simulations to demonstrate their potential benefits to URLLC. Although a number of important PHY techniques help with URLLC, the PHY layer exhibits an intrinsic trade-off between latency and reliability, posed by limited and unstable wireless channels. We then explore MAC mechanisms and discuss multi-channel strategies for achieving low-latency LTE unlicensed band access. We demonstrate, via simulations, that the periods without access to the unlicensed band can be substantially reduced by maintaining channel access processes on multiple unlicensed channels, choosing the channels intelligently, and implementing RTS/CTS.
Toward Low-Latency and Ultra-Reliable Virtual Reality IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Mohammed S. Elbamby; Cristina Perfecto; Mehdi Bennis; Klaus Doppler
VR is expected to be one of the killer applications in 5G networks. However, many technical bottlenecks and challenges need to be overcome to facilitate its wide adoption. In particular, VR requirements in terms of high throughput, low latency, and reliable communication call for innovative solutions and fundamental research cutting across several disciplines. In view of the above, this article discusses the challenges and enablers for ultra-reliable and low-latency VR. Furthermore, in an interactive VR gaming arcade case study, we show that a smart network design that leverages the use of mmWave communication, edge computing, and proactive caching can achieve the future vision of VR over wireless.
Professional Live Audio Production: A Highly Synchronized Use Case for 5G URLLC Systems IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Jens Pilz; Bernd Holfeld; Axel Schmidt; Konstantin Septinus
The fifth generation of cellular mobile communication networks is on the horizon and aims to integrate new vertical markets. In this article, we discuss professional wireless audio systems used for live productions as a future use case for 5G. Wireless live audio productions require high communication reliability as well as ultra-low signal delay. Furthermore, these services demand strict synchronization of devices to function properly. The need for low latency and for precise time and phase synchronization goes beyond what is currently under discussion in the context of URLLC. We seize on this aspect, discuss how isochronous data transmission can be implemented and integrated into 5G networks, and show similarities with other 5G verticals such as industrial automation.
Machine Learning for Networking: Workflow, Advances and Opportunities IEEE Netw. (IF 7.23) Pub Date : 2017-11-28 Mowei Wang; Yong Cui; Xin Wang; Shihan Xiao; Junchen Jiang
Recently, machine learning has been applied in almost every field to leverage its remarkable power. For a long time, networking and distributed computing systems have been the key infrastructure providing efficient computational resources for machine learning; networking itself can also benefit from this promising technology. This article focuses on the application of machine learning for networking (MLN), which can not only help solve previously intractable networking problems but also stimulate new network applications. We first summarize the basic workflow to explain how to apply machine learning technology in the networking domain. Then we provide a selective survey of the latest representative advances, with explanations of their design principles and benefits. These advances are grouped by network design objective, and detailed information on how they perform in each step of the MLN workflow is presented. Finally, we shed light on new opportunities in networking design and in community building for this new interdiscipline. Our goal is to provide a broad research guideline on networking with machine learning, to help motivate researchers to develop innovative algorithms, standards, and frameworks.
Dense-Device-Enabled Cooperative Networks for Efficient and Secure Transmission IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Shuai Han; Sai Xu; Weixiao Meng; Cheng Li
With the advancements in wireless networks, the number of user devices has increased dramatically, resulting in high device densities. Despite the resulting data traffic deluge, accompanied by severe security threats, wireless networks with high device densities are also breeding grounds for user cooperation. Considering the various challenges and opportunities, this article attempts to enhance user cooperation by utilizing big data generated from wireless networks toward efficient and secure transmission. In particular, big data, viewed as a resource or tool, is employed to find potential connections among user devices, followed by user cluster formation. Preliminary results demonstrate that big-data-driven user cooperation facilitates the utilization of wireless resources and reduces the secrecy loss caused by high device densities. Finally, this article identifies research topics for future studies on big-data-driven user cooperation and secure transmission in wireless networks.
Recent Advances of LTE/WiFi Coexistence in Unlicensed Spectrum IEEE Netw. (IF 7.23) Pub Date : 2017-10-27 Yan Huang; Yongce Chen; Y. Thomas Hou; Wenjing Lou; Jeffrey H. Reed
U-LTE is a new wireless technology currently being developed by industry and academia to offer LTE service in unlicensed spectrum. U-LTE addresses the spectrum shortage faced by 4G LTE cellular networks by allowing them to operate in unlicensed bands. To ensure fair spectrum sharing among different wireless technologies (LTE and WiFi in particular), a number of coexistence mechanisms have been proposed. These mechanisms operate in the time, frequency, or power domain to minimize potential adverse effects from LTE. Based on these mechanisms, a number of U-LTE standards are being developed by industry. In this article, we present recent advances in this exciting area by reviewing the state-of-the-art LTE/WiFi coexistence mechanisms and showing how they are incorporated into industry standards. We also point out several key challenges and open problems for future research.
QUOIN: Incentive Mechanisms for Crowd Sensing Networks IEEE Netw. (IF 7.23) Pub Date : 2018-02-07 Kaoru Ota; Mianxiong Dong; Jinsong Gui; Anfeng Liu
Crowd sensing networks play a critical role in big data generation, with large numbers of mobile devices collecting various kinds of large-volume data. Although deciding which information should be collected is essential to the success of crowd-sensing applications, few research efforts have addressed this issue so far. On the other hand, an efficient incentive mechanism is required to encourage all crowd-sensing participants, including data collectors, service providers, and service consumers, to join the network. In this article, we propose a new incentive mechanism called QUOIN, which simultaneously ensures Quality and Usability Of INformation for crowd-sensing application requirements. We apply a Stackelberg game model to the proposed mechanism to guarantee that each participant achieves a satisfactory level of profit. The performance of QUOIN is evaluated through a case study, and experimental results demonstrate that it is efficient and effective in collecting valuable information for crowd-sensing applications.
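The Stackelberg structure mentioned above can be sketched with a toy leader-follower model; the quadratic cost model, the cost and value parameters, and the grid search are illustrative assumptions, not QUOIN's actual game formulation:

```python
# Leader (platform) posts a per-unit reward R; each follower (data
# collector) best-responds with the effort maximizing R*e - c*e^2,
# which gives e*(R) = R / (2c).
costs = [0.5, 1.0, 2.0]   # hypothetical unit costs of three collectors
value = 4.0               # platform's value per unit of collected data

def follower_effort(R, c):
    return R / (2.0 * c)

def leader_utility(R):
    total = sum(follower_effort(R, c) for c in costs)
    return (value - R) * total

# Grid-search the leader's reward; for this model the optimum is value/2.
best_R = max((round(r * 0.01, 2) for r in range(1, 400)), key=leader_utility)
print(best_R)   # -> 2.0
```

The leader anticipates the followers' best responses when choosing R, which is exactly the sequential structure that lets a Stackelberg equilibrium guarantee each participant a non-negative profit.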
Reliable Formation Protocol for Bluetooth Hybrid Single-hop and Multi-hop Networks IEEE Netw. (IF 7.23) Pub Date : 2017-10-03 Chih-Min Yu; En-Li Lin
There are presently many applications in which a non-uniform distribution of devices needs to be supported by Bluetooth scatternets. In a scenario with one dense zone and multiple sparse zones, the dense area has a high probability of producing a single-hop scenario, since most devices are within radio range of each other, whereas in the sparse areas most devices are out of radio range, producing multi-hop scenarios. Thus, both situations have to be considered when designing a formation algorithm for most real-life situations. This work proposes a reliable formation protocol, called Dual-Ring Tree, for hybrid single-hop/multi-hop instances. To benefit from the advantages of the hybrid scenarios, a dual-ring subnet is presented as a single-hop solution for dense areas, while a tree-shaped subnet is designed as a multi-hop solution for sparse areas. To the best of the authors' knowledge, this is the first algorithm designed to deal with both single-hop and multi-hop scenarios. Computer simulation results suggest that the reliable Dual-Ring Tree outperforms the conventional BlueHRT in terms of routing efficiency and network reliability for Bluetooth multi-hop networks.
Toward Secure Crowd Sensing in Vehicle-to-Everything Networks IEEE Netw. (IF 7.23) Pub Date : 2017-11-28 Kaigui Bian; Gaoxiang Zhang; Lingyang Song
V2X communication facilitates information sharing between a vehicle and the infrastructure, pedestrians, devices, or any other entity that may affect the vehicle, and is regarded as a critical component of 5G that promises to realize the vision of connected and autonomous vehicles. Crowd sensing, a.k.a. collective perception, is one of the essential concepts of V2X networks, where vehicles share information about the environment collected by local perception sensors for improving safety, saving energy, optimizing traffic, and so on. Although the operational aspects of V2X networks are being studied actively, their security aspects have received little attention. In this article, we discuss security issues that may pose serious threats to crowd sensing in V2X networks, focusing on V2X-specific threats that are unique to these networks, e.g., platoon disruption and perception data falsification. We also discuss countermeasures against these threats and the technical challenges that remain.
Device-Free Wireless Sensing: Challenges, Opportunities, and Applications IEEE Netw. (IF 7.23) Pub Date : 2018-02-07 Jie Wang; Qinhua Gao; Miao Pan; Yuguang Fang
Recent developments in DFWS have shown that wireless signals can be utilized not only as a communication medium to transmit data, but also as an enabling tool for non-intrusive, device-free sensing. DFWS has many potential applications, for example, human detection and localization, human activity and gesture recognition, surveillance, elder or patient monitoring, emergency rescue, and so on. With the development and maturity of DFWS, we believe it will eventually empower traditional wireless networks with the augmented ability to sense the surrounding environment, and evolve wireless communication networks into intelligent sensing networks that can sense human-scale context information within the deployment area of the network. The research field of DFWS has emerged rapidly in recent years. This article tries to provide an integrated picture of this emerging field and hopefully inspire future research. Specifically, we present the working principle and system architecture of a DFWS system, review its potential applications, and discuss research challenges and opportunities.
Going Fast and Fair: Latency Optimization for Cloud-Based Service Chains IEEE Netw. (IF 7.23) Pub Date : 2017-11-29 Yuchao Zhang; Ke Xu; Haiyang Wang; Qi Li; Tong Li; Xuan Cao
State-of-the-art microservices have been attracting increasing attention in recent years. A broad spectrum of online interactive applications are now programmed as service chains on the cloud, seeking better system scalability and lower operating costs. Unlike conventional batch jobs, most of these applications consist of multiple stand-alone services that communicate with each other. These step-by-step operations unavoidably introduce higher latency to the delay-sensitive chained services. In this article, we aim to design an optimization approach for reducing the latency of chained services. Specifically, presenting the measurement and analysis of chained services on Baidu's cloud platform, our real-world trace indicates that these chained services suffer from significantly high latency because they are mostly handled by different queues on cloud servers multiple times. This unique feature introduces significant challenges to optimizing a microservice's overall queueing delay. To address this problem, we propose a delay-guaranteed approach to accelerate the overall queueing of chained services while maintaining fairness across all workloads. Our evaluations on Baidu servers show that the proposed design can successfully reduce the latency of chained services by 35 percent with minimal impact on other workloads.
A Machine Learning Framework for Resource Allocation Assisted by Cloud Computing IEEE Netw. (IF 7.23) Pub Date : 2018-04-02 Jun-Bo Wang; Junyuan Wang; Yongpeng Wu; Jin-Yuan Wang; Huiling Zhu; Min Lin; Jiangzhou Wang
Conventionally, resource allocation is formulated as an optimization problem and solved online with instantaneous scenario information. Since most resource allocation problems are not convex, the optimal solutions are very difficult to obtain in real time. Lagrangian relaxation or greedy methods are then often employed, which results in performance loss. Therefore, the conventional methods of resource allocation face great challenges in meeting the ever increasing QoS requirements of users with scarce radio resources. Assisted by cloud computing, a huge amount of historical data on scenarios can be collected for extracting similarities among scenarios using machine learning. Moreover, optimal or near-optimal solutions of historical scenarios can be searched offline and stored in advance. When the measured data of a scenario arrives, the current scenario is compared with historical scenarios to find the most similar one. Then the optimal or near-optimal solution of the most similar historical scenario is adopted to allocate the radio resources for the current scenario. To facilitate the application of this new design philosophy, a machine learning framework is proposed for resource allocation assisted by cloud computing. An example of beam allocation in multi-user massive MIMO systems shows that the proposed machine-learning-based resource allocation scheme outperforms conventional methods.
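The lookup step described above — matching the current scenario against the historical database and reusing the stored solution of the most similar one — can be sketched as a nearest-neighbor search. This is a minimal illustration only; the feature vectors, distance metric, and allocation labels are hypothetical, not taken from the article:

```python
import math

# Hypothetical historical database: scenario feature vectors (e.g., normalized
# load, channel quality, user density) and the near-optimal allocations
# computed offline for each of them. All values are illustrative.
historical_scenarios = [
    (0.2, 0.8, 0.1),
    (0.9, 0.3, 0.5),
    (0.4, 0.6, 0.7),
]
stored_allocations = ["alloc_A", "alloc_B", "alloc_C"]

def allocate(current):
    """Reuse the stored solution of the most similar historical scenario."""
    best = min(range(len(historical_scenarios)),
               key=lambda i: math.dist(historical_scenarios[i], current))
    return stored_allocations[best]

# The measured scenario below is closest to the third stored scenario:
print(allocate((0.35, 0.65, 0.6)))  # → alloc_C
```

In a real deployment the offline search that fills `stored_allocations` is the expensive step; the online step reduces to this cheap similarity lookup, which is what makes real-time operation feasible.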
Energy-Efficient NOMA Enabled Heterogeneous Cloud Radio Access Networks IEEE Netw. (IF 7.23) Pub Date : 2018-03-13 Fuhui Zhou; Yongpeng Wu; Rose Qingyang Hu; Yuhao Wang; Kai-Kit Wong
H-CRANs are envisioned to be promising in 5G wireless networks. H-CRANs enable users to enjoy diverse services with high energy efficiency, high spectral efficiency, and low-cost operation, which are achieved by using cloud computing and virtualization technologies. However, H-CRANs face many technical challenges due to massive user connectivity, increasingly severe spectrum scarcity, and high penetration of energy-constrained devices. These challenges may significantly degrade network performance and user quality of service if not properly tackled. NOMA schemes exploit non-orthogonal resource sharing among multiple users and have received tremendous attention due to their great potential to improve spectral and energy efficiency in 5G networks. This article focuses on the energy efficiency study in a NOMA enabled H-CRAN. Key 5G technologies that can be applied in NOMA H-CRANs to improve energy efficiency are presented. Challenges to implement these technologies and open research issues are discussed. The performance study shows that using NOMA enabled H-CRANs together with the key presented technologies can greatly improve overall system energy efficiency.
Mobile Social Big Data: WeChat Moments Dataset, Network Applications, and Opportunities IEEE Netw. (IF 7.23) Pub Date : 2018-03-14 Yuanxing Zhang; Zhuqi Li; Chengliang Gao; Kaigui Bian; Lingyang Song; Shaoling Dong; Xiaoming Li
In parallel with the rise of various mobile technologies, the MSN service has brought us into an era of mobile social big data, where people are creating new social data every second and everywhere. It is of vital importance for businesses, governments, and institutions to understand how people's behaviors in online cyberspace can affect the underlying computer network, or their offline behaviors at large. To study this problem, we collect a dataset from WeChat Moments, called WeChatNet, which involves 25,133,330 WeChat users with 246,369,415 records of link reposting on their pages. We revisit three network applications based on data analytics over WeChatNet, i.e., information dissemination in mobile cellular networks, network traffic prediction in backbone networks, and mobile population distribution projection. We also discuss potential research opportunities for developing new applications using the released dataset.
Task Assignment in Mobile Crowdsensing: Present and Future Directions IEEE Netw. (IF 7.23) Pub Date : 2018-03-13 Wei Gong; Baoxian Zhang; Cheng Li
Mobile crowdsensing has wide application perspectives and tremendous advantages over traditional sensor networks due to its low cost, extensive coverage, and high sensing accuracy properties. Task assignment is a crucial issue in mobile crowdsensing systems which is intended to achieve a good tradeoff between task quality and task cost. The design of efficient task assignment mechanisms has attracted a lot of attention and much work has been carried out. In this article, we present a comprehensive survey of state-of-the-art task assignment mechanisms in mobile crowdsensing systems. We will first introduce several fundamental issues in task assignment and classify existing mechanisms based on different design criteria. Then we introduce how each of the existing mechanisms works and discuss their merits and deficiencies. Finally, we discuss challenging issues and point out some future directions in this area.
In-Vehicle Networking: Protocols, Challenges, and Solutions IEEE Netw. (IF 7.23) Pub Date : 2018-03-13 Jun Huang; Mingli Zhao; Yide Zhou; Cong-Cong Xing
Fuel utilization efficiency and cost reduction are two major goals in designing in-vehicle networks. Aiming to address these two issues, we investigate in-vehicle networking protocols from both wired and wireless perspectives by first presenting representative solutions in each area, then identifying the challenges to current solutions, and finally advocating the use of automotive Ethernet. We also propose a priority-based scheduler for automotive Ethernet. Our preliminary experiments show that the proposed scheduler is effective and flexible, and thus applicable to next-generation in-vehicle networks. We hope our work will inspire further studies in this area in the years to come.
Querying in Internet of Things with Privacy Preserving: Challenges, Solutions and Opportunities IEEE Netw. (IF 7.23) Pub Date : 2018-03-13 Hao Ren; Hongwei Li; Yuanshun Dai; Kan Yang; Xiaodong Lin
IoT is envisioned as the next stage of the information revolution, enabling various daily applications and providing better service by conducting a deep fusion with cloud and fog computing. As the key mission of most IoT applications, data management, especially its fundamental function of data query, has long been plagued by severe security and privacy problems. Most query service providers, including major ones (e.g., Google, Facebook, and Amazon), suffer from intensive attacks launched by insiders or outsiders. As a consequence, processing various queries in IoT without compromising data and query privacy is an urgent and challenging issue. In this article, we propose a thing-fog-cloud architecture for secure query processing based on well-studied classical paradigms. Following a description of crucial technical challenges in terms of functionality, privacy, and efficiency assurance, we survey the latest milestone approaches and provide insight into the advantages and limitations of each scheme. Based on the recent advances, we also discuss future research opportunities to motivate efforts to develop practical private query protocols in IoT.
Data Security and Privacy in Fog Computing IEEE Netw. (IF 7.23) Pub Date : 2018-03-13 Yunguo Guan; Jun Shao; Guiyi Wei; Mande Xie
Cloud computing is now a popular computing paradigm that can provide end users access to configurable resources on any device, from anywhere, at any time. Cloud computing has developed dramatically during the past years. However, with the development of the Internet of Things, the disadvantages (such as high latency) of cloud computing are gradually being revealed, due to the long distance between the cloud and end users. Fog computing is proposed to solve this problem by extending the cloud to the edge of the network. In particular, fog computing introduces an intermediate layer, called fog, that is designed to process the communication data between the cloud and end users. Hence, fog computing is usually considered an extension of cloud computing. In this article, we discuss the design issues for data security and privacy in fog computing. Specifically, we present the unique data security and privacy design challenges presented by the fog layer and highlight the reasons why the data protection techniques in cloud computing cannot be directly applied in fog computing.
Enabling Collaborative Edge Computing for Software Defined Vehicular Networks IEEE Netw. (IF 7.23) Pub Date : 2018-03-13 Kai Wang; Hao Yin; Wei Quan; Geyong Min
Edge computing has great potential to address the challenges in mobile vehicular networks by transferring partial storage and computing functions to network edges. However, it is still a challenge to efficiently utilize heterogeneous edge computing architectures and deploy large-scale IoV systems. In this article, we focus on the collaborations among different edge computing anchors and propose a novel collaborative vehicular edge computing framework, called CVEC. Specifically, CVEC can support more scalable vehicular services and applications by both horizontal and vertical collaborations. Furthermore, we discuss the architecture, principle, mechanisms, special cases, and potential technical enablers to support the CVEC. Finally, we present some research challenges as well as future research directions.
Efficient Coastal Communications with Sparse Network Coding IEEE Netw. (IF 7.23) Pub Date : 2018-03-13 Ye Li; Jue Wang; Shibing Zhang; Zhihua Bao; Jiangzhou Wang
The demand for wideband communication in the coastal area (i.e., ≤ 100 km from the coastline) has been rapidly increasing in recent years. Compared to the terrestrial scenario, the coastal environment has long-distance and highly dynamic channels, and the communication devices are more strictly constrained by energy supplies. While random linear network coding (RLNC) has the fountain erasure-correction property and is suitable for transmissions over long-distance dynamic channels, it suffers from high coding coefficient delivery cost and decoding complexity. In this article, we look into the application of sparse network coding in coastal communication systems. We identify two typical multicast scenarios that may appear in coastal communications, namely relay-aided multicast and multicast from a shore-based base station with D2D communication enabled among the subscribers. We provide a detailed comparison of existing sparse network coding schemes. Based on that, we demonstrate through simulations that an appropriate choice of sparse codes is critical to meeting the unique requirements of coastal communication systems. We show that batched sparse codes are suitable for relay-aided multicast, and subset-based sparse codes are preferable for D2D-enabled multicast.
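As background for the coding schemes compared above, the basic RLNC operation can be sketched over GF(2), where each coded packet is the XOR of a random subset of source packets identified by a coefficient vector; sparse variants, as discussed in the article, simply bias the coefficients toward zero to cut coefficient-delivery cost and decoding complexity. This is a toy illustration, not the article's scheme; real systems typically operate over larger Galois fields:

```python
import random

def rlnc_encode(packets, seed=None):
    """Produce one RLNC-coded packet over GF(2).

    Each source packet is an integer bit pattern; the coded packet is the
    XOR of the packets whose coefficient is 1. The receiver needs the
    coefficient vector (its delivery cost is what sparsity reduces).
    """
    rng = random.Random(seed)
    coeffs = [rng.randint(0, 1) for _ in packets]
    if not any(coeffs):
        coeffs[rng.randrange(len(coeffs))] = 1  # avoid the useless all-zero combination
    coded = 0
    for c, p in zip(coeffs, packets):
        if c:
            coded ^= p
    return coeffs, coded

# Example: three source packets combined into one coded packet.
coeffs, coded = rlnc_encode([0b1010, 0b0110, 0b1001], seed=42)
print(coeffs, bin(coded))
```

Once a receiver collects enough coded packets with linearly independent coefficient vectors, it recovers the sources by Gaussian elimination; this fountain-like property is what suits RLNC to long-distance, lossy coastal channels.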
Edge computing for the internet of things IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Ju Ren; Yi Pan; Andrzej Goscinski; Raheem A. Beyah
By moving data computation and service supply from the cloud to the edge, edge computing has become a promising solution to address the limitations of cloud computing in supporting delay-sensitive and context-aware services in the Internet of Things (IoT) era. Instead of performing data storage and computing in a cluster of clouds, edge computing emphasizes leveraging the power of local computing and using different types of nearby devices/architectures as edge servers to provide timely and intelligent services. In this way, it brings many advantages, including greatly improved scalability through timely and intelligent service supply, and local distributed computing that makes full use of client computing capabilities to meet the requirements of contextual computing. However, to truly realize edge computing in IoT applications, there are still many challenges that need to be addressed, such as how to efficiently distribute and manage data storage and computing, how to make edge computing collaborate with cloud computing for more scalable services, as well as how to secure and preserve the privacy of the whole system. The purpose of this Special Issue is to investigate the current research trends, and to help stakeholders in industry and academia to better understand challenges, recent advances, and potential research directions in the developing field of edge computing. Through an open call for papers and rigorous peer review, we selected 20 articles from 63 submissions as representatives of ongoing research and development activities. These 20 articles not only encompass a wide range of research topics in edge computing, but also bring some prominent research outcomes in transparent computing. We divide the accepted articles into categories and discuss each briefly.
Security Threats in the Data Plane of Software-Defined Networks IEEE Netw. (IF 7.23) Pub Date : 2018-02-07 Shang Gao; Zecheng Li; Bin Xiao; Guiyi Wei
SDN has enabled extensive network programmability and speedy network innovation by decoupling the control plane from the data plane. However, the separation of the two planes could also be a potential threat to the whole network. Previous studies have pointed out that attackers can launch various attacks from the data plane against SDN, such as DoS attacks, topology poisoning attacks, and side-channel attacks. To address these security issues, we present a comprehensive study of data plane attacks in SDN, and propose FlowKeeper, a common framework for building a robust data plane against different attacks. FlowKeeper enforces port control of the data plane and reduces the workload of the control plane by filtering out illegal packets. Experimental results show that FlowKeeper can efficiently mitigate different kinds of attacks (e.g., DoS and topology poisoning attacks).
Reliable and Opportunistic Transmissions for Underwater Acoustic Networks IEEE Netw. (IF 7.23) Pub Date : 2018-02-07 Weiqi Chen; Hua Yu; Quansheng Guan; Fei Ji; Fangjiong Chen
Acoustic waves propagate slowly in water, and time-varying underwater acoustic channels (UACs) result in inevitably high bit error rates and packet loss rates. The long propagation delay and the error-prone nature of UACs impose challenges on reliable transmissions in underwater acoustic networks (UANs). In this article, we identify the challenges for reliable acoustic transmissions and propose a CL-FEC scheme, which achieves opportunistic transmissions to overcome the frequent transmission failures in UACs. CL-FEC adopts fountain codes as a packet-level FEC and channel codes as a bit-level FEC to realize reliable transmissions over UACs without per-packet feedback. To further improve the throughput of CL-FEC, we formulate the transmissions over UACs as a stochastic throughput optimization problem. A discrete stochastic approximation based algorithm is then developed to achieve the optimal CL-FEC by exploiting online channel estimation and algorithm iterations. Simulation results show the asymptotic convergence and the iterative optimality of the algorithm.
A Robust Dynamic Edge Network Architecture for the Internet of Things IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Beatriz Lorenzo; Juan Garcia-Rois; Xuanheng Li; Javier Gonzalez-Castano; Yuguang Fang
A massive number of devices are expected to fulfill the missions of sensing, processing and control in cyber-physical IoT systems with new applications and connectivity requirements. In this context, scarce spectrum resources must accommodate high traffic volume with stringent requirements of low latency, high reliability, and energy efficiency. Conventional centralized network architectures may not be able to fulfill these requirements due to congestion in backhaul links. This article presents a novel design of an RDNA for IoT that leverages the latest advances of mobile devices (e.g., their capability to act as access points, storing and computing capabilities) to dynamically harvest unused resources and mitigate network congestion. However, traffic dynamics may compromise the availability of terminal access points and channels, and thus network connectivity. The proposed design embraces solutions at the physical, access, networking, application, and business layers to improve network robustness. The high density of mobile devices provides alternatives for close connectivity, reducing interference and latency, and thus increasing reliability and energy efficiency. Moreover, the computing capabilities of mobile devices project smartness onto the edge, which is desirable for autonomous and intelligent decision making. A case study is included to illustrate the performance of RDNA. Potential applications of this architecture in the context of IoT are outlined. Finally, some challenges for future research are presented.
Multiagent-Based Flexible Edge Computing Architecture for IoT IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Takuo Suganuma; Takuma Oide; Shinji Kitagami; Kenji Sugawara; Norio Shiratori
This article presents a proposal for the FLEC architecture, which solves problems resulting from the rigidity of traditional IoT architecture and edge computing. FLEC is a flexible and advanced IoT system model characterized by environment adaptation and user orientation abilities. We utilize COSAP, a system configuration platform based on a multiagent framework, as an implementation procedure for the FLEC architecture. Furthermore, this article presents a case study applying it to a healthcare support system for a sports event with many participants. Finally, we demonstrate the contribution of the proposed architecture to solving problems in edge computing.
Edge Computing Gateway of the Industrial Internet of Things Using Multiple Collaborative Microcontrollers IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Ching-Han Chen; Ming-Yi Lin; Chung-Chi Liu
An Internet of Things gateway serves as a key intermediary between numerous smart things and their corresponding cloud networking servers. A typical conventional gateway system uses a high-level embedded microcontroller (MCU) as its core; that MCU performs low-level perception-layer device network management, upper-level cloud server functions, and remote mobile computation services. However, in edge computing, many factors need to be considered when designing an IoT gateway, such as minimizing the response time, the power consumption, and the bandwidth cost. Regarding system scalability, computational efficiency, and communication efficiency, solutions that use a single MCU cannot deliver IoT functionality such as big data collection, management, real-time communication, expandable peripherals, and various other services. Therefore, this article proposes an innovative multi-MCU system framework combining a field-programmable-gate-array-based hardware bridge and multiple scalable MCUs to realize an edge gateway of a smart sensor fieldbus network. Through distributed and collaborative computing, the multi-MCU edge gateway can efficiently perform fieldbus network management, embedded data collection, and networking communication, thereby considerably reducing the real-time power consumption and improving scalability compared to the existing industrial IoT solutions.
KID Model-Driven Things-Edge-Cloud Computing Paradigm for Traffic Data as a Service IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Bowen Du; Runhe Huang; Zhipu Xie; Jianhua Ma; Weifeng Lv
The development of intelligent traffic systems can benefit from the pervasiveness of IoT technologies. In recent years, increasing numbers of devices have been connected to the IoT, and new kinds of heterogeneous data sources have been generated. This leads to traffic systems that exist in extended dimensions of data space. Although cloud computing can provide essential services that reduce the computational load on IoT devices, it has its limitations: high network bandwidth consumption, high latency, and high privacy risks. To alleviate these problems, edge computing has emerged to reduce the computational load for achieving TDaaS in a dynamic way. However, how to coordinate the work of all edge servers and meet data service requirements remains a key issue. To address this challenge, this article proposes a novel three-level transparency-of-traffic-data service framework, that is, a KID-driven TEC computing paradigm. Its aim is to enable edge servers to work cooperatively with a cloud server. A case study is presented to demonstrate the feasibility of the proposed computing paradigm with its associated mechanisms. The performance of the proposed system is also compared to other methods.
A Cost-Efficient Cloud Gaming System at Scale IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Yiling Xu; Qiu Shen; Xin Li; Zhan Ma
This article proposes a transparent gaming (TG) cloud system that allows users to play any popular high-end desktop game on the fly over the Internet. Toward this goal, we have introduced the TG-SHARE technology to share the underlying hardware capabilities, particularly the GPU and the dedicated compression acceleration unit (XCODER). TG-SHARE utilizes off-the-shelf consumer GPUs without resorting to expensive proprietary GPU virtualization technology (e.g., GRID from NVIDIA). XCODER adapts the compression based on network dynamics, learned gaming behaviors, and hardware resources to significantly reduce bandwidth consumption. Google's WebRTC protocol is integrated to offer real-time interaction and ubiquitous access from heterogeneous devices. Compared to an existing cloud gaming vendor using the GRID technology, our TG-SHARE not only reduces the expense per user (i.e., 75 percent hardware cost reduction and 20-40 percent network cost reduction), but also improves the quality of experience with a higher frame rate (i.e., 2× the frames per second).
Multi-User Computation Offloading in Mobile Edge Computing: A Behavioral Perspective IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Ling Tang; Shibo He
By providing cloud computing capabilities at the network edge in proximity of mobile device users, mobile edge computing offers an effective solution to help mobile devices with computation-intensive and delay-sensitive tasks. In this article, we investigate the multi-user computation offloading problem in an uncertain wireless environment. Most of the existing works assume that mobile device users are rational and make offloading decisions to maximize their expected objective utilities. However, in practice, users tend to have subjective perceptions under uncertainty, such that their behavior deviates considerably from the conventional rationality assumption. Drawing on the framework of prospect theory (PT), we formulate users' decision making of whether to offload or not as a PT-based non-cooperative game. We propose a distributed computation offloading algorithm to achieve the Nash equilibrium of the game. Numerical results assess the impact of mobile device users' behavioral biases on offloading decision making.
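The behavioral deviation from expected-utility maximization described above is captured in prospect theory by an S-shaped value function: gains are evaluated with diminishing sensitivity, while losses relative to a reference point are amplified by loss aversion. A minimal sketch follows; the parameter values are the classic Tversky-Kahneman estimates, not values from the article, and the utility units are hypothetical:

```python
def pt_value(outcome, reference=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theoretic value of an outcome relative to a reference point.

    Gains are concave (alpha < 1), losses convex (beta < 1), and losses are
    weighted more heavily than equal-sized gains (loss aversion lam > 1).
    """
    x = outcome - reference
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# A user comparing offloading outcomes against local execution as reference:
# a 10-unit loss "hurts" noticeably more than a 10-unit gain "helps".
print(pt_value(10), pt_value(-10))
```

Plugging such a value function into each user's payoff (instead of expected utility) is what turns the offloading interaction into a PT-based game, and it is why equilibrium offloading decisions shift compared to the fully rational baseline.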
Selective Offloading in Mobile Edge Computing for the Green Internet of Things IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Xinchen Lyu; Hui Tian; Li Jiang; Alexey Vinel; Sabita Maharjan; Stein Gjessing; Yan Zhang
Mobile edge computing provides the radio access networks with cloud computing capabilities to fulfill the requirements of the Internet of Things services such as high reliability and low latency. Offloading services to edge servers can alleviate the storage and computing limitations and prolong the lifetimes of the IoT devices. However, offloading in MEC faces scalability problems due to the massive number of IoT devices. In this article, we present a new integration architecture of the cloud, MEC, and IoT, and propose a lightweight request and admission framework to resolve the scalability problem. Without coordination among devices, the proposed framework can be operated at the IoT devices and computing servers separately, by encapsulating latency requirements in offloading requests. Then a selective offloading scheme is designed to minimize the energy consumption of devices, where the signaling overhead can be further reduced by enabling the devices to be self-nominated or self-denied for offloading. Simulation results show that our proposed selective offloading scheme can satisfy the latency requirements of different services and reduce the energy consumption of IoT devices.
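The self-nomination/self-denial idea described above — each device independently deciding whether to request offloading, with the latency requirement encapsulated in the decision — can be sketched as a simple local rule. All parameter names and values below are illustrative assumptions, not the article's model:

```python
def should_offload(task_cycles, data_bits, deadline_s,
                   f_local=1e9,    # local CPU frequency (Hz), assumed
                   p_local=0.9,    # local computing power draw (W), assumed
                   p_tx=0.3,       # radio transmit power (W), assumed
                   rate_bps=5e6,   # uplink rate (bit/s), assumed
                   f_edge=10e9):   # edge server CPU frequency (Hz), assumed
    """Self-nomination rule: request offloading only when the edge can meet
    the task deadline AND offloading saves energy on the device."""
    t_local = task_cycles / f_local
    e_local = p_local * t_local
    t_offload = data_bits / rate_bps + task_cycles / f_edge  # upload + edge compute
    e_offload = p_tx * (data_bits / rate_bps)                # device pays only for TX
    if t_offload > deadline_s:
        return False              # self-denied: latency requirement not satisfiable
    return e_offload < e_local    # self-nominated only if it reduces device energy

# A 2-Gcycle task with 1 Mb of input: offload under a 1 s deadline,
# but self-deny under a 0.3 s deadline (upload + edge compute takes 0.4 s).
print(should_offload(2e9, 1e6, 1.0), should_offload(2e9, 1e6, 0.3))
```

Because each device evaluates this rule locally, no coordination signaling among devices is needed, which is the scalability point the framework targets.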
ThriftyEdge: Resource-Efficient Edge Computing for Intelligent IoT Applications IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Xu Chen; Qian Shi; Lei Yang; Jie Xu
In this article we propose a new paradigm of resource-efficient edge computing for the emerging intelligent IoT applications such as flying ad hoc networks for precision agriculture, e-health, and smart homes. We devise a resource-efficient edge computing scheme such that an intelligent IoT device user can well support its computationally intensive task by proper task offloading across the local device, nearby helper device, and the edge cloud in proximity. Different from existing studies for mobile computation offloading, we explore the novel perspective of resource efficiency and devise an efficient computation offloading mechanism consisting of a delay-aware task graph partition algorithm and an optimal virtual machine selection method in order to minimize an intelligent IoT device's edge resource occupancy and meanwhile satisfy its QoS requirement. Performance evaluation corroborates the effectiveness and superior performance of the proposed resource-efficient edge computing scheme.
Collaborative Mobile Edge Computation Offloading for IoT over Fiber-Wireless Networks IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Hongzhi Guo; Jiajia Liu; Huiling Qin
Mobile edge computing is envisioned to be a promising paradigm to address the conflict between computationally intensive IoT applications and resource-constrained lightweight mobile devices. However, most existing research on mobile edge computation offloading has only taken the resource allocation between the mobile devices and the MEC servers into consideration, ignoring the huge computation resources in the centralized cloud computing center. To make full use of the centralized cloud and distributed MEC resources, designing a collaborative computation offloading mechanism becomes particularly important. Note that current MEC hosted networks, which mostly adopt the networking technology integrating cellular and core networks, face new challenges of single networking mode, long latency, poor reliability, high congestion, and high energy consumption. Hybrid fiber-wireless networks integrating both low-latency fiber optic and flexible wireless technologies should be a promising solution. Toward this end, we provide in this article a generic fiber-wireless architecture with coexistence of centralized cloud and distributed MEC for IoT connectivity. The problem of cloud-MEC collaborative computation offloading is defined, and a game-theoretic collaborative computation offloading scheme is proposed as our solution. Numerical results corroborate that our proposed scheme can achieve high energy efficiency and scales well as the number of mobile devices increases.
Joint Admission Control and Resource Allocation in Edge Computing for Internet of Things IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Shichao Li; Ning Zhang; Siyu Lin; Linghe Kong; Ajay Katangur; Muhammad Khurram Khan; Minming Ni; Gang Zhu
The IoT is a novel platform for making objects more intelligent by connecting them to the Internet. However, mass connections, big data processing, and huge power consumption restrict the development of IoT. In order to address these challenges, this article proposes a novel ECIoT architecture. To further enhance system performance, radio resource and computational resource management in ECIoT are also investigated. According to the characteristics of ECIoT, we mainly focus on admission control, computational resource allocation, and power control. To improve the performance of ECIoT, cross-layer dynamic stochastic network optimization is studied to maximize the system utility, based on the Lyapunov stochastic optimization approach. Evaluation results demonstrate that the proposed resource allocation scheme can improve throughput, reduce end-to-end delay, and achieve a trade-off between average throughput and delay. Finally, future research topics on resource management in ECIoT are discussed.
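The Lyapunov stochastic optimization approach mentioned above typically takes the drift-plus-penalty form: at each slot, the controller greedily minimizes a weighted penalty (utility cost) minus the queue backlog times the service rate, which keeps queues stable while steering average utility toward the optimum. A single-queue toy sketch follows; the rate set, quadratic cost, and control weight V are illustrative assumptions, not the article's model:

```python
# Drift-plus-penalty sketch: each slot, pick service rate b minimizing
#   V * cost(b) - Q * b
# so a large backlog Q increasingly justifies costly high service rates.

def choose_service(Q, rates, cost, V=10.0):
    """Greedy per-slot decision of the drift-plus-penalty framework."""
    return min(rates, key=lambda b: V * cost(b) - Q * b)

Q = 0.0
for a in [3, 5, 2, 6, 4, 3, 5, 2]:          # per-slot arrivals (illustrative)
    b = choose_service(Q, rates=[0, 2, 4, 6], cost=lambda b: b ** 2)
    Q = max(Q + a - b, 0.0)                  # queue backlog update
    print(f"arrivals={a} served={b} backlog={Q}")
```

The weight V tunes the classic trade-off: larger V lowers average cost but lets queues (and hence delay) grow, matching the throughput-delay trade-off reported in the evaluation.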
Toward Efficient Content Delivery for Automated Driving Services: An Edge Computing Solution IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Quan Yuan; Haibo Zhou; Jinglin Li; Zhihan Liu; Fangchun Yang; Xuemin Sherman Shen
Automated driving is coming with enormous potential for safer, more convenient, and more efficient transportation systems. Besides onboard sensing, autonomous vehicles can also access various cloud services such as high definition maps and dynamic path planning through cellular networks to precisely understand the real-time driving environments. However, these automated driving services, which have large content volume, are time-varying, location-dependent, and delay-constrained. Therefore, cellular networks will face the challenge of meeting this extreme performance demand. To cope with the challenge, by leveraging the emerging mobile edge computing technique, in this article, we first propose a two-level edge computing architecture for automated driving services in order to make full use of the intelligence at the wireless edge (i.e., base stations and autonomous vehicles) for coordinated content delivery. We then investigate the research challenges of wireless edge caching and vehicular content sharing. Finally, we propose potential solutions to these challenges and evaluate them using real and synthetic traces. Simulation results demonstrate that the proposed solutions can significantly reduce the backhaul and wireless bottlenecks of cellular networks while ensuring the quality of automated driving services.
A Tensor-Based Holistic Edge Computing Optimization Framework for Internet of Things IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Huazhong Liu; Laurence T. Yang; Man Lin; Dexiang Yin; Yimu Guo
Balancing the costs of different objectives in edge computing (EC) requires comprehensive and global analysis. This article investigates the holistic EC optimization problem for IoT. First, a triple-plane EC architecture for IoT is proposed, including the edge device plane, edge server plane, and cloud plane, respectively, which is conducive to collaboratively accomplishing EC applications. Then five tensor-based representation models are constructed to represent the complex relationships and resolve the heterogeneity of different devices. Afterward, we construct a generalized and holistic EC optimization model based on the constructed tensors, covering energy consumption, execution time, system reliability, and quality of experience. Finally, a customized optimization framework is proposed in which the optimization objectives can be arbitrarily combined according to practical applications. A case study is conducted to evaluate the performance of the proposed scheme; results demonstrate that it significantly outperforms the state-of-the-art cloud-assisted mobile computing scheme and holistic mobile cloud computing scheme.
Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 He Li; Kaoru Ota; Mianxiong Dong
Deep learning is a promising approach for extracting accurate information from raw sensor data collected by IoT devices deployed in complex environments. Because of its multilayer structure, deep learning is also well suited to the edge computing environment. Therefore, in this article, we first bring deep learning for IoT into the edge computing environment. Since existing edge nodes have limited processing capability, we also design a novel offloading strategy to optimize the performance of IoT deep learning applications with edge computing. In the performance evaluation, we test the execution of multiple deep learning tasks in an edge computing environment under our strategy. The evaluation results show that our method outperforms other optimization solutions for IoT deep learning.
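The core trade-off behind such an offloading strategy, limited edge capacity versus cloud round-trips, can be sketched with a simple capacity-constrained heuristic. This is a hypothetical illustration, not the article's actual scheduler:

```python
# Hypothetical sketch of edge offloading: greedily keep deep learning
# tasks on the edge node while its capacity lasts, and send the rest to
# the cloud. Heavier tasks are placed on the edge first to cut backhaul
# traffic; the article's actual strategy is more elaborate.

def schedule_tasks(tasks, edge_capacity):
    """tasks: list of (name, load) pairs; returns (edge_tasks, cloud_tasks)."""
    edge, cloud, used = [], [], 0.0
    for name, load in sorted(tasks, key=lambda t: -t[1]):
        if used + load <= edge_capacity:
            edge.append(name)
            used += load
        else:
            cloud.append(name)
    return edge, cloud
```

A real strategy would also account for per-task latency deadlines and the cost of moving intermediate layer outputs between edge and cloud.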
Consolidate IoT Edge Computing with Lightweight Virtualization IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Roberto Morabito; Vittorio Cozzolino; Aaron Yi Ding; Nicklas Beijar; Jorg Ott
Lightweight virtualization (LV) technologies have refashioned the world of software development by introducing flexibility and new ways of managing and distributing software. Edge computing complements today's powerful centralized data centers with a large number of distributed nodes that provide virtualization close to the data source and end users. This emerging paradigm offers ubiquitous processing capabilities on a wide range of heterogeneous hardware characterized by different processing power and energy availability. The scope of this article is to present an in-depth analysis on the requirements of edge computing from the perspective of three selected use cases that are particularly interesting for harnessing the power of the Internet of Things. We discuss and compare the applicability of two LV technologies, containers and unikernels, as platforms for enabling the scalability, security, and manageability required by such pervasive applications that soon may be part of our everyday lives. To inspire further research, we identify open problems and highlight future directions to serve as a road map for both industry and academia.
Hyperconnected Network: A Decentralized Trusted Computing and Networking Paradigm IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Hao Yin; Dongchao Guo; Kai Wang; Zexun Jiang; Yongqiang Lyu; Ju Xing
With the development of the Internet of Things, a complex cyber-physical system (CPS) has emerged and is becoming a promising information infrastructure. In this CPS, the loss of control over user data has become a very serious challenge, making it difficult to protect privacy, boost innovation, and guarantee data sovereignty. In this article, we propose HyperNet, a novel decentralized trusted computing and networking paradigm, to meet this challenge. HyperNet is composed of the intelligent PDC, which can be considered the digital clone of a human individual; decentralized trusted connections between any entities, based on blockchain and smart contracts; and the UDI platform, which enables secure digital object management and an identifier-driven routing mechanism. HyperNet is capable of protecting data sovereignty and has the potential to transform the current communication-based information system into a future data-oriented information society.
MECPASS: Distributed Denial of Service Defense Architecture for Mobile Networks IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Van Linh Nguyen; Po-Ching Lin; Ren-Hung Hwang
Distributed denial of service is one of the most critical threats to the availability of Internet services. A botnet with only 0.01 percent of the 50 billion connected devices in the Internet of Things is sufficient to launch a massive DDoS flooding attack that could exhaust resources and interrupt any target. However, the mobility of user equipment and the distinctive characteristics of traffic behavior in mobile networks also limit the detection capabilities of traditional anti-DDoS techniques. In this article, we present a novel collaborative DDoS defense architecture called MECPASS to mitigate the attack traffic from mobile devices. Our design involves two filtering hierarchies. First, filters at edge computing servers (i.e., local nodes) seek to prevent spoofing attacks and anomalous traffic near sources as much as possible. Second, global analyzers located at cloud servers (i.e., central nodes) classify the traffic of the entire monitored network and unveil suspicious behaviors by periodically aggregating data from the local nodes. We have explored the effectiveness of our system on various types of application-layer DDoS attacks in the context of web servers. The simulation results show that MECPASS can effectively defend and clean an Internet service provider core network from the junk traffic of compromised UEs, while maintaining the false-positive rate of its detection engine at less than 1 percent.
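The two-level structure described above, local filtering near sources plus global aggregation at central nodes, can be sketched as follows. Every threshold, prefix, and function name here is a hypothetical assumption for illustration, not MECPASS's actual design:

```python
# Hypothetical sketch of two-level DDoS filtering: local nodes drop
# spoofed or over-rate traffic near the source, while a central analyzer
# aggregates per-source counts from all local nodes and flags sources
# whose total volume exceeds a global threshold.
from collections import Counter

LOCAL_RATE_LIMIT = 100    # max requests per source per window at one edge node
GLOBAL_RATE_LIMIT = 150   # max aggregate requests per source per window

def local_filter(requests, known_prefixes):
    """Keep requests whose source matches a known address prefix
    (anti-spoofing) and stays under the local per-source limit."""
    counts = Counter()
    passed = []
    for src in requests:
        if not any(src.startswith(p) for p in known_prefixes):
            continue                    # spoofed address: drop at the edge
        counts[src] += 1
        if counts[src] <= LOCAL_RATE_LIMIT:
            passed.append(src)
    return passed

def global_suspects(per_node_counts):
    """Aggregate per-source counters reported by local nodes and return
    the sources exceeding the global limit."""
    total = Counter()
    for counts in per_node_counts:
        total.update(counts)
    return {src for src, n in total.items() if n > GLOBAL_RATE_LIMIT}
```

The point of the second tier is that a source can stay under every local limit yet still be caught once its traffic is summed across edge nodes.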
Block-Stream as a Service: A More Secure, Nimble, and Dynamically Balanced Cloud Service Model for Ambient Computing IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Jackson He; Yaoxue Zhang; Ju Lu; Ming Wu; Fujin Huang
Cloud computing has become mainstream in the last few years. Diverse services based on IaaS, PaaS, SaaS, and app store models have been widely available to millions of users worldwide. At the same time, transparent computing (TC) has also gained strong interest in China. With the rapid development of IoT, increasing numbers of IoT devices will be deployed to provide information services for end users. As we head into the era of ambient computing, where end users are immersed in seamless computing devices and services, the boundary between cloud and devices is getting blurry, and more devices and services need to be securely managed. The existing service models defined for user-cloud interaction should be extended to serve more diverse and lightweight devices with nimble and fluid services. Given this evolution trend, it is paramount for both cloud service providers and IoT service operators to manage the security and integrity of these services. In this article, we propose a new cloud service model, named block-stream as a service (BaaS), based on our previous study of TC. BaaS is nimbler than SaaS and offers better security management than an app store. We expect this new cloud service model to support the vision of ambient computing and securely manage diverse applications on lightweight IoT devices.
COAST: A Cooperative Storage Framework for Mobile Transparent Computing Using Device-to-Device Data Sharing IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Jiahui Jin; Junzhou Luo; Yunhao Li; Runqun Xiong
TC is a promising network computing paradigm that offers an efficient way to make lightweight terminals more powerful, convenient, and secure. TC's execution model separates data storage and application execution, letting terminals load applications from TC servers on demand via the Internet. With this approach, the network's performance significantly affects the TC applications' performance. To enhance TC applications' performance, existing research typically deploys many cache servers on the Internet. However, such caching techniques are not ideal in a mobile environment, where the wireless networks that mobile terminals use for Internet access are expensive and have limited bandwidth. To address this problem, we propose COAST, a cooperative storage framework for mobile transparent computing (MTC). Based on a device-to-device data-sharing technique, COAST enables a mobile terminal to fetch applications from nearby terminals without accessing the Internet. In this article, we introduce COAST's design, explore the opportunities and challenges of cooperative storage in MTC environments, and identify future research directions.
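The peer-first fetch pattern described above can be sketched in a few lines. This is a hypothetical illustration of the idea, not COAST's actual protocol; the peer representation and function names are assumptions:

```python
# Hypothetical sketch (not COAST's protocol): a mobile terminal first asks
# nearby peers for an application block over device-to-device links and
# falls back to the Internet only on a miss, saving wireless bandwidth.

def fetch_block(block_id, peers, internet_fetch):
    """peers: list of dicts mapping block IDs to data held by nearby
    terminals; returns (data, source)."""
    for peer in peers:
        if block_id in peer:
            return peer[block_id], "d2d"
    return internet_fetch(block_id), "internet"
```

A real framework would also handle peer discovery, block integrity verification, and incentives for terminals to share their storage.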
A Multi-Level Cache Framework for Remote Resource Access in Transparent Computing IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Di Zhang; Yuezhi Zhou; Yaoxue Zhang
With the increasing demand for high-performance remote resource access in transparent computing, a multi-level cache framework is needed to alleviate network latency. Existing cache frameworks in CPU and web systems cannot be applied directly, because the remote resource access architecture needs to be extended to support multi-level caching, and the ways resources are accessed in transparent computing require specific designs. In this article, we propose a multi-level cache framework for remote resource access in transparent computing. Exploiting the low-latency nature of edge computing, we extend the remote resource access architecture to a multi-level cache architecture by placing caches on edge devices with low network latency. We then design a hybrid multi-level cache hierarchy and corresponding cache policies. Through a case study, we show the effectiveness of our design. Finally, we discuss several future research issues for deploying the proposed multi-level cache framework.
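The hierarchy above can be illustrated with a minimal two-level lookup: try the edge cache first, fall back to the remote server, and populate the edge cache on the way back. This is a generic sketch with an assumed LRU policy, not the article's hybrid hierarchy:

```python
# Illustrative sketch of a multi-level lookup: edge cache first (low
# latency), remote server on a miss, filling the edge cache on return.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)         # mark as recently used
        return self.store[key]

    def put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

def fetch(resource, edge_cache, remote_fetch, stats):
    value = edge_cache.get(resource)
    if value is not None:
        stats["edge_hits"] += 1
        return value
    stats["remote_fetches"] += 1
    value = remote_fetch(resource)
    edge_cache.put(resource, value)         # fill edge cache for next access
    return value
```

The transparent-computing-specific part the article addresses, access patterns for loading operating systems and applications in blocks, would drive the actual policy choices.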
Transparent Learning: An Incremental Machine Learning Framework Based on Transparent Computing IEEE Netw. (IF 7.23) Pub Date : 2018-01-26 Kehua Guo; Zhonghe Liang; Ronghua Shi; Chao Hu; Zuoyong Li
In the Internet of Things environment, client devices are evolving toward networked, intelligent operation. How to advance clients from merely collecting and displaying data to possessing intelligence has become a critical issue. In recent years, machine learning has become a representative technology for client intellectualization and is attracting growing interest. In machine learning, massive computation, including data preprocessing and training, requires substantial computing resources; however, lightweight clients usually lack strong computing capability. To solve this problem, we exploit the advantages of transparent computing (TC) for the client intellectualization framework and propose an incremental machine learning framework named transparent learning (TL), in which training tasks are moved from lightweight clients to servers and edge devices. After training, models are transmitted to clients and updated through incremental training. We also design a cache strategy that divides the training set to optimize performance. We choose deep learning as the performance evaluation case and conduct several TensorFlow-based experiments to demonstrate the efficiency of the framework.
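The division of labor described above, bulk training on the server and incremental refinement on the client, can be sketched with a toy model. Plain-Python gradient descent on a 1-D linear model keeps it self-contained; this is only an illustration of the pattern, not the TL framework itself:

```python
# Illustrative sketch: the server trains a base model on the full data,
# and a lightweight client later refines it incrementally on a small
# batch of new samples instead of retraining from scratch.

def train(xs, ys, w=0.0, lr=0.01, epochs=200):
    """Fit y ~ w*x by gradient descent; stands in for server-side training."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def incremental_update(w, xs, ys, lr=0.01, epochs=50):
    """Client-side refinement: continue training from the received model
    on a small batch of newly collected samples."""
    return train(xs, ys, w=w, lr=lr, epochs=epochs)
```

In the framework the article describes, the heavy `train` step runs on servers and edge devices, while clients only perform the cheap `incremental_update` step on data they collect locally.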