• arXiv.cs.NI Pub Date : 2020-01-18
Sohini Roy; Harish Chandrasekaran; Anamitra Pal; Arunabha Sen

The reliable and resilient operation of the smart grid necessitates a clear understanding of the intra- and inter-dependencies of its power and communication systems. This understanding can only be achieved by accurately depicting the interactions between the different components of these two systems. This paper presents a model, called the modified implicative interdependency model (MIIM), for capturing these interactions. Data obtained from a power utility in the U.S. Southwest is used to ensure the validity of the model. The performance of the model for a specific power system application, namely state estimation, is demonstrated using the IEEE 118-bus system. The results indicate that the proposed model is more accurate than its predecessor, the implicative interdependency model (IIM) [1], in predicting the system state in case of failures in the power and/or communication systems.
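To make the interdependency idea concrete, here is a minimal sketch of IIM-style cascade propagation, in which an entity fails once every min-term of its implicative dependency contains a failed entity. The entities and dependencies below are illustrative inventions, not the paper's MIIM model or the utility's data.

```python
# Sketch of IIM-style cascade propagation (illustrative, not the paper's
# MIIM): an entity fails once every min-term of its implicative dependency
# contains at least one failed entity.
def propagate_failures(dependencies, initial_failures):
    """dependencies: entity -> list of min-terms (sets of entities); the
    entity survives only if at least one min-term is fully operational."""
    failed = set(initial_failures)
    changed = True
    while changed:
        changed = False
        for entity, minterms in dependencies.items():
            if entity in failed:
                continue
            # entity fails when no min-term is fully alive
            if minterms and all(any(e in failed for e in term)
                                for term in minterms):
                failed.add(entity)
                changed = True
    return failed

# Toy example: router x depends on power bus a; b survives via a OR c.
deps = {"x": [{"a"}], "b": [{"a"}, {"c"}]}
print(sorted(propagate_failures(deps, {"a"})))   # → ['a', 'x']
```

The fixed-point loop captures the cascading nature of interdependent failures: x fails because its only min-term contains the failed bus a, while b survives through its alternative dependency on c.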

Updated: 2020-01-22
• arXiv.cs.NI Pub Date : 2020-01-19
Mounir Bensalem; Jasenka Dizdarević; Admela Jukan

With edge computing becoming an increasingly adopted concept in system architectures, its utilization is expected to be heightened further when combined with deep learning (DL) techniques. The idea of integrating demanding processing algorithms, such as Deep Neural Networks (DNNs), into Internet of Things (IoT) and edge devices has in large measure benefited from the development of edge computing hardware, as well as from adapting the algorithms for use in resource-constrained IoT devices. Surprisingly, there are no models yet to optimally place and use machine learning in edge computing. In this paper, we propose the first model for optimal placement and inference of Deep Neural Networks (DNNs) in edge computing. We present a mathematical formulation of the DNN Model Variant Selection and Placement (MVSP) problem, considering the inference latency of different model variants, the communication latency between nodes, and the utilization cost of edge computing nodes. We evaluate our model numerically and show that increasing model co-location decreases the millisecond-scale average latency per request by 33% at low load and by 21% at high load.
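As a toy illustration of the variant-selection trade-off (not the paper's MVSP formulation, which is a global optimization over all requests), one can brute-force a single request over (variant, node) pairs; all names and numbers below are assumptions:

```python
# Toy brute-force for placing a single inference request, in the spirit of
# MVSP: trade off inference latency, communication latency, and node cost.
# All names and numbers are illustrative assumptions.
def place_request(variants, nodes, comm_latency, alpha=1.0):
    """variants: name -> inference latency (ms) on a reference node;
    nodes: name -> (speed factor, utilization cost); comm_latency: node -> ms."""
    best = None
    for v, infer_ms in variants.items():
        for n, (speed, cost) in nodes.items():
            total = infer_ms / speed + comm_latency[n] + alpha * cost
            if best is None or total < best[0]:
                best = (total, v, n)
    return best

variants = {"dnn-small": 5.0, "dnn-large": 20.0}
nodes = {"edge-1": (1.0, 2.0), "edge-2": (4.0, 6.0)}
best = place_request(variants, nodes, {"edge-1": 1.0, "edge-2": 3.0})
print(best)   # → (8.0, 'dnn-small', 'edge-1')
```

The `alpha` knob mirrors the tension between latency and utilization cost that the MVSP objective balances across the whole edge network.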

Updated: 2020-01-22
• arXiv.cs.NI Pub Date : 2020-01-20
Anubhab Banerjee; Stephen S. Mwanje; Georg Carle

Cognitive Autonomous Networks (CAN) are promoted to advance Self-Organizing Networks (SON), replacing rule-based SON Functions (SFs) with Cognitive Functions (CFs), which learn optimal behavior by interacting with the network. As in SON, CFs encounter conflicts due to overlaps in parameters or objectives. However, owing to the non-deterministic behavior of CFs, these conflicts cannot be resolved using rule-based methods, and new solutions are required. This paper investigates CF deployments with and without a coordination mechanism, and shows both heuristically and mathematically that a coordination mechanism is required. Using a two-CF Multi-Agent-System model with the possible types of conflicts, we show that the challenge is a typical bargaining problem, for which the optimal response is the Nash Bargaining Solution (NBS). We use the NBS to propose a coordination mechanism design that is capable of resolving the conflicts, and show via simulations that implementing the proposed solution is feasible in real-life scenarios.
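The Nash Bargaining Solution itself is easy to sketch: over the feasible set, it maximizes the product of the players' utility gains relative to the disagreement point. The quadratic utility shapes below are illustrative assumptions, not the paper's CF models.

```python
# Minimal Nash Bargaining Solution for two functions sharing one network
# parameter p (utility shapes are illustrative, not the paper's CF models):
# maximize (u1 - d1) * (u2 - d2) over feasible candidate values of p.
def nash_bargaining(u1, u2, d1, d2, candidates):
    best_p, best_prod = None, float("-inf")
    for p in candidates:
        g1, g2 = u1(p) - d1, u2(p) - d2
        # only points where both players gain over disagreement are feasible
        if g1 >= 0 and g2 >= 0 and g1 * g2 > best_prod:
            best_p, best_prod = p, g1 * g2
    return best_p

u1 = lambda p: 10 - (p - 2) ** 2   # CF 1's utility peaks at p = 2
u2 = lambda p: 10 - (p - 6) ** 2   # CF 2's utility peaks at p = 6
p_star = nash_bargaining(u1, u2, 0.0, 0.0, [i / 10 for i in range(81)])
print(p_star)   # → 4.0
```

The compromise lands midway between the two CFs' preferred parameter values, which is the fairness property that motivates using the NBS for conflict resolution.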

Updated: 2020-01-22
• arXiv.cs.NI Pub Date : 2020-01-20
Tobias Meuser; Oluwasegun Taiwo Ojo; Daniel Bischoff; Antonio Fernández Anta; Ioannis Stavrakakis; Ralf Steinmetz

To support location-based services, vehicles must share their location with a server to receive relevant data, compromising their (location) privacy. To alleviate this privacy compromise, the vehicle's location can be obfuscated by adding artificial noise. Under limited available bandwidth, and since the area including the vehicle's location increases with the noise, the server will provide fewer data relevant to the vehicle's true location, reducing the effectiveness of a location-based service. To alleviate this problem, we propose that data relevant to a vehicle is also provided through direct, ad hoc communication by neighboring vehicles. Through such Vehicle-to-Vehicle (V2V) cooperation, the impact of location obfuscation is mitigated. Since vehicles subscribe to data of (location-dependent) impact values, neighboring vehicles will subscribe to largely overlapping sets of data, reducing the benefit of V2V cooperation. To increase such benefit, we develop and study a non-cooperative game determining the data that a vehicle should subscribe to, aiming at maximizing its utilization while considering the participating (neighboring) vehicles. Our analysis and results show that the proposed V2V cooperation and derived strategy lead to significant performance increase compared to non-cooperative approaches and largely alleviates the impact of privacy on location-based services.

Updated: 2020-01-22
• arXiv.cs.NI Pub Date : 2020-01-20
Ajmal Khan; Adnan Munir; Zeeshan Kaleem; Farman Ullah; Muhammad Bilal; Lewis Nkenyereye; Shahen Shah; Long D. Nguyen; S. M. Riazul Islam; Kyung-Sup Kwak

In post-disaster scenarios, such as after floods, earthquakes, and in war zones, the cellular communication infrastructure may be destroyed or seriously disrupted. In such emergency scenarios, it becomes very important for first-aid responders to communicate with other rescue teams in order to provide feedback to both the central office and the disaster survivors. To address this issue, rapidly deployable systems are required to re-establish connectivity and assist users and first responders in the region of the incident. In this work, we describe the design, implementation, and evaluation of a rapidly deployable system for first-response applications in post-disaster situations, named RDSP. The proposed system helps early rescue responders and victims by sharing their location information with remotely located servers via a novel routing scheme, which consists of the Dynamic ID Assignment (DIA) algorithm and the Minimum Maximum Neighbor (MMN) algorithm. The DIA algorithm is used by relay devices to dynamically select their IDs on the basis of all the available network IDs, whereas the MMN algorithm is used by the client and relay devices to dynamically select their next neighbor relays for the transmission of messages. The RDSP comprises three devices: the client device sends the victim's location information to the server; the relay device relays information between the client and server devices; and the server device receives messages from the client device to alert the rescue team. We deployed and evaluated our system in an outdoor university campus environment. The experimental results show that the RDSP system reduces the message delivery delay and improves the message delivery ratio with lower communication overhead.

Updated: 2020-01-22
• arXiv.cs.NI Pub Date : 2020-01-20
Andrew Mackey; Petros Spachos; Konstantinos N. Plataniotis

Urban centers and dense populations are expanding; hence, there is a growing demand for novel applications to aid in planning and optimization. In this work, a smart parking system that operates both indoors and outdoors is introduced. The system is based on Bluetooth Low Energy (BLE) beacons and uses particle filtering to improve its accuracy. Through simple BLE connectivity with smartphones, an intuitive parking system is designed and deployed. The proposed system pairs each spot with a unique BLE beacon, providing users with guidance to free parking spaces and a secure and automated payment scheme based on real-time usage of the parking space. Three sets of experiments were conducted to examine different aspects of the system. A particle filter is implemented in order to increase the system performance and improve confidence in the results. Through extensive experimentation in both indoor and outdoor parking spaces, the system was able to correctly predict which spot the user had parked in, as well as estimate the distance of the user from the beacon.
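A minimal version of the distance-estimation step can be sketched with a 1-D particle filter over a log-distance path-loss model; the `tx_power` and path-loss exponent values are common illustrative defaults, not figures from the paper.

```python
import math, random

# Minimal 1-D particle filter smoothing BLE distance estimates from RSSI
# (log-distance path-loss model; tx_power and exponent n are illustrative
# defaults, not values from the paper).
def rssi_to_distance(rssi, tx_power=-59.0, n=2.0):
    return 10 ** ((tx_power - rssi) / (10 * n))

def particle_filter(rssi_stream, n_particles=500, noise=0.5):
    random.seed(1)
    particles = [random.uniform(0.1, 10.0) for _ in range(n_particles)]
    for rssi in rssi_stream:
        z = rssi_to_distance(rssi)
        # weight particles by closeness to the measured distance
        weights = [math.exp(-((p - z) ** 2) / (2 * noise ** 2))
                   for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # resample, then jitter to keep particle diversity
        particles = random.choices(particles, weights=weights, k=n_particles)
        particles = [p + random.gauss(0, 0.1) for p in particles]
    return sum(particles) / n_particles

est = particle_filter([-65, -66, -64, -65])   # beacon roughly 2 m away
print(round(est, 1))
```

The resample-and-jitter loop is what smooths the notoriously noisy BLE RSSI readings into a stable distance estimate.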

Updated: 2020-01-22
• arXiv.cs.NI Pub Date : 2020-01-21
Lalhruaizela Chhangte; Emanuele Viterbo; D Manjunath; Nikhil Karamchandani

Video content delivery at the wireless edge continues to be challenged by insufficient bandwidth and highly dynamic user behavior, which affect both effective throughput and latency. Caching at the network edge and coded transmissions have been found to improve user performance of video content delivery. The caches at the wireless edge stations (BSs, APs) and at the users' end devices can be populated by pre-caching content or by using online caching policies. In this paper, we propose a system where content is cached at the users of a WiFi network via online caching policies, and coded delivery is employed by the WiFi AP to deliver the requested content to the user population. The content of the cache at each user serves as side information for index coding. We also propose the LFU-Index cache replacement policy at the user, which demonstrably improves index coding opportunities at the WiFi AP for the proposed system. Through an extensive simulation study, we determine the gains achieved by caching and index coding. Next, we analyze the tradeoffs between them in terms of data transmitted, latency, and throughput for different content request behaviors from the users. We also show that the proposed cache replacement policy performs better than traditional cache replacement policies like LRU and LFU.
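For reference, the plain LFU baseline that LFU-Index is compared against can be sketched in a few lines; the index-coding-aware eviction rule itself is specific to the paper and not reproduced here.

```python
from collections import defaultdict

# Plain LFU cache: the baseline the paper compares its LFU-Index policy
# against (LFU-Index's index-coding-aware eviction is not reproduced here).
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}
        self.freq = defaultdict(int)

    def request(self, item):
        self.freq[item] += 1
        if item in self.store:
            return True                      # cache hit
        if len(self.store) >= self.capacity:
            # evict the least-frequently-used cached item
            victim = min(self.store, key=lambda k: self.freq[k])
            del self.store[victim]
        self.store[item] = True
        return False                         # miss: fetched and cached

cache = LFUCache(2)
hits = [cache.request(x) for x in ["a", "b", "a", "c", "a", "b"]]
print(hits)   # → [False, False, True, False, True, False]
```

In the paper's setting, what a user's cache holds doubles as side information for the AP's index coding, which is why the eviction rule matters beyond the hit ratio.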

Updated: 2020-01-22
• arXiv.cs.NI Pub Date : 2019-05-11
Ireneusz Szcześniak; Ireneusz Olszewski; Bożena Woźna-Szcześniak

We present an efficient and exact algorithm for dynamic routing with dedicated path protection. We present the algorithm in the setting of optical networks, but it should also be applicable to other networks where services have to be protected and where the network resources are finite and discrete, e.g., wireless radio or optical networks, or networks capable of advance resource reservation. The algorithm is efficient because it can solve large problems, and it is exact because its results are optimal. To the best of our knowledge, we are the first to solve this 30-year-old problem, previously considered intractable, both efficiently and exactly. Network operations, management, and control require efficient and exact algorithms, especially now, when networks are becoming more performant, reliable, softwarized, dense, and agile, and when the return on investment is crucial. The proposed algorithm uses our generic Dijkstra algorithm on a search graph generated "on the fly" from the input graph. We corroborated the optimality of the results of the proposed algorithm with brute-force enumeration. We present simulation results for dedicated-path protection with signal modulation constraints in elastic optical networks of three sizes (25, 50, and 100 nodes) and three numbers of spectrum units (160, 320, and 640). We also compare the bandwidth blocking probability with the commonly-used edge-exclusion algorithm. We performed 48,600 simulation runs with about 41 million searches.
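The building block the paper generalizes is the classic Dijkstra algorithm; the generic version searches an on-the-fly product graph with spectrum labels, which is beyond this sketch.

```python
import heapq

# Classic Dijkstra, the building block the paper generalizes; the paper's
# generic version runs on a search graph generated on the fly with spectrum
# labels, which this sketch does not attempt.
def dijkstra(graph, src):
    """graph: node -> list of (neighbor, weight); returns shortest-distance map."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                       # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))   # → {'a': 0, 'b': 1, 'c': 3}
```

For dedicated path protection, the search must find a pair of disjoint paths jointly, which is why a direct run of plain Dijkstra on the input graph is not sufficient.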

Updated: 2020-01-22
• arXiv.cs.NI Pub Date : 2019-06-11
Jonas Fritzsch; Justus Bogner; Stefan Wagner; Alfred Zimmermann

To remain competitive in a fast changing environment, many companies started to migrate their legacy applications towards a Microservices architecture. Such extensive migration processes require careful planning and consideration of implications and challenges likewise. In this regard, hands-on experiences from industry practice are still rare. To fill this gap in scientific literature, we contribute a qualitative study on intentions, strategies, and challenges in the context of migrations to Microservices. We investigated the migration process of 14 systems across different domains and sizes by conducting 16 in-depth interviews with software professionals from 10 companies. We present a separate description of each case and summarize the most important findings. As primary migration drivers, maintainability and scalability were identified. Due to the high complexity of their legacy systems, most companies preferred a rewrite using current technologies over splitting up existing code bases. This was often caused by the absence of a suitable decomposition approach. As such, finding the right service cut was a major technical challenge, next to building the necessary expertise with new technologies. Organizational challenges were especially related to large, traditional companies that simultaneously established agile processes. Initiating a mindset change and ensuring smooth collaboration between teams were crucial for them. Future research on the evolution of software systems will in particular profit from the individual cases presented.

Updated: 2020-01-22
• arXiv.cs.NI Pub Date : 2019-07-11
Rahif Kassab; Osvaldo Simeone; Petar Popovski

A multi-cell Fog-Radio Access Network (F-RAN) architecture is considered in which Internet of Things (IoT) devices periodically make noisy observations of a Quantity of Interest (QoI) and transmit using grant-free access in the uplink. The devices in each cell are connected to an Edge Node (EN), which may also have a finite-capacity fronthaul link to a central processor. In contrast to conventional information-agnostic protocols, the devices transmit using a Type-Based Multiple Access (TBMA) protocol that is tailored to enable the estimate of the field of correlated QoIs in each cell based on the measurements received from IoT devices. In this paper, this form of information-centric radio access is studied for the first time in a multi-cell F-RAN model with edge or cloud detection. Edge and cloud detection are designed and compared for a multi-cell system. Optimal model-based detectors are introduced and the resulting asymptotic behavior of the probability of error at cloud and edge is derived. Then, for the scenario in which a statistical model is not available, data-driven edge and cloud detectors are discussed and evaluated in numerical results.
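The TBMA principle can be sketched as follows: each device quantizes its noisy observation of the QoI to one of L levels and transmits on that level's waveform, so the receiver effectively observes only the histogram (type) of transmitted levels and runs maximum-likelihood detection over it. The quantizer, noise model, and parameters below are illustrative assumptions, not the paper's F-RAN model.

```python
import math, random
from collections import Counter

# Toy TBMA detection: the receiver sees only the type (histogram of levels)
# and picks the hypothesis under which that type is most likely. Quantizer
# thresholds, unit-variance noise, and device count are illustrative.
def level_probs(mean, edges):
    """Level probabilities for a unit-variance Gaussian centered at `mean`."""
    cdf = lambda x: 0.5 * (1 + math.erf((x - mean) / math.sqrt(2)))
    pts = [float("-inf")] + edges + [float("inf")]
    return [cdf(b) - cdf(a) for a, b in zip(pts, pts[1:])]

def tbma_detect(qoi_true, hypotheses, edges, n_devices=200, seed=3):
    random.seed(seed)
    quantize = lambda x: sum(x > e for e in edges)
    # receiver observes only the type: counts of transmitted levels
    counts = Counter(quantize(qoi_true + random.gauss(0, 1))
                     for _ in range(n_devices))
    def loglik(h):
        probs = level_probs(h, edges)
        return sum(counts[l] * math.log(probs[l] + 1e-12)
                   for l in range(len(probs)))
    return max(hypotheses, key=loglik)

edges = [-1.0, 0.0, 1.0, 2.0]   # thresholds giving 5 quantization levels
print(tbma_detect(2.0, [0.0, 2.0], edges))   # → 2.0
```

The information-centric point is that the type is a sufficient statistic for this detection task, so the devices need not be individually identified.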

Updated: 2020-01-22
• arXiv.cs.NI Pub Date : 2019-07-17
Abdulaziz Alashaikh; Eisa Alanazi; Ala Al-Fuqaha

With the rapid development of virtualization techniques, cloud data centers allow for cost effective, flexible, and customizable deployments of applications on virtualized infrastructure. Virtual machine (VM) placement aims to assign each virtual machine to a server in the cloud environment. VM Placement is of paramount importance to the design of cloud data centers. Typically, VM placement involves complex relations and multiple design factors as well as local policies that govern the assignment decisions. It also involves different constituents including cloud administrators and customers that might have disparate preferences while opting for a placement solution. Thus, it is often valuable to not only return an optimized solution to the VM placement problem but also a solution that reflects the given preferences of the constituents. In this paper, we provide a detailed review on the role of preferences in the recent literature on VM placement. We further discuss key challenges and identify possible research opportunities to better incorporate preferences within the context of VM placement.

Updated: 2020-01-22
• arXiv.cs.NI Pub Date : 2020-01-16
Mohamed Abushwereb; Muhannad Mustafa; Mouhammd Al-kasassbeh; Malik Qasaimeh

One of the most common internet attacks causing significant economic losses in recent years is the Denial of Service (DoS) flooding attack. As a countermeasure, intrusion detection systems equipped with machine learning classification algorithms were developed to detect anomalies in network traffic. These classification algorithms have had varying degrees of success, depending on the type of DoS attack used. In this paper, we use an SNMP-MIB dataset from a real testbed to explore the most prominent DoS attacks and the chances of their detection based on the classification algorithm used. The results show that most DoS attacks used nowadays can be detected with high accuracy using machine learning classification techniques based on features provided by SNMP-MIB. We also conclude that, of all the attacks we studied, the Slowloris attack had the highest detection rate, while TCP-SYN had the lowest detection rate across all classification techniques, despite being one of the most used DoS attacks.

Updated: 2020-01-17
• arXiv.cs.NI Pub Date : 2020-01-16
Guangxu Zhu; Yuqing Du; Deniz Gunduz; Kaibin Huang

Federated edge learning (FEEL) is a popular framework for model training at an edge server using data distributed at edge devices (e.g., smart-phones and sensors) without compromising their privacy. In the FEEL framework, edge devices periodically transmit high-dimensional stochastic gradients to the edge server, where these gradients are aggregated and used to update a global model. When the edge devices share the same communication medium, the multiple access channel from the devices to the edge server induces a communication bottleneck. To overcome this bottleneck, an efficient broadband analog transmission scheme has been recently proposed, featuring the aggregation of analog modulated gradients (or local models) via the waveform-superposition property of the wireless medium. However, the assumed linear analog modulation makes it difficult to deploy this technique in modern wireless systems that exclusively use digital modulation. To address this issue, we propose in this work a novel digital version of broadband over-the-air aggregation, called one-bit broadband digital aggregation (OBDA). The new scheme features one-bit gradient quantization followed by digital modulation at the edge devices and a majority-voting based decoding at the edge server. We develop a comprehensive analysis framework for quantifying the effects of wireless channel hostilities (channel noise, fading, and channel estimation errors) on the convergence rate. The analysis shows that the hostilities slow down the convergence of the learning process by introducing a scaling factor and a bias term into the gradient norm. However, we show that all the negative effects vanish as the number of participating devices grows, but at a different rate for each type of channel hostility.
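Over an ideal noiseless channel, the digital aggregation step reduces to sign quantization with a coordinate-wise majority vote; this simplified sketch omits the over-the-air superposition and the channel hostilities the paper analyzes.

```python
# One-bit aggregation in the spirit of OBDA, over an idealized noiseless
# channel: devices send sign(gradient), the server takes a coordinate-wise
# majority vote. The paper's over-the-air modulation and channel effects are
# deliberately omitted.
def one_bit_aggregate(gradients):
    """gradients: list of per-device gradient vectors of equal length."""
    dim = len(gradients[0])
    agg = []
    for i in range(dim):
        votes = sum(1 if g[i] >= 0 else -1 for g in gradients)
        agg.append(1 if votes >= 0 else -1)   # majority vote per coordinate
    return agg

grads = [[0.3, -1.2, 0.5],
         [0.1, -0.4, -0.2],
         [-0.7, -0.9, 0.8]]
print(one_bit_aggregate(grads))   # → [1, -1, 1]
```

The server then updates the global model with the voted sign vector scaled by a learning rate, as in signSGD with majority voting.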

Updated: 2020-01-17
• arXiv.cs.NI Pub Date : 2020-01-16
Liesbet Van der Perre; Erik G. Larsson; Fredrik Tufvesson; Lieven De Strycker; Emil Björnson; Ove Edfors

We present a new type of wireless access infrastructure consisting of a fabric of dispersed electronic circuits and antennas that collectively function as a massive, distributed antenna array. We have chosen to name this new wireless infrastructure 'RadioWeaves' and anticipate that it can be integrated into indoor and outdoor walls, furniture, and other objects, rendering it a natural part of the environment. Technologically, RadioWeaves will deploy distributed arrays to create both favorable propagation and antenna array interaction. The technology leverages the ideas of large-scale intelligent surfaces and cell-free wireless access. By offering connectivity and computing close to the service, new grades of energy efficiency, reliability, and low latency can be reached. The new concept can moreover be scaled up easily to offer a very high capacity in specific areas that demand it. In this paper we anticipate how two different demanding use cases can be served well by a dedicated RadioWeaves deployment: a crowd scenario and a highly reflective factory environment. A practical approach towards a RadioWeaves prototype, integrating dispersed electronics invisibly in a room environment, is introduced. We outline diverse R&D challenges that need to be addressed to realize the great potential of the RadioWeaves technology.

Updated: 2020-01-17
• arXiv.cs.NI Pub Date : 2020-01-16
Mohammed Salman; Bin Wang

In this paper, we present a new traffic engineering (TE) software framework to analyze, configure, and optimize (with the aid of a linear programming solver) a network for service provisioning. The developed software tool is based on our new data-driven traffic engineering approach that analyzes a large volume of network configuration data generated from the user input. By analyzing the data, one can then make efficient decisions when later designing a traffic engineering solution. We focus on three well-known traffic engineering objective functions: minimum cost routing (MCR), load balancing (LB), and average delay (AD). With this new tool, one can answer numerous traffic engineering questions. For example, what are the differences among the three objective functions? What is the impact of an objective function on link utilization? How many candidate paths are enough to achieve optimality or near-optimality with respect to a specific objective? This new software tool allows us to conveniently perform various experiments and visualize the results for performance analysis. As case studies, this paper presents examples that answer the questions for two traffic engineering problems: (1) how many paths are required to obtain a solution that is within a few percent of the optimal solution, and is that number fixed for any network size? (2) how does the choice of single-path/multi-path routing affect the load in the network? For the first problem, it turns out that the number of paths needed to achieve optimality increases as the number of links in the network increases.

Updated: 2020-01-17
• arXiv.cs.NI Pub Date : 2018-11-20
Kostantinos Papadamou; Savvas Zannettou; Bogdan Chifor; Sorin Teican; George Gugulea; Annamaria Recupero; Alberto Caponi; Claudio Pisa; Giuseppe Bianchi; Steven Gevers; Christos Xenakis; Michael Sirivianos

Current authentication methods on the Web have serious weaknesses. First, services heavily rely on the traditional password paradigm, which diminishes the end-users' security and usability. Second, the lack of attribute-based authentication does not allow anonymity-preserving access to services. Third, users have multiple online accounts that often reflect distinct identity aspects. This makes proving combinations of identity attributes hard for users. In this paper, we address these weaknesses by proposing a privacy-preserving architecture for device-centric and attribute-based authentication based on: 1) the seamless integration between usable/strong device-centric authentication methods and federated login solutions; 2) the separation of the concerns for Authorization, Authentication, Behavioral Authentication and Identification to facilitate incremental deployability, wide adoption and compliance with NIST assurance levels; and 3) a novel centralized component that allows end-users to perform identity profile and consent management, to prove combinations of fragmented identity aspects, and to perform account recovery in case of device loss. To the best of our knowledge, this is the first effort towards fusing the aforementioned techniques under an integrated architecture. This architecture effectively deems the password paradigm obsolete with minimal modification on the service provider's software stack.

Updated: 2020-01-17
• arXiv.cs.NI Pub Date : 2019-04-15
Lemei Huang; Sheng Cheng; Yu Guan; Xinggong Zhang; Zongming Guo

Cache-equipped Base Stations (CBSs) are an attractive alternative for offloading the rapidly growing backhaul traffic in a mobile network. New 5G technology and dense femtocells enable one user to connect to multiple base stations simultaneously. A practical implementation requires the caches in BSs to be treated as cache servers, but few of the existing works have considered how to offload traffic or how to schedule HTTP requests to CBSs. In this work, we propose a DNS-based HTTP traffic allocation framework. It schedules user traffic among multiple CBSs by DNS resolution, taking into consideration load balancing, traffic allocation consistency, and the scheduling granularity of DNS. To address these issues, we formulate the user-traffic allocation problem in DNS-based mobile edge caching, aiming at maximizing QoS gain and allocation consistency while maintaining load balance. Then we present a simple greedy algorithm which gives a more consistent solution when user traffic changes dynamically. Theoretical analysis proves that it is within 3/4 of the optimal solution. Extensive evaluations in numerical and trace-driven settings show that the greedy algorithm can avoid about 50% of unnecessary shifts in user-traffic allocation, yield a more stable cache hit ratio, and balance the load between CBSs without losing much of the QoS gain.
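A consistency-aware greedy assignment in this spirit can be sketched as follows; the scoring rule (raw QoS gain plus a stickiness bonus for keeping the previous assignment, under a per-BS load cap) is an illustrative stand-in, not the paper's exact algorithm or its 3/4-approximation analysis.

```python
# Greedy, consistency-aware allocation sketch: assign each user group to the
# cache-equipped BS with the best score = QoS gain + a stickiness bonus for
# keeping the previous assignment, under a per-BS load cap. Illustrative
# stand-in, not the paper's exact algorithm.
def allocate(groups, gain, prev, capacity, stickiness=0.5):
    """gain[(group, bs)] -> QoS gain; prev: group -> previous BS (or absent)."""
    load = {bs: 0 for bs in capacity}
    assignment = {}
    for g in groups:
        candidates = [bs for bs in capacity if load[bs] < capacity[bs]]
        best = max(candidates, key=lambda bs: gain[(g, bs)]
                   + (stickiness if prev.get(g) == bs else 0.0))
        assignment[g] = best
        load[best] += 1
    return assignment

gain = {("g1", "bs1"): 2.0, ("g1", "bs2"): 1.8,
        ("g2", "bs1"): 1.5, ("g2", "bs2"): 1.4}
# g2 stays on bs2 despite a slightly higher raw gain on bs1
result = allocate(["g1", "g2"], gain, prev={"g2": "bs2"},
                  capacity={"bs1": 2, "bs2": 2})
print(result)   # → {'g1': 'bs1', 'g2': 'bs2'}
```

The stickiness term is what suppresses unnecessary allocation shifts when user traffic fluctuates, mirroring the consistency objective in the paper's formulation.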

Updated: 2020-01-17
• arXiv.cs.NI Pub Date : 2019-07-22
Quang-Trung Luu; Sylvaine Kerboeuf; Alexandre Mouradian; Michel Kieffer

With network slicing in 5G networks, Mobile Network Operators can create various slices for Service Providers (SPs) to accommodate customized services. Usually, the various Service Function Chains (SFCs) belonging to a slice are deployed on a best-effort basis. Nothing ensures that the Infrastructure Provider (InP) will be able to allocate enough resources to cope with the increasing demands of some SP. Moreover, in many situations, slices have to be deployed over some geographical area: coverage as well as minimum per-user rate constraints have then to be taken into account. This paper takes the InP perspective and proposes a slice resource provisioning approach to cope with multiple slice demands in terms of computing, storage, coverage, and rate constraints. The resource requirements of the various SFCs within a slice are aggregated within a graph of Slice Resource Demands (SRD). Infrastructure nodes and links have then to be provisioned so as to satisfy all SRDs. This problem leads to a Mixed Integer Linear Programming formulation. A two-step approach is considered, with several variants, depending on whether the constraints of each slice to be provisioned are taken into account sequentially or jointly. Once provisioning has been performed, any slice deployment strategy may be considered on the reduced-size infrastructure graph on which resources have been provisioned. Simulation results demonstrate the effectiveness of the proposed approach compared to a more classical direct slice embedding approach.

Updated: 2020-01-17
• arXiv.cs.NI Pub Date : 2019-12-03
Xingran Chen; Konstantinos Gatsis; Hamed Hassani; Shirin Saeedi Bidokhti

In applications of remote sensing, estimation, and control, timely communication is not always ensured by high-rate communication. This work proposes distributed age-efficient transmission policies for random access channels with $M$ transmitters. In the first part of this work, we analyze the age performance of stationary randomized policies by relating the problem of finding age to the absorption time of a related Markov chain. In the second part of this work, we propose the notion of \emph{age-gain} of a packet to quantify how much the packet will reduce the instantaneous age of information at the receiver side upon successful delivery. We then utilize this notion to propose a transmission policy in which transmitters act in a distributed manner based on the age-gain of their available packets. In particular, each transmitter sends its latest packet only if its corresponding age-gain is beyond a certain threshold which could be computed adaptively using the collision feedback or found as a fixed value analytically in advance. Both methods improve age of information significantly compared to the state of the art. In the limit of large $M$, we prove that when the arrival rate is small (below $\frac{1}{eM}$), slotted ALOHA-type algorithms are asymptotically optimal. As the arrival rate increases beyond $\frac{1}{eM}$, while age increases under slotted ALOHA, it decreases significantly under the proposed age-based policies. For arrival rates $\theta$, $\theta=\frac{1}{o(M)}$, the proposed algorithms provide a multiplicative factor of at least two compared to the minimum age under slotted ALOHA (minimum over all arrival rates). We conclude that, as opposed to the common practice, it is beneficial to increase the sampling rate (and hence the arrival rate) and transmit packets selectively based on their age-gain.
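The age-gain rule is simple to state in code: with AoI denoting age of information, a transmitter sends its freshest packet only when delivery would reduce the receiver's AoI by more than a threshold. A fixed threshold is used below; the paper also computes it adaptively from collision feedback.

```python
# Age-gain transmission rule: the age-gain of the freshest packet is the
# receiver's current AoI minus the packet's own age; transmit only if the
# gain exceeds a threshold (fixed here; the paper also derives it adaptively
# from collision feedback).
def should_transmit(receiver_aoi, packet_age, threshold):
    age_gain = receiver_aoi - packet_age
    return age_gain > threshold

# Receiver's information is 12 slots old; our freshest packet is 3 slots old.
print(should_transmit(receiver_aoi=12, packet_age=3, threshold=5))   # → True
# A stale packet (9 slots old) would gain too little, so stay silent.
print(should_transmit(receiver_aoi=12, packet_age=9, threshold=5))   # → False
```

Holding back low-gain transmissions is precisely what reduces collisions on the random access channel and lets the policy beat slotted ALOHA at higher arrival rates.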

Updated: 2020-01-17
• arXiv.cs.NI Pub Date : 2020-01-14
Mahdi Soltani; Mahdi Jafari Siavoshani; Amir Hossein Jahangir

With the growing number of Internet users and the prevalence of web applications, we have to deal with very complex software and applications in the network. This results in an increasing number of new vulnerabilities in the systems, which consequently leads to an increase in cyber attacks and, in particular, zero-day attacks. The cost of generating appropriate signatures for these attacks is a potential motive for using machine-learning-based methodologies. Although there exist many studies on the use of learning-based methods for attack detection, they generally use extracted features and overlook raw contents. This approach can lessen the performance of detection systems against content-based attacks like SQL injection, Cross-site Scripting (XSS), and various viruses. As a new paradigm, in this work, we propose a scheme, called the deep intrusion detection (DID) system, that uses the pure content of traffic flows in addition to traffic metadata in the learning and detection phases. To this end, we employ deep learning techniques recently developed in the machine learning community. Due to the inherent nature of deep learning, it can process high-dimensional data content and, accordingly, discover the sophisticated relations between the automatically extracted features of the traffic. To evaluate the proposed DID system, we use the ISCX IDS 2017 dataset. The evaluation metrics, such as precision and recall, reach $0.992$ and $0.998$, respectively, which show the high performance of the proposed DID method.

Updated: 2020-01-16
• arXiv.cs.NI Pub Date : 2020-01-15
Hatem Alharbi; Taisir E. H. Elgorashi; Jaafar M. H. Elmirghani

Fog computing is an emerging paradigm that aims to improve the efficiency and QoS of cloud computing by extending the cloud to the edge of the network. This paper develops a comprehensive energy efficiency analysis framework based on mathematical modeling and heuristics to study the offloading of virtual machine (VM) services from the cloud to the fog. The analysis addresses the impact of different factors, including the traffic between the VM and its users, the VM workload, the workload-versus-number-of-users profile, and the proximity of fog nodes to users. Overall, the power consumption can be reduced if the VM users' traffic is high and/or the VMs have a linear power profile. In such a linear-profile case, the creation of multiple VM replicas does not increase the computing power consumption significantly (there may be a slight increase due to idle/baseline power consumption) if the number of users remains constant; however, the VM replicas can be brought closer to the end users, thus reducing the transport network power consumption. In our scenario, the optimum placement of VMs over a cloud-fog architecture decreased the total power consumption significantly, by 56% and 64% under high user data rates compared to optimized distributed cloud placement and placement in the existing AT&T network cloud locations, respectively.
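The linear-profile argument is a one-line computation: with per-VM power P(u) = P_idle + k·u, splitting a fixed user population across r replicas adds only the extra idle power. The numbers below are illustrative, not the paper's.

```python
# Arithmetic behind the replica argument: with a linear power profile
# P(u) = P_idle + k*u, splitting a fixed user population across r replicas
# adds only the extra idle power (numbers are illustrative, not the paper's).
def total_power(users, replicas, p_idle=50.0, k=2.0):
    per_replica = users / replicas
    return replicas * (p_idle + k * per_replica)

one = total_power(100, 1)    # 50 + 200 = 250 W
four = total_power(100, 4)   # 4*50 + 200 = 400 W: only the idle term grows
print(one, four)   # → 250.0 400.0
```

The extra 150 W of idle power can be worth paying when the four replicas sit in fog nodes close to users, since the transport network then carries the high-rate user traffic over much shorter distances.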

Updated: 2020-01-16
• arXiv.cs.NI Pub Date : 2020-01-15

In this paper, we consider the problem of scheduling real-time traffic in wireless networks under a conflict-graph interference model and single-hop traffic. The objective is to guarantee that at least a certain fraction of the packets of each link are delivered within their deadlines, which is referred to as the delivery ratio. This problem has been studied before under restrictive frame-based traffic models, or via greedy maximal scheduling schemes like LDF (Largest-Deficit First) that provide a poor delivery ratio for general traffic patterns. In this paper, we pursue a different approach through randomization over the choice of the maximal set of links that can transmit at each time. We design randomized policies in collocated networks, multi-partite networks, and general networks that can achieve delivery ratios much higher than what is achievable by LDF. Further, our results apply to traffic (arrival and deadline) processes that evolve as positive recurrent Markov chains. Hence, this work improves on past work in terms of both efficiency and traffic assumptions. We further present extensive simulation results over various traffic patterns and interference graphs to illustrate the gains of our randomized policies over LDF variants.
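The randomization idea can be sketched by growing a maximal independent set of the conflict graph from a uniformly random link order, so that every maximal schedule can be selected (a deficit-ordered scheme like LDF keeps favoring the same sets); the toy graph below is an illustration, not one of the paper's policies.

```python
import random

# Randomized maximal scheduling sketch: grow a maximal independent set of
# the conflict graph from a random link order, giving every maximal schedule
# a chance of being picked (unlike a fixed deficit-ordered scheme).
def random_maximal_schedule(links, conflicts, rng):
    """conflicts: link -> set of links it cannot transmit alongside."""
    order = list(links)
    rng.shuffle(order)
    scheduled = set()
    for l in order:
        if not conflicts.get(l, set()) & scheduled:
            scheduled.add(l)               # l conflicts with nothing chosen
    return scheduled

# Path conflict graph a - b - c: the maximal schedules are {a, c} and {b}.
conflicts = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
rng = random.Random(0)
schedules = {frozenset(random_maximal_schedule(["a", "b", "c"], conflicts, rng))
             for _ in range(50)}
print(sorted(sorted(s) for s in schedules))   # → [['a', 'c'], ['b']]
```

Over repeated slots, both maximal schedules appear with positive probability, which is the property the deadline-ratio analysis relies on.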

Updated: 2020-01-16
• arXiv.cs.NI Pub Date : 2020-01-15
Pargorn Puttapirat; Haichuan Zhang; Jingyi Deng; Yuxin Dong; Jiangbo Shi; Hongyu He; Zeyu Gao; Chunbao Wang; Xiangrong Zhang; Chen Li

Transition from conventional to digital pathology requires a new category of biomedical informatic infrastructure that can facilitate the delicate pathological routine. Pathological diagnoses are sensitive to many external factors and are known to be subjective. Only systems that meet the strict requirements of pathology can run alongside pathological routines and eventually digitize the field, and the developed platform should comply with existing pathological routines and international standards. Currently, a number of available software tools can perform histopathological tasks, including virtual slide viewing, annotating, and basic image analysis; however, none of them can serve as a digital platform for pathology. Here we describe OpenHI2, an enhanced version of the Open Histopathological Image platform, which is capable of supporting all basic pathological tasks and file formats and is ready to be deployed in medical institutions on a standard server environment or cloud computing infrastructure. In this paper, we also describe the development decisions for the platform and propose solutions to overcome technical challenges so that OpenHI2 can serve as a platform for histopathological images. Further additions can be made to the platform since each component is modularized and fully documented. OpenHI2 is free, open-source, and available at https://gitlab.com/BioAI/OpenHI.

Updated: 2020-01-16
• arXiv.cs.NI Pub Date : 2020-01-15
Iacovos Ioannou; Vasos Vassiliou; Christophoros Christophorou; Andreas Pitsillides

Device-to-Device (D2D) communication is one of the technology components of the evolving 5G architecture, as it promises improvements in energy efficiency, spectral efficiency, overall system capacity, and higher data rates. These anticipated improvements in network performance have spearheaded a vast amount of research in D2D, which has identified significant challenges that need to be addressed before realizing the full potential of D2D in emerging 5G networks. Towards this end, this paper proposes the use of a distributed intelligent approach to control the formation of D2D networks. More precisely, the proposed approach uses Belief-Desire-Intention (BDI) intelligent agents with extended capabilities (BDIx) to manage each D2D node independently and autonomously, without the help of the base station. The paper includes a detailed algorithmic description of the transmission-mode decision, which maximizes the data rate and minimizes power consumption while taking the computational load into consideration. Simulations show the applicability of BDI agents in jointly solving D2D challenges.

Updated: 2020-01-16
• arXiv.cs.NI Pub Date : 2020-01-15
Mari Carmen de Toro; Carlos Borrego

The short-term adoption of opportunistic networks (OppNets) depends on improving their current performance. The Software-Defined Networking (SDN) architecture is used by Internet applications with high resource demands. SDN technology improves network performance by programmatically managing the network configuration through a control layer. In this paper, we propose that OppNet nodes use a control layer to obtain an overview of the whole network and use this knowledge to apply policies that improve network performance. As a use case for our experimentation, we focus on improving congestion control in OppNets with a control layer that dynamically regulates the replication degree used by the forwarding algorithms. We compare the performance of our proposal with two different configurations of the OppNet, over two community scenarios based on real mobility traces. The results show that our SDN-like approach outperforms the other two approaches in terms of delivery ratio and latency.

Updated: 2020-01-16
• arXiv.cs.NI Pub Date : 2020-01-15
Jo Inge Arnes; Randi Karlsen

Using smartphones for peer-to-peer communication over the Internet is difficult without the aid of centralized services. These centralized services, which usually reside in the cloud, are necessary for brokering communication between peers, and all communication must pass through them. A reason for this is that smartphones lack publicly reachable IP addresses. Also, because people carry their smartphones with them, smartphones will often disconnect from one network and connect to another. Smartphones can also go offline. Additionally, a network of trusted peers (or friends) requires a directory of known peers, authentication mechanisms, and secure communication channels. In this paper, we propose a peer-to-peer middleware that provides these features without the need for centralized services.

Updated: 2020-01-16
• arXiv.cs.NI Pub Date : 2020-01-15
Benjamin Sliwa; Christian Wietfeld

Vehicular crowdsensing is anticipated to become a key catalyst for data-driven optimization in the Intelligent Transportation System (ITS) domain. Yet, the expected growth in massive Machine-type Communication (mMTC) caused by vehicle-to-cloud transmissions will confront the cellular network infrastructure with great capacity-related challenges. A cognitive way of achieving relief without introducing additional physical infrastructure is to apply opportunistic data transfer for delay-tolerant applications, whereby clients schedule their data transmissions in a channel-aware manner in order to avoid retransmissions and interference with other cell users. In this paper, we introduce a novel approach for this type of resource-aware data transfer which brings together supervised learning for network quality prediction with reinforcement learning-based decision making. The performance evaluation is carried out using data-driven network simulation and real-world experiments in the public cellular networks of multiple Mobile Network Operators (MNOs) in different scenarios. The proposed transmission scheme significantly outperforms state-of-the-art probabilistic approaches in most scenarios and achieves data rate improvements of up to 181% in the uplink and up to 270% in the downlink direction in comparison to conventional periodic data transfer.

Updated: 2020-01-16
• arXiv.cs.NI Pub Date : 2020-01-15
Ronshee Chawla; Abishek Sankararaman; Ayalvadi Ganesh; Sanjay Shakkottai

We consider a decentralized multi-agent Multi Armed Bandit (MAB) setup consisting of $N$ agents, solving the same MAB instance to minimize individual cumulative regret. In our model, agents collaborate by exchanging messages through pairwise gossip style communications. We develop two novel algorithms, where each agent only plays from a subset of all the arms. Agents use the communication medium to recommend only arm-IDs (not samples), and thus update the set of arms from which they play. We establish that, if agents communicate $\Omega(\log(T))$ times through any connected pairwise gossip mechanism, then every agent's regret is a factor of order $N$ smaller compared to the case of no collaborations. Furthermore, we show that the communication constraints only have a second order effect on the regret of our algorithm. We then analyze this second order term of the regret to derive bounds on the regret-communication tradeoffs. Finally, we empirically evaluate our algorithm and conclude that the insights are fundamental and not artifacts of our bounds. We also prove a lower bound showing that the regret scaling achieved by our algorithm cannot be improved even in the absence of any communication constraints. Our results demonstrate that even a minimal level of collaboration among agents greatly reduces regret for all agents.
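A minimal sketch of the gossip mechanism described above, under assumed Bernoulli arms and invented parameters: each agent runs UCB over its own small subset of arms and periodically recommends only the ID of its empirically best arm to a random peer, which adds that ID to its playing set (no reward samples are exchanged).

```python
import math, random

rng = random.Random(7)
K, N, T, GOSSIP = 8, 4, 4000, 50
means = [0.95] + [0.1] * (K - 1)           # arm 0 is the best arm (assumed)
arm_sets = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]  # each agent starts with 2 arm-IDs
counts = [[0] * K for _ in range(N)]
rewards = [[0.0] * K for _ in range(N)]

def ucb_pick(a, t):
    """Standard UCB1 restricted to agent a's current arm set."""
    best, score = None, -1.0
    for k in arm_sets[a]:
        if counts[a][k] == 0:
            return k
        s = rewards[a][k] / counts[a][k] + math.sqrt(2 * math.log(t + 1) / counts[a][k])
        if s > score:
            best, score = k, s
    return best

for t in range(1, T + 1):
    for a in range(N):
        k = ucb_pick(a, t)
        counts[a][k] += 1
        rewards[a][k] += 1.0 if rng.random() < means[k] else 0.0
    if t % GOSSIP == 0:                    # pairwise gossip: recommend arm-IDs only
        for a in range(N):
            emp_best = max(arm_sets[a],
                           key=lambda k: rewards[a][k] / max(1, counts[a][k]))
            peer = rng.choice([b for b in range(N) if b != a])
            arm_sets[peer].add(emp_best)   # peers receive IDs, not samples

print([sorted(s) for s in arm_sets])
```

With well-separated means, the ID of the best arm spreads to all agents after a few gossip rounds, which is the mechanism behind the order-$N$ regret reduction.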

Updated: 2020-01-16
• arXiv.cs.NI Pub Date : 2020-01-13
Merima Kulin; Tarik Kazaz; Ingrid Moerman; Eli de Poorter

This paper provides a systematic and comprehensive survey that reviews the latest research efforts focused on machine learning (ML) based performance improvement of wireless networks, while considering all layers of the protocol stack (PHY, MAC and network). First, the related work and paper contributions are discussed, followed by the necessary background on data-driven approaches and machine learning, enabling non-machine-learning experts to understand all discussed techniques. Then, a comprehensive review is presented of works employing ML-based approaches to optimize wireless communication parameter settings to achieve improved network quality-of-service (QoS) and quality-of-experience (QoE). We first categorize these works into radio analysis, MAC analysis, and network prediction approaches, followed by subcategories within each. Finally, open challenges and broader perspectives are discussed.

Updated: 2020-01-15
• arXiv.cs.NI Pub Date : 2020-01-14

Observability is a fundamental concept in system inference and estimation. This paper focuses on structural observability analysis of Cartesian product networks. Cartesian product networks emerge in a variety of applications, including parallel and distributed systems. We provide a structural approach to extend the structural observability of the constituent networks (referred to as the factor networks) to that of the Cartesian product network. The structural approach is based on graph theory and is generic. We introduce certain structures that are tightly related to the structural observability of networks, namely the parent Strongly-Connected-Component (parent SCC), the parent node, and contractions. The results show that for particular types of networks (e.g., networks containing contractions) the structural observability of the factor network can be recovered via the Cartesian product. In other words, if one of the factor networks is structurally rank-deficient, and the other factor network contains a spanning cycle family, then the Cartesian product of the two networks is structurally full-rank. We define certain network structures for structural observability recovery. Furthermore, we derive the number of observer nodes (nodes whose states are measured by an output) in the Cartesian product network based on the number of observer nodes in the factor networks. An example illustrates the graph-theoretic analysis in the paper.
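The structural claim above can be checked numerically. Assuming the standard identity that the adjacency pattern of a Cartesian product is A1 ⊗ I + I ⊗ A2, the sketch below computes structural rank as a maximum bipartite matching over the nonzero pattern (Kuhn's algorithm) and verifies that a rank-deficient factor containing a contraction, combined with a factor that has a spanning cycle family (a cycle), yields a structurally full-rank product. The example graphs are invented for illustration.

```python
def srank(pattern):
    """Structural rank of a zero/nonzero pattern = maximum bipartite matching
    between rows and columns over the nonzero entries (Kuhn's algorithm)."""
    n = len(pattern)
    match_col = [-1] * n
    def augment(r, seen):
        for c in range(n):
            if pattern[r][c] and not seen[c]:
                seen[c] = True
                if match_col[c] < 0 or augment(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False
    return sum(augment(r, [False] * n) for r in range(n))

def cartesian_product_pattern(A, B):
    """Zero/nonzero pattern of A (x) I + I (x) B: the adjacency structure of
    the Cartesian product of the two factor networks."""
    n, m = len(A), len(B)
    P = [[0] * (n * m) for _ in range(n * m)]
    for u in range(n):
        for v in range(m):
            for u2 in range(n):
                if A[u][u2]:
                    P[u * m + v][u2 * m + v] = 1
            for v2 in range(m):
                if B[v][v2]:
                    P[u * m + v][u * m + v2] = 1
    return P

# Factor 1: structurally rank-deficient (a contraction: two nodes point only to node 0).
A1 = [[0, 0, 0],
      [1, 0, 0],
      [1, 0, 0]]
# Factor 2: a 3-cycle, i.e. a spanning cycle family (permutation pattern).
A2 = [[0, 1, 0],
      [0, 0, 1],
      [1, 0, 0]]

P = cartesian_product_pattern(A1, A2)
print("srank(A1) =", srank(A1), "of", len(A1))     # rank-deficient factor
print("srank(product) =", srank(P), "of", len(P))  # full structural rank recovered
```

The I ⊗ A2 term alone already contains a permutation over all product nodes, which is why the spanning cycle family in one factor recovers full structural rank.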

Updated: 2020-01-15
• arXiv.cs.NI Pub Date : 2020-01-14
Natale Patriciello; Sanjay Goyal; Sandra Lagen; Lorenza Giupponi; Biljana Bojovic; Alpaslan Demir; Mihaela Beluri

In December 2019, the 3GPP defined the road-map for Release-17, which includes new features on the operation of New Radio (NR) in millimeter-wave bands with highly directional communications systems, i.e., up to 52.6 GHz. In this paper, a system-level simulation based study on the coexistence of NR-based access to unlicensed spectrum (NR-U) and an IEEE technology, i.e., 802.11ad Wireless Gigabit (WiGig), at 60 GHz bands is conducted. For NR-U, an extension of NR Release-15 based model is used such that the 60 GHz regulatory requirements are satisfied. First, the design and capabilities of the developed open source ns-3 based simulator are presented and then end-to-end performance results of coexistence with different channel access mechanisms for NR-U in a 3GPP indoor scenario are discussed. It is shown that NR-U with Listen-Before-Talk channel access mechanism does not have any adverse impact on WiGig performance in terms of throughput and latency, which demonstrates that NR-U design fulfills the fairness coexistence objective, i.e., NR-U and WiGig coexistence is proven to be feasible.

Updated: 2020-01-15
• arXiv.cs.NI Pub Date : 2020-01-14
He Chen; Yifan Gu; Soung-Chang Liew

As the most well-known application of the Internet of Things (IoT), remote monitoring is now pervasive. In these monitoring applications, information usually has a higher value when it is fresher. A new metric, termed the age of information (AoI), has recently been proposed to quantify the information freshness in various IoT applications. This paper concentrates on the design and analysis of age-oriented random access for massive IoT networks. Specifically, we devise a new stationary threshold-based age-dependent random access (ADRA) protocol, in which each IoT device accesses the channel with a certain probability only if its instantaneous AoI exceeds a predetermined threshold. We manage to evaluate the average AoI of the proposed ADRA protocol mathematically by decoupling the tangled AoI evolution of multiple IoT devices and modelling the decoupled AoI evolution of each device as a Discrete-Time Markov Chain. Simulation results validate our theoretical analysis and affirm the superior age performance of the proposed ADRA protocol over the conventional age-independent scheme.
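A minimal simulation sketch of the ADRA idea, with invented parameters (20 devices, access probability 0.1, threshold 15) and a collision-channel assumption (a slot succeeds only when exactly one device transmits); it is not the paper's Markov-chain analysis, but it illustrates why gating channel access on AoI reduces collisions.

```python
import random

def simulate_adra(n=20, p=0.1, threshold=15, slots=50000, seed=3):
    """Threshold-based age-dependent random access: in each slot a device
    contends with probability p only if its instantaneous AoI exceeds the
    threshold; a slot succeeds only if exactly one device contends."""
    rng = random.Random(seed)
    aoi = [1] * n
    total = 0
    for _ in range(slots):
        contenders = [i for i in range(n)
                      if aoi[i] > threshold and rng.random() < p]
        winner = contenders[0] if len(contenders) == 1 else None
        for i in range(n):
            aoi[i] = 1 if i == winner else aoi[i] + 1
        total += sum(aoi)
    return total / (slots * n)

adra_aoi = simulate_adra(threshold=15)   # age-dependent access
indep_aoi = simulate_adra(threshold=0)   # conventional age-independent access
print("ADRA avg AoI:", round(adra_aoi, 1), "| age-independent:", round(indep_aoi, 1))
```

Setting the threshold to 0 recovers the age-independent baseline: all devices contend every slot, so more slots are wasted on collisions and the network-wide average AoI grows.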

Updated: 2020-01-15
• arXiv.cs.NI Pub Date : 2020-01-10
Eugene Grayver; Alexander Utter

Software defined radio is a widely accepted paradigm for design of reconfigurable modems. The continuing march of Moore's law makes real-time signal processing on general purpose processors feasible for a large set of waveforms. Data rates in the low Mbps can be processed on low-power ARM processors, and much higher data rates can be supported on large x86 processors. The advantages of all-software development (vs. FPGA/DSP/GPU) are compelling: a much wider pool of talent, lower development time and cost, and easier maintenance and porting. However, very high-rate systems (above 100 Mbps) are still firmly in the domain of custom and semi-custom hardware (mostly FPGAs). In this paper we describe an architecture and testbed for an SDR that can be easily scaled to support over 3 GHz of bandwidth and data rates up to 10 Gbps. The paper covers a novel technique to parallelize typically serial algorithms for phase and symbol tracking, followed by a discussion of data distribution for a massively parallel architecture. We provide a brief description of a mixed-signal front end and conclude with measurement results. To the best of the authors' knowledge, the system described in this paper is an order of magnitude faster than any prior published result.

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-10
Ali Parchekani; Salar Nouri Naghadeh; Vahid Shah-Mansouri

A traffic flow is a set of packets transferred between a client and a server that share the same source and destination IP addresses and port numbers. Traffic classification refers to the task of categorizing traffic flows into application-aware classes such as chat, streaming, VoIP, etc. Classification can be used for several purposes, including policy enforcement and control or QoS management. In this paper, we introduce a novel end-to-end traffic classification method to distinguish between traffic classes, including VPN traffic. Classifying VPN traffic is not trivial using traditional approaches due to its encrypted nature. We utilize two well-known neural networks, namely a multi-layer perceptron and a recurrent neural network, focused on two metrics: class scores and distance from the centers of the classes. These approaches combine extraction, selection, and classification functionality into a single end-to-end system to systematically learn the non-linear relationship between the input and the predicted performance. We can therefore distinguish VPN traffic from non-VPN traffic by rejecting flows that do not match any learned non-VPN class, and, moreover, obtain the application type of non-VPN traffic at the same time. The approach is evaluated using the public ISCX VPN-nonVPN traffic dataset and an acquired real dataset. The results of the analysis demonstrate that our proposed model meets realistic precision requirements.
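The rejection idea, using distance from class centers to decide whether a flow belongs to any known non-VPN class, can be sketched with a nearest-centroid stand-in for the neural networks. The feature centers and the rejection radius below are entirely hypothetical.

```python
import math

# Hypothetical per-class feature centers for non-VPN application classes
# (features here: mean packet size in bytes, mean inter-arrival time in s).
centers = {"chat": (80.0, 0.9), "streaming": (1200.0, 0.01), "voip": (160.0, 0.02)}

def classify(flow, reject_radius=150.0):
    """Nearest-centroid classification with a rejection rule: a flow whose
    distance from every known class center exceeds the radius is rejected
    and labeled VPN (its encrypted features match no plain-traffic class)."""
    label, dist = min(((c, math.dist(flow, mu)) for c, mu in centers.items()),
                      key=lambda x: x[1])
    return label if dist <= reject_radius else "vpn"

print(classify((1180.0, 0.02)))   # close to the streaming center
print(classify((600.0, 0.5)))     # far from every center -> rejected as VPN
```

In the paper the distances and class scores come from trained networks rather than fixed centroids, but the accept-or-reject logic is the same.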

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-12
Xianzhe Xu; Meixia Tao; Cong Shen

This paper investigates learning-based caching in small-cell networks (SCNs) when user preference is unknown. The goal is to optimize the cache placement in each small base station (SBS) to minimize the system's long-term transmission delay. We model this sequential multi-agent decision-making problem from a multi-agent multi-armed bandit (MAMAB) perspective. Rather than estimating user preference first and then optimizing the cache strategy, we propose several MAMAB-based algorithms to directly learn the cache strategy online in both stationary and non-stationary environments. In the stationary environment, we first propose two high-complexity agent-based collaborative MAMAB algorithms with performance guarantees. Then we propose a low-complexity distributed MAMAB algorithm that ignores SBS coordination. To achieve a better balance between SBS coordination gain and computational complexity, we develop an edge-based collaborative MAMAB with a coordination-graph edge-based reward assignment method. In the non-stationary environment, we modify the MAMAB-based algorithms proposed for the stationary environment by introducing a practical initialization method and designing new perturbed terms to adapt to the dynamic environment. Simulation results are provided to validate the effectiveness of our proposed algorithms. The effects of different parameters on caching performance are also discussed.

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-12
Xin Zhang; Minghong Fang; Jia Liu; Zhengyuan Zhu

With the rise of machine learning (ML) and the proliferation of smart mobile devices, recent years have witnessed a surge of interest in performing ML in wireless edge networks. In this paper, we consider the problem of jointly improving the data privacy and communication efficiency of distributed edge learning, both of which are critical performance metrics in wireless edge network computing. Toward this end, we propose a new decentralized stochastic gradient method with sparse differential Gaussian-masked stochastic gradients (SDM-DSGD) for non-convex distributed edge learning. Our main contributions are three-fold: i) we theoretically establish the privacy and communication efficiency performance guarantees of our SDM-DSGD method, which outperforms all existing works; ii) we show that SDM-DSGD improves the fundamental training-privacy trade-off by {\em two orders of magnitude} compared with the state-of-the-art; iii) we reveal theoretical insights and offer practical design guidelines for the interactions between privacy preservation and communication efficiency, two conflicting performance goals. We conduct extensive experiments with a variety of learning models on the MNIST and CIFAR-10 datasets to verify our theoretical findings. Collectively, our results contribute to the theory and algorithm design for distributed edge learning.
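A sketch of what a sparse, Gaussian-masked gradient share could look like, assuming top-k magnitude sparsification with additive Gaussian noise on the surviving coordinates; the actual SDM-DSGD construction and its privacy accounting are in the paper.

```python
import random

def sdm_mask(grad, k, sigma, rng):
    """Keep only the k largest-magnitude coordinates (communication
    efficiency) and add Gaussian noise to each kept value
    (differential-privacy-style masking) before sharing with neighbors."""
    top = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    return {i: grad[i] + rng.gauss(0.0, sigma) for i in top}

rng = random.Random(0)
g = [0.05, -2.0, 0.3, 1.1, -0.01, 0.7]
shared = sdm_mask(g, k=2, sigma=0.1, rng=rng)
print(shared)   # only 2 noisy coordinates leave the device
```

Only k noisy (index, value) pairs are transmitted per round, which is where the joint communication and privacy gain comes from.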

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-12
Amin Farajzadeh; Ozgur Ercetin; Halim Yanikomeroglu

Future intelligent systems will consist of a massive number of battery-less sensors, where quick and accurate aggregation of sensor data will be of paramount importance. Over-the-air computation (AirComp) is a promising technology wherein sensors concurrently transmit their measurements over the wireless channel, and a reader receives a noisy version of a function of the measurements due to the superposition property. A key challenge in AirComp is the accurate power alignment of individual transmissions, previously addressed using conventional precoding methods. In this paper, we investigate a UAV-enabled backscatter communication framework, wherein a UAV acts as both power emitter and reader. The mobility of the reader is leveraged to replace the complicated precoding at the sensors: the UAV first collects the sum channel gains in a first flyover and then uses these to estimate the actual aggregated sensor data in a second flyover. Our results demonstrate improvements of up to 10 dB in MSE compared to a benchmark case where the UAV is incognizant of the sum channel gains.

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-12
Minsung Kim; Davide Venturelli; Kyle Jamieson

User demand for increasing amounts of wireless capacity continues to outpace supply, and so to meet this demand, significant progress has been made in new MIMO wireless physical layer techniques. Higher-performance systems now remain impractical largely only because their algorithms are extremely computationally demanding. For optimal performance, an amount of computation that increases at an exponential rate both with the number of users and with the data rate of each user is often required. The base station's computational capacity is thus becoming one of the key limiting factors on wireless capacity. QuAMax is the first large MIMO centralized radio access network design to address this issue by leveraging quantum annealing on the problem. We have implemented QuAMax on the 2,031 qubit D-Wave 2000Q quantum annealer, the state-of-the-art in the field. Our experimental results evaluate that implementation on real and synthetic MIMO channel traces, showing that 10~$\mu$s of compute time on the 2000Q can enable 48 user, 48 AP antenna BPSK communication at 20 dB SNR with a bit error rate of $10^{-6}$ and a 1,500 byte frame error rate of $10^{-4}$.
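The computational burden described above can be made concrete: for BPSK, maximum-likelihood MIMO detection minimizes ||y - Hx||^2 over x in {-1,+1}^N, a search space of size 2^N. The brute-force sketch below (invented Gaussian channel, N = 6) illustrates the objective that QuAMax reformulates as an Ising problem for the annealer; it is not the QuAMax implementation.

```python
import itertools, random

rng = random.Random(42)
N = 6                                     # 6 users, 6 AP antennas, BPSK
H = [[rng.gauss(0, 1) for _ in range(N)] for _ in range(N)]
x_true = [rng.choice((-1, 1)) for _ in range(N)]
y = [sum(H[i][j] * x_true[j] for j in range(N)) + rng.gauss(0, 0.1)
     for i in range(N)]                   # high-SNR received vector

def cost(x):
    """ML detection objective ||y - Hx||^2; the 2^N-point search over x is
    the exponential burden that QuAMax maps onto the annealer."""
    return sum((y[i] - sum(H[i][j] * x[j] for j in range(N))) ** 2
               for i in range(N))

x_hat = min(itertools.product((-1, 1), repeat=N), key=cost)  # brute-force ML
print("ML estimate matches transmitted bits:", list(x_hat) == x_true)
```

Expanding the squared norm shows the objective is quadratic in the ±1 variables, i.e. an Ising energy, which is what makes annealing hardware applicable.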

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-13
Qian Wang; He Chen; Yonghui Li; Branka Vucetic

This paper considers a wireless network with a base station (BS) conducting timely transmission to two clients in a slotted manner via hybrid non-orthogonal multiple access (NOMA)/orthogonal multiple access (OMA). Specifically, the BS is able to adaptively switch between NOMA and OMA for the downlink transmission to minimize the information freshness, characterized by the Age of Information (AoI), of the network. If the BS chooses OMA, it can only serve one client within a time slot and should decide which client to serve; if the BS chooses NOMA, it can serve both clients simultaneously and should decide the power allocated to each client. To minimize the weighted sum of the expected AoI of the network, we formulate a Markov Decision Process (MDP) problem and develop an optimal policy for the BS to decide whether to use NOMA or OMA for each downlink transmission based on the instantaneous AoI of both clients. We prove the existence of an optimal stationary and deterministic policy, and perform action elimination to reduce the action space for lower computational complexity. The optimal policy is shown to have a switching-type property with clear decision-switching boundaries. A suboptimal policy with lower computational complexity is also devised, which achieves near-optimal performance according to our simulation results. The performance of different policies under different system settings is compared and analyzed in numerical results to provide useful insights for practical system designs.
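A compact sketch of such an MDP, with invented success probabilities, a capped AoI state space, and a discounted cost as a stand-in for the paper's average-AoI objective: value iteration over states (a1, a2) and actions {OMA1, OMA2, NOMA} yields a policy whose switching structure can be inspected.

```python
import itertools

A = 10                                   # AoI cap keeps the state space finite
P = {"OMA1": (0.9, 0.0), "OMA2": (0.0, 0.9), "NOMA": (0.6, 0.6)}  # assumed
GAMMA, W = 0.95, (1.0, 1.0)              # discount and client weights (assumed)
states = list(itertools.product(range(1, A + 1), repeat=2))
V = {s: 0.0 for s in states}

def step(a, ok):                         # AoI resets on success, else grows
    return 1 if ok else min(a + 1, A)

def q_value(s, act):
    p1, p2 = P[act]
    q = 0.0
    for ok1 in (0, 1):
        for ok2 in (0, 1):
            pr = (p1 if ok1 else 1 - p1) * (p2 if ok2 else 1 - p2)
            n = (step(s[0], ok1), step(s[1], ok2))
            q += pr * (W[0] * n[0] + W[1] * n[1] + GAMMA * V[n])
    return q

for _ in range(300):                     # value iteration to (near) convergence
    V = {s: min(q_value(s, a) for a in P) for s in states}
policy = {s: min(P, key=lambda a: q_value(s, a)) for s in states}
print("state (10,1) ->", policy[(10, 1)], "| state (1,10) ->", policy[(1, 10)])
```

As expected from the switching-type structure, when one client's AoI dominates, the policy never serves only the fresh client.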

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-13
Tianxiang Tan; Guohong Cao

Many mobile applications have been developed to apply deep learning for video analytics. Although these advanced deep learning models can provide better results, they also suffer from high computational overhead, which means longer delay and more energy consumption when running on mobile devices. To address this issue, we propose a framework called FastVA, which supports deep learning video analytics through edge processing and the Neural Processing Unit (NPU) in mobile devices. The major challenge is to determine when to offload the computation and when to use the NPU. Based on the processing time and accuracy requirements of the mobile application, we study two problems: Max-Accuracy, where the goal is to maximize the accuracy under some time constraints, and Max-Utility, where the goal is to maximize the utility, which is a weighted function of processing time and accuracy. We formulate them as integer programming problems and propose heuristic-based solutions. We have implemented FastVA on smartphones and demonstrated its effectiveness through extensive evaluations.

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-13
Ivan Wang-Hei Ho; Sid Chi-Kin Chau; Elmer R. Magsino; Kanghao Jia

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-13
Bohai Li; He Chen; Yong Zhou; Yonghui Li

This paper considers a cooperative status update system with a source aiming to send randomly generated status updates to a designated destination as timely as possible with the help of a relay. We adopt a recently proposed concept, Age of Information (AoI), to characterize the timeliness of the status updates. We propose an age-oriented opportunistic relaying (AoR) protocol to reduce the AoI of the considered system. Specifically, the relay opportunistically replaces the source to retransmit the successfully received status updates that have not been correctly delivered to the destination, but the retransmission of the relay can be preempted by the arrival of a new status update at the source. By carefully analyzing the evolution of AoI, we derive a closed-form expression of the average AoI for the proposed AoR protocol. We further minimize the average AoI by optimizing the generation probability of the status updates at the source. Simulation results validate our theoretical analysis and demonstrate that the average AoI performance of the proposed AoR protocol is superior to that of the non-cooperative system.

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-13
Peng Yang; Xing Xi; Tony Q. S. Quek; Jingxuan Chen; Xianbin Cao; Dapeng Wu

The radio access network (RAN) is regarded as one of the potential proposals for massive Internet of Things (mIoT), where the random access channel (RACH) procedure should be exploited for IoT devices to access the RAN. However, modelling the dynamic RACH process of mIoT devices is challenging. To address this challenge, we first revisit the frame and minislot structure of the RAN. Then, we correlate the RACH request of an IoT device with its queue status and analyze the queue evolution process. Based on the analysis result, we derive a closed-form expression for the RA success probability of the device. Besides, considering the agreement on converging different services onto a shared infrastructure, we further investigate RAN slicing for mIoT and bursty ultra-reliable and low-latency communications (URLLC) service multiplexing. Specifically, we formulate the RAN slicing problem as an optimization problem aiming at optimally orchestrating RAN resources for mIoT slices and bursty URLLC slices to maximize the RA success probability and energy-efficiently satisfy the bursty URLLC slices' quality-of-service (QoS) requirements. A slice resource optimization (SRO) algorithm exploiting relaxation and approximation with provable tightness and error bound is then proposed to solve the optimization problem.

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-13
Faheem Zafari; Prithwish Basu; Kin K. Leung; Jian Li; Ananthram Swami; Don Towsley

The growing demand for edge computing resources, particularly due to increasing popularity of Internet of Things (IoT), and distributed machine/deep learning applications poses a significant challenge. On the one hand, certain edge service providers (ESPs) may not have sufficient resources to satisfy their applications according to the associated service-level agreements. On the other hand, some ESPs may have additional unused resources. In this paper, we propose a resource-sharing framework that allows different ESPs to optimally utilize their resources and improve the satisfaction level of applications subject to constraints such as communication cost for sharing resources across ESPs. Our framework considers that different ESPs have their own objectives for utilizing their resources, thus resulting in a multi-objective optimization problem. We present an $N$-person \emph{Nash Bargaining Solution} (NBS) for resource allocation and sharing among ESPs with \emph{Pareto} optimality guarantee. Furthermore, we propose a \emph{distributed}, primal-dual algorithm to obtain the NBS by proving that the strong-duality property holds for the resultant resource sharing optimization problem. Using synthetic and real-world data traces, we show numerically that the proposed NBS based framework not only enhances the ability to satisfy applications' resource demands, but also improves utilities of different ESPs.
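For two ESPs and a single shared resource, the NBS reduces to maximizing the Nash product of utility gains over the feasible split, which a one-dimensional search can illustrate. The utilities, capacity, and disagreement points below are invented; the paper's distributed primal-dual algorithm handles the general N-ESP, multi-resource case.

```python
import math

C = 10.0                                  # shared resource capacity (assumed)
def u1(x): return math.log(1 + 2 * x)     # assumed concave ESP utilities
def u2(x): return math.log(1 + x)
d1, d2 = u1(0.0), u2(0.0)                 # disagreement points: no sharing

best, best_x = -1.0, None
steps = 10000
for i in range(steps + 1):                # grid search over the 1-D split
    x1 = C * i / steps
    prod = (u1(x1) - d1) * (u2(C - x1) - d2)   # Nash product of gains
    if prod > best:
        best, best_x = prod, (x1, C - x1)

print("NBS split:", tuple(round(x, 2) for x in best_x),
      "| Nash product:", round(best, 3))
```

The maximizer is interior: both ESPs gain relative to disagreement, which is the Pareto-optimality and fairness property the NBS is chosen for.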

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-13
Ambuj Varshney; Andreas Soleiman; Thiemo Voigt

Due to extremely low power consumption, backscatter has become the transmission mechanism of choice for battery-free devices that operate on harvested energy. However, a limitation of recent backscatter systems is that the communication range scales with the strength of the ambient carrier signal (ACS). This means that to achieve a long range, a backscatter tag needs to reflect a strong ACS, which in practice means that it needs to be close to an ACS emitter. We present TunnelScatter, a mechanism that overcomes this limitation. TunnelScatter uses a tunnel-diode-based radio frequency oscillator to enable transmissions when there is no ACS, and the same oscillator as a reflection amplifier to support backscatter transmissions when the ACS is weak. Our results show that even without an ACS, TunnelScatter is able to transmit through several walls covering a distance of 18 meters while consuming a peak biasing power of 57 microwatts. Based on TunnelScatter, we design battery-free sensor tags, called TunnelTags, that can sense physical phenomena and transmit them using the TunnelScatter mechanism.

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-13
Nikita Korzhitskii; Niklas Carlsson

Internet security and privacy stand on the trustworthiness of public certificates signed by Certificate Authorities (CAs). However, software products do not trust the same CAs and therefore maintain different root stores, each typically containing hundreds of trusted roots capable of issuing "trusted" certificates for any domain. Incidents with misissued certificates motivated Google to implement and enforce Certificate Transparency (CT). CT logs archive certificates in a public, auditable and append-only manner. The adoption of CT changed the trust landscape, with logs too maintaining their own root lists and only logging certificates that chain back to one of their roots. In this paper, we present a first characterization of this emerging CT root store landscape, as well as the tool that we developed for data collection, visualization, and analysis of the root stores. As part of our characterization, we compare the logs' root stores and quantify their changes with respect to both each other and the root stores of major software vendors, look at evolving vendor CT policies, and show that root store mismanagement may be linked to log misbehavior. Finally, we present and discuss the results of a survey that we have sent to the log operators participating in Apple's and Google's CT log programs.

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2019-07-06
Colton Powell; Christopher Desiniotis; Behnam Dezfouli

With the rise of the Internet of Things (IoT), fog computing has emerged to help traditional cloud computing in meeting scalability demands. Fog computing makes it possible to fulfill real-time requirements of applications by bringing more processing, storage, and control power geographically closer to end-devices. However, since fog computing is a relatively new field, there is no standard platform for research and development in a realistic environment, and this dramatically inhibits innovation and development of fog-based applications. In response to these challenges, we propose the Fog Development Kit (FDK). By providing high-level interfaces for allocating computing and networking resources, the FDK abstracts the complexities of fog computing from developers and enables the rapid development of fog systems. In addition to supporting application development on a physical deployment, the FDK supports the use of emulation tools (e.g., GNS3 and Mininet) to create realistic environments, allowing fog application prototypes to be built with zero additional costs and enabling seamless portability to a physical infrastructure. Using a physical testbed and various kinds of applications running on it, we verify the operation and study the performance of the FDK. Specifically, we demonstrate that resource allocations are appropriately enforced and guaranteed, even amidst extreme network congestion. We also present simulation-based scalability analysis of the FDK versus the number of switches, the number of end-devices, and the number of fog-devices.

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2019-09-20
Maximilian von Tschirschnitz; Marcel Wagner; Marc-Oliver Pahl; Georg Carle

Today, many applications such as production or rescue settings rely on highly accurate entity positioning. Advanced Time of Flight (ToF) based positioning methods provide high-accuracy localization of entities. A key challenge for ToF-based positioning is to synchronize the clocks between the participating entities. This paper summarizes and analyzes ToA and TDoA methods with respect to clock error robustness. The focus is on synchronization-less methods, i.e., methods which significantly reduce the infrastructure requirements. We introduce a unified notation to survey and compare the relevant work from the literature. Then we apply a clock error model and compute worst-case location-accuracy errors. Our analysis reveals a superior robustness against clock errors for so-called Double-Pulse methods when applied to radio-based ToF positioning.
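The clock-error sensitivity analyzed above rests on simple ToF/TDoA arithmetic: distances are derived from propagation times at the speed of light, so small timing errors translate directly into metre-scale position errors. A minimal illustrative sketch (function names and values are ours, not the paper's):

```python
# Illustrative sketch of the basic ToF/TDoA arithmetic underlying the
# clock-error analysis. All names and values here are hypothetical.

SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_distance(t_flight_s: float) -> float:
    """Time of Arrival: distance from a one-way propagation time."""
    return SPEED_OF_LIGHT * t_flight_s

def tdoa_range_difference(t_arrival_a_s: float, t_arrival_b_s: float) -> float:
    """Time Difference of Arrival: range difference to two receivers,
    which constrains the transmitter to one branch of a hyperbola."""
    return SPEED_OF_LIGHT * (t_arrival_a_s - t_arrival_b_s)

# A 10 ns clock error already shifts a ToF distance by roughly 3 m:
print(tof_distance(10e-9))  # ≈ 3 m
```

This is why the robustness of Double-Pulse methods against clock errors matters: nanosecond-scale oscillator drift is the difference between sub-metre and multi-metre accuracy.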

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2019-09-20
Maximilian von Tschirschnitz; Marcel Wagner; Marc-Oliver Pahl; Georg Carle

Many applications require positioning. Time of Flight (ToF) methods calculate distances by measuring the propagation time of signals. We present a novel ToF localization method. Our new approach works infrastructure-less, without pre-defined roles like Anchors or Tags. It generalizes existing synchronization-less Time Difference of Arrival (TDoA) and Time of Arrival (ToA) algorithms. We show how known algorithms can be derived from our new method. A major advantage of our approach is that it provides comparable or better clock error robustness, i.e., the typical errors of crystal oscillators have negligible impact on TDoA and ToA measurements. We show that our channel usage is in most cases superior to the state of the art.

Updated: 2020-01-14
• arXiv.cs.NI Pub Date : 2020-01-10
Mao V. Ngo; Hakima Chaouchi; Tie Luo; Tony Q. S. Quek

Advances in deep neural networks (DNN) greatly bolster real-time detection of anomalous IoT data. However, IoT devices can barely afford complex DNN models due to limited computational power and energy supply. While one can offload anomaly detection tasks to the cloud, it incurs long delay and requires large bandwidth when thousands of IoT devices stream data to the cloud concurrently. In this paper, we propose an adaptive anomaly detection approach for hierarchical edge computing (HEC) systems to solve this problem. Specifically, we first construct three anomaly detection DNN models of increasing complexity, and associate them with the three layers of HEC from bottom to top, i.e., IoT devices, edge servers, and cloud. Then, we design an adaptive scheme to select one of the models based on the contextual information extracted from input data, to perform anomaly detection. The selection is formulated as a contextual bandit problem and is characterized by a single-step Markov decision process, with an objective of achieving high detection accuracy and low detection delay simultaneously. We evaluate our proposed approach using a real IoT dataset, and demonstrate that it reduces detection delay by 84% while maintaining almost the same accuracy as compared to offloading detection tasks to the cloud. In addition, our evaluation also shows that it outperforms other baseline schemes.
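The model-selection step described above, formulated as a contextual bandit with a single-step reward trading accuracy against delay, can be sketched with a simple epsilon-greedy learner. This is an illustrative simplification under our own assumptions (discretized contexts, a linear accuracy-minus-delay reward), not the paper's implementation:

```python
# A minimal sketch of selecting one of three anomaly-detection models
# (device / edge / cloud layer of the HEC) via an epsilon-greedy
# contextual bandit. Context features, reward shape, and all names
# are assumptions for illustration.
import random

MODELS = ["device_dnn", "edge_dnn", "cloud_dnn"]  # increasing complexity

class ModelSelector:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.q = {}  # running mean reward per (context, model)
        self.n = {}  # visit counts per (context, model)

    def select(self, context):
        """Epsilon-greedy choice of a model index for this context."""
        if random.random() < self.epsilon:
            return random.randrange(len(MODELS))
        scores = [self.q.get((context, a), 0.0) for a in range(len(MODELS))]
        return scores.index(max(scores))

    def update(self, context, action, accuracy, delay, delay_weight=0.5):
        """Single-step MDP: reward trades off accuracy against delay."""
        reward = accuracy - delay_weight * delay
        key = (context, action)
        self.n[key] = self.n.get(key, 0) + 1
        self.q[key] = self.q.get(key, 0.0) + \
            (reward - self.q.get(key, 0.0)) / self.n[key]

selector = ModelSelector()
action = selector.select(context="low_complexity_input")
selector.update("low_complexity_input", action, accuracy=0.95, delay=0.01)
```

In this toy version, inputs whose context repeatedly earns a high reward from the lightweight device model stop being escalated, which is the intuition behind the reported 84% delay reduction.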

Updated: 2020-01-13
• arXiv.cs.NI Pub Date : 2020-01-10
Milad Ghaznavi; Elaheh Jalalpour; Bernard Wong; Raouf Boutaba

Traffic in enterprise networks typically traverses a sequence of middleboxes forming a service function chain, or simply a chain. The ability to tolerate failures when they occur along chains is imperative to the availability and reliability of enterprise applications. Service outages due to chain failures severely impact customers and cause significant financial losses. Making a chain fault-tolerant is challenging since, in the case of failures, the state of faulty middleboxes must be correctly and quickly recovered while providing high throughput and low latency. In this paper, we present FTC, a novel system design and protocol for fault-tolerant service function chaining. FTC provides strong consistency under up to f middlebox failures for chains of length f + 1 or longer without requiring dedicated replica nodes. In FTC, state updates caused by packet processing at a middlebox are collected, piggybacked onto the packet, and sent along the chain to be replicated. We implement and evaluate a prototype of FTC. Our results for a chain of 2-5 middleboxes show that FTC improves throughput by 2-3.5x compared with the state of the art [50] and adds only 20 μs of latency overhead per middlebox. In a geo-distributed cloud deployment, our system recovers lost state in ~271 ms.
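The piggybacking mechanism described above can be sketched as follows: each middlebox appends its state delta to the packet, and downstream middleboxes keep copies of the upstream deltas, so state survives failures without dedicated replicas. This is a simplified illustration under our own assumptions (names, data structures, and the replication detail are ours), not FTC's actual protocol:

```python
# Simplified sketch of FTC-style piggybacking: state updates ride on
# the packet and are replicated by downstream middleboxes in the chain.
from dataclasses import dataclass, field

@dataclass
class Packet:
    payload: bytes
    piggyback: list = field(default_factory=list)  # [(middlebox_id, state_delta)]

class Middlebox:
    def __init__(self, mb_id):
        self.mb_id = mb_id
        self.state = {"count": 0}   # toy per-middlebox state
        self.replicas = {}          # deltas replicated for upstream middleboxes

    def process(self, pkt: Packet) -> Packet:
        # 1) Replicate the state updates of upstream middleboxes.
        for mb_id, delta in pkt.piggyback:
            self.replicas.setdefault(mb_id, []).append(delta)
        # 2) Apply local processing and record our own delta on the packet.
        self.state["count"] += 1
        pkt.piggyback.append((self.mb_id, {"count": self.state["count"]}))
        return pkt

chain = [Middlebox(i) for i in range(3)]
pkt = Packet(payload=b"flow-data")
for mb in chain:
    pkt = mb.process(pkt)
# The last middlebox now holds replicas of the deltas from middleboxes
# 0 and 1, so their state can be recovered if either fails.
```

The design point this illustrates is that replication consumes no extra messages: the packet that caused the state change also carries it downstream.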

Updated: 2020-01-13
• arXiv.cs.NI Pub Date : 2020-01-10
Tran The Anh; Nguyen Cong Luong; Zehui Xiong; Dusit Niyato; Dong In Kim

In this paper, we develop a new framework called blockchain-based Radio Frequency (RF)-powered backscatter cognitive radio network. In the framework, IoT devices acting as secondary transmitters send their sensing data to a secondary gateway using RF-powered backscatter cognitive radio technology. The data collected at the gateway is then sent to a blockchain network for further verification, storage, and processing. As such, the framework enables the IoT system to simultaneously optimize spectrum usage and maximize energy efficiency. Moreover, the framework ensures that the data collected from the IoT devices is verified, stored, and processed in a decentralized yet trusted manner. To achieve this goal, we formulate a stochastic optimization problem for the gateway under the dynamics of the primary channel, the uncertainty of the IoT devices, and the unpredictability of the blockchain environment. In the problem, the gateway jointly decides (i) the time scheduling, i.e., the energy harvesting time, backscatter time, and transmission time, among the IoT devices, (ii) the choice of blockchain network, and (iii) the transaction fee rate, to maximize the network throughput while minimizing the cost. To solve the stochastic optimization problem, we propose to employ Deep Reinforcement Learning (DRL) with Dueling Double Deep Q-Networks (D3QN) to derive the optimal policy for the gateway. The simulation results clearly show that the proposed solution outperforms conventional baseline approaches such as the conventional Q-Learning algorithm and non-learning algorithms in terms of network throughput and convergence speed. Furthermore, the proposed solution guarantees that the data is stored in the blockchain network at a reasonable cost.
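The core time-scheduling trade-off the gateway optimizes (not the D3QN solver itself) can be shown with a toy one-slot model: time spent harvesting energy enables active transmission later in the slot, while backscattering delivers a lower rate but needs no stored energy. All rates and energy figures below are assumptions for illustration:

```python
# Toy numerical sketch of the harvest / backscatter / transmit time
# split for one RF-powered backscatter device in a unit-length slot.
# Rates, harvest power, and transmit power are hypothetical values.

def slot_throughput(t_harvest, t_backscatter, t_transmit,
                    r_backscatter=10.0, r_transmit=100.0,
                    harvest_rate=2.0, tx_power=5.0):
    """Bits delivered in one unit slot; active transmission time is
    capped by the energy harvested earlier in the slot."""
    assert abs(t_harvest + t_backscatter + t_transmit - 1.0) < 1e-9
    energy = harvest_rate * t_harvest
    usable_tx_time = min(t_transmit, energy / tx_power)
    return r_backscatter * t_backscatter + r_transmit * usable_tx_time

# Backscatter-heavy vs. harvest-then-transmit schedules:
print(slot_throughput(0.1, 0.8, 0.1))   # mostly backscatter
print(slot_throughput(0.5, 0.0, 0.5))   # harvest, then transmit
```

Even this toy shows why the split is non-trivial: the better schedule depends on the rate and power parameters, which in the paper are further coupled to primary-channel dynamics and blockchain costs, motivating a learned policy.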

Updated: 2020-01-13
• arXiv.cs.NI Pub Date : 2019-09-26
Lars Kuger; Aleksandar Ichkov; Petri Mähönen; Ljiljana Simić

Exploiting multi-antenna technologies for robust beamsteering to overcome the effects of blockage and beam misalignment is the key to providing seamless multi-Gbps connectivity in millimeter-wave (mm-wave) networks. In this paper, we present the first large-scale outdoor mm-wave measurement study using a phased antenna array in a typical European town. We systematically collect fine-grained 3D angle-of-arrival (AoA) and angle-of-departure (AoD) data, totaling over 50,000 received signal strength measurements. We study the impact of phased antenna arrays in terms of number of link opportunities, achievable data rate, and robustness under small-scale mobility, and compare this against reference horn antenna measurements. Our results show a limited number of 2-4 link opportunities per receiver location, indicating that the mm-wave multipath richness in a European town is surprisingly similar to that of dense urban metropolises. The results for the phased antenna array reveal that significant losses in estimated data rate occur for beam misalignments on the order of the half-power beamwidth, with significant and irregular variations for larger misalignments. By contrast, the loss for horn antennas increases monotonically with the misalignment. Our results strongly suggest that the effect of non-ideal phased antenna arrays must be explicitly considered in the design of agile beamsteering algorithms.

Updated: 2020-01-13
• arXiv.cs.NI Pub Date : 2020-01-08
Jérôme Lacan; Emmanuel Lochin

We introduce an XOR-based source routing (XSR) scheme as a novel approach to enable fast forwarding and low-latency communications. XSR uses linear encoding operations both to 1) build the path labels of unicast and multicast data transfers and 2) make fast, computationally efficient routing decisions compared to standard table-lookup procedures, without any packet modification along the path. XSR specifically focuses on decreasing the complexity of forwarding router operations. This allows packet switches (e.g., a link-layer switch or router) to perform only simple linear operations over a binary vector label which embeds the path. XSR provides the building blocks to speed up the forwarding plane and can be applied to different data planes such as MPLS or IPv6. Compared to recent approaches based on modular arithmetic, XSR computes the smallest label possible and has strong scalability properties, allowing it to be deployed over any kind of core vendor or datacenter network. Last but not least, the same computed label can be used interchangeably to traverse the path forward or in reverse in the context of unicast communication.
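The "simple linear operations over a binary vector label" can be illustrated with GF(2) inner products: a router derives its forwarding decision as parity computations between the packet's label and router-specific binary vectors, with no table lookup and no label rewrite. The encoding below is an illustrative toy of this class of operation, not the paper's actual label construction:

```python
# Hedged sketch of linear forwarding over a binary path label.
# Each router holds binary vectors and extracts its output port from
# the label via GF(2) inner products (parity of a bitwise AND).

def gf2_dot(label: int, router_vec: int) -> int:
    """Inner product over GF(2): parity of the AND of two bit vectors."""
    return bin(label & router_vec).count("1") % 2

def forward_bits(label: int, router_vecs: list) -> int:
    """Derive a multi-bit output-port index from several GF(2) products."""
    port = 0
    for v in router_vecs:
        port = (port << 1) | gf2_dot(label, v)
    return port

# A router identified by two binary vectors extracts a 2-bit port index
# (ports 0..3) from the path label with two parity computations:
label = 0b101101
router = [0b110000, 0b000011]
print(forward_bits(label, router))  # → 3
```

A bitwise AND plus a popcount is a handful of CPU instructions, which is the sense in which such forwarding can be cheaper than a table lookup; the label itself is never modified along the path.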

Updated: 2020-01-10
• arXiv.cs.NI Pub Date : 2019-11-29
Hongzhi Chen; De Mi; Manuel Fuentes; Eduardo Garro; Jose Luis Carcel; Belkacem Mouhouche; Pei Xiao; Rahim Tafazolli

5G New Radio (NR) Release 15 was specified in June 2018. It introduces numerous changes and potential improvements for physical layer data transmissions, although only point-to-point (PTP) communications are considered. In order to use physical data channels such as the Physical Downlink Shared Channel (PDSCH), it is essential to guarantee a successful transmission of control information via the Physical Downlink Control Channel (PDCCH). Taking into account these two aspects, in this paper, we first analyze the PDCCH processing chain in NR PTP as well as in the state-of-the-art Long Term Evolution (LTE) point-to-multipoint (PTM) solution, i.e., evolved Multimedia Broadcast Multicast Service (eMBMS). Then, via link-level simulations, we compare the performance of the two technologies, observing the Bit/Block Error Rate (BER/BLER) for various scenarios. The objective is to identify the performance gap brought by physical layer changes in NR PDCCH as well as provide insightful guidelines on the control channel configuration towards NR PTM scenarios.

Updated: 2020-01-10
• arXiv.cs.NI Pub Date : 2019-11-29
Hongzhi Chen; De Mi; Manuel Fuentes; David Vargas; Eduardo Garro; Jose Luis Carcel; Belkacem Mouhouche; Pei Xiao; Rahim Tafazolli

The first 5G (5th generation wireless systems) New Radio Release-15 specification was recently completed. However, it only considers the use of unicast technologies, and the extension to point-to-multipoint (PTM) scenarios has not yet been addressed. To this end, we first present in this work a technical overview of the state-of-the-art LTE (Long Term Evolution) PTM technology, i.e., eMBMS (evolved Multimedia Broadcast Multicast Services), and investigate its physical layer performance via link-level simulations. Based on the simulation analysis, we then discuss potential improvements for the two current eMBMS solutions, i.e., MBSFN (MBMS over Single Frequency Networks) and SC-PTM (Single-Cell PTM). This work explicitly focuses on equipping the current eMBMS solutions with 5G candidate techniques, e.g., multiple antennas and millimeter wave, and investigates their potential to meet the requirements of next-generation PTM transmissions.

Updated: 2020-01-10
• arXiv.cs.NI Pub Date : 2019-11-30
Ran Liu; Sumudu Hasala Marakkalage; Madhushanka Padmal; Thiruketheeswaran Shaganan; Chau Yuen; Yong Liang Guan; U-Xuan Tan

Simultaneous localization and mapping (SLAM) has been extensively researched in past years, particularly with regard to range-based or visual-based sensors. Instead of deploying dedicated devices that use visual features, it is more pragmatic to exploit radio features to achieve this task, due to their ubiquitous nature and the widespread deployment of Wi-Fi networks. This paper presents a novel approach for collaborative simultaneous localization and radio fingerprint mapping (C-SLAM-RF) in large unknown indoor environments. The proposed system uses received signal strengths (RSS) from Wi-Fi access points (APs) in the existing infrastructure and pedestrian dead reckoning (PDR) from a smartphone, without prior knowledge of the map or the distribution of APs in the environment. We detect a loop closure based on the similarity of two radio fingerprints. To further improve performance, we incorporate turning motion and assign a small uncertainty value to a loop closure if a matched turn is identified. The experiment was done in an area of 130 meters by 70 meters, and the results show that our proposed system is capable of estimating the tracks of four users with an accuracy of 0.6 meters with Tango-based PDR and 4.76 meters with a step-counter-based PDR.
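The fingerprint-similarity test that triggers a loop closure can be sketched with a simple metric over shared access points. The metric, the 10 dB scale, and the threshold below are all our own assumptions, not the paper's exact formulation:

```python
# Illustrative sketch of detecting a loop closure from the similarity
# of two Wi-Fi RSS fingerprints, each a mapping from AP identifier to
# received signal strength in dBm.
import math

def fingerprint_similarity(fp_a: dict, fp_b: dict) -> float:
    """Similarity in [0, 1] from the RMS RSS distance over shared APs,
    penalizing fingerprints that share few access points."""
    shared = set(fp_a) & set(fp_b)
    if not shared:
        return 0.0
    rss_dist = math.sqrt(
        sum((fp_a[ap] - fp_b[ap]) ** 2 for ap in shared) / len(shared))
    overlap = len(shared) / len(set(fp_a) | set(fp_b))
    return overlap * math.exp(-rss_dist / 10.0)  # 10 dB scale: assumption

fp1 = {"ap1": -45, "ap2": -60, "ap3": -70}
fp2 = {"ap1": -47, "ap2": -58, "ap3": -72}
LOOP_CLOSURE_THRESHOLD = 0.5  # hypothetical
print(fingerprint_similarity(fp1, fp2) > LOOP_CLOSURE_THRESHOLD)  # → True
```

When such a match fires, the pose graph gains a loop-closure constraint; the paper additionally down-weights (assigns small uncertainty to) closures confirmed by a matched turning motion.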

Updated: 2020-01-10
• arXiv.cs.NI Pub Date : 2019-12-07
Abhaykumar Kumbhar; Hamidullah Binol; Simran Singh; Ismail Guvenc; Kemal Akkaya

UAV-enabled communications and networking can enhance wireless connectivity and support emerging services. However, this requires a system-level understanding in order to modify and extend the existing terrestrial network infrastructure. In this paper, we integrate UAVs both as user equipment and as base stations into an existing LTE-Advanced heterogeneous network (HetNet) and provide system-level insights into this three-tier LTE-Advanced air-ground HetNet (AG-HetNet). This AG-HetNet leverages cell range expansion (CRE), inter-cell interference coordination (ICIC), 3D beamforming, and enhanced support for UAVs. Using a brute-force technique and heuristic algorithms, we evaluate the performance of the AG-HetNet in terms of fifth-percentile spectral efficiency (5pSE) and coverage probability. We compare 5pSE and coverage probability when aerial base stations (UABS) are deployed on a fixed hexagonal grid and when their locations are optimized using a genetic algorithm (GA) and an elitist harmony search algorithm based on the genetic algorithm (eHSGA). Our simulation results show that the heuristic algorithms outperform the brute-force technique and achieve better peak values of coverage probability and 5pSE. Simulation results also show that a trade-off exists between peak values and computation time when using the heuristic algorithms. Furthermore, the three-tier hierarchical structuring of FeICIC provides considerably better 5pSE and coverage probability than eICIC.
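The GA-based placement step can be sketched as follows. This toy uses a crude coverage fraction as fitness and made-up GA parameters; the paper's fitness functions are 5pSE and coverage probability computed from a full system model, and it also evaluates the eHSGA variant, which this sketch does not implement:

```python
# Toy sketch of genetic-algorithm placement of aerial base stations
# (UABS). Grid size, radii, and GA parameters are assumptions.
import random

GRID = 10          # deployment area is a GRID x GRID square
N_UABS = 3         # number of aerial base stations to place
USERS = [(random.uniform(0, GRID), random.uniform(0, GRID))
         for _ in range(50)]

def fitness(placement):
    """Toy coverage proxy: fraction of users within radius 3 of a UABS."""
    covered = sum(
        1 for u in USERS
        if any((u[0] - p[0]) ** 2 + (u[1] - p[1]) ** 2 <= 9
               for p in placement))
    return covered / len(USERS)

def random_placement():
    return [(random.uniform(0, GRID), random.uniform(0, GRID))
            for _ in range(N_UABS)]

def evolve(pop_size=30, generations=40, mutation=0.2):
    pop = [random_placement() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]                 # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            if random.random() < mutation:           # mutation
                child[random.randrange(N_UABS)] = (
                    random.uniform(0, GRID), random.uniform(0, GRID))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(f"coverage ≈ {fitness(best):.2f}")
```

The trade-off the paper reports is visible even here: more generations and a larger population improve the peak fitness found, at the cost of proportionally more fitness evaluations.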

Updated: 2020-01-10
• arXiv.cs.NI Pub Date : 2019-12-10
Ahmed A Mawgoud; Mohamed Hamed Taha; Nour Eldeen Khalifa

An ad hoc wireless sensor network (WSN) is an architecture of connected nodes, each equipped with core elements such as sensors and computation and communication capabilities. The constrained energy sources of ad hoc WSNs result in a shorter sensor network lifetime and an inefficient topology. In this paper, a new approach for energy saving and control is introduced using quality of service (QoS). The main goal is to reduce node energy consumption by discovering the optimal route that meets QoS requirements; the QoS technique is used to find the optimal methodology for node packet transmission and energy consumption.

Updated: 2020-01-10
• arXiv.cs.NI Pub Date : 2020-01-09
Liang Ma; Ziyao Zhang; Mudhakar Srivatsa

Network tomography, a classic research problem in the realm of network monitoring, refers to the methodology of inferring unmeasured network attributes using selected end-to-end path measurements. In the research community, network tomography is generally investigated under the assumptions of known network topology, correlated path measurements, a bounded number of faulty nodes/links, or even special network protocol support. The applicability of network tomography is considerably constrained by these strong assumptions, which therefore frequently position it in the theoretical world. In this regard, we revisit network tomography from the practical perspective by establishing a generic framework that does not rely on any of these assumptions or on the types of performance metrics. Given only the end-to-end path performance metrics of sampled node pairs, the proposed framework, NeuTomography, utilizes deep neural networks and data augmentation to predict the unmeasured performance metrics via learning non-linear relationships between node pairs and underlying unknown topological/routing properties. In addition, NeuTomography can be employed to reconstruct the original network topology, which is critical to most network planning tasks. Extensive experiments using real network data show that compared to baseline solutions, NeuTomography can predict network characteristics and reconstruct network topologies with significantly higher accuracy and robustness using only limited measurement data.
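The core idea, learning per-node parameters from measured node pairs and using them to predict unmeasured pairs, can be shown with a deliberately simplified stand-in. NeuTomography uses a deep neural network to capture non-linear relationships; the additive model below is our own drastic simplification for brevity:

```python
# Simplified stand-in for the tomography idea: fit per-node costs from
# end-to-end metrics of sampled node pairs, then predict the metric of
# an unmeasured pair. (The paper uses a DNN, not this additive model.)

def fit_node_costs(measured, n_nodes, lr=0.05, epochs=2000):
    """measured: list of (u, v, path_metric). Learns additive node
    costs so that cost[u] + cost[v] approximates the observed metric,
    via stochastic gradient descent on the squared error."""
    cost = [0.0] * n_nodes
    for _ in range(epochs):
        for u, v, y in measured:
            err = (cost[u] + cost[v]) - y
            cost[u] -= lr * err
            cost[v] -= lr * err
    return cost

def predict(cost, u, v):
    return cost[u] + cost[v]

# Synthetic ground truth: node u contributes (u + 1) ms of delay.
true = lambda u, v: (u + 1) + (v + 1)
measured = [(0, 1, true(0, 1)), (1, 2, true(1, 2)), (0, 3, true(0, 3)),
            (2, 3, true(2, 3)), (0, 2, true(0, 2))]
cost = fit_node_costs(measured, n_nodes=4)
print(round(predict(cost, 1, 3), 1))  # ≈ 6.0, the unmeasured pair (1, 3)
```

Even this linear toy shows the leverage of the approach: five measured pairs pin down four node parameters, which then generalize to the pair that was never measured. The DNN in the paper plays the same role for metrics where the relationship is non-linear.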

Updated: 2020-01-10
Contents have been reproduced by permission of the publishers.
