• Computing (IF 2.063) Pub Date : 2020-01-16
Hammad ur Rehman Qaiser, Gao Shu

Abstract Workload uncertainty has increased with the integration of the Internet of Things into the computing grid, i.e., edge computing and cloud data centers. Efficient resource utilization in cloud data centers has therefore become more challenging. Dynamic consolidation of virtual machines onto an optimal number of processing machines can increase the efficiency of resource utilization in cloud data centers. This process requires the migration of virtual machines from under-utilized and over-utilized processing machines to other suitable machines. In this work, the problem of efficient replacement of virtual machines is solved using a well-known game-theoretic concept, the Nash Equilibrium (NE). We designed a Nash equilibrium based game between two players, an over-load manager and an under-load manager, to deduce the dominant strategy profiles for various scenarios during consolidation cycles. A dominant strategy profile is a set of strategies from which no player has an incentive to deviate, thus leading to an equilibrium position. A virtual machine redeployment algorithm, Nash Equilibrium based Virtual Machines Replacement (NE-VMR), is proposed on the basis of these dominant strategy profiles for efficient consolidation. Experimental results show that NE-VMR is a more efficient server consolidation technique, saving 30% energy and improving quality of service by 35% compared to baselines.
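
The equilibrium reasoning above can be illustrated with a toy two-player payoff matrix. The sketch below finds pure-strategy Nash equilibria by brute force; the strategies and payoff values are hypothetical placeholders, not the paper's actual consolidation model.

```python
def pure_nash_equilibria(payoffs):
    """Return profiles (s1, s2) where neither player gains by deviating.

    `payoffs` maps (s1, s2) -> (u1, u2).
    """
    s1_set = {p[0] for p in payoffs}
    s2_set = {p[1] for p in payoffs}
    equilibria = []
    for (a, b), (u1, u2) in payoffs.items():
        # Player 1 deviates with b fixed; player 2 deviates with a fixed.
        if all(payoffs[(a2, b)][0] <= u1 for a2 in s1_set) and \
           all(payoffs[(a, b2)][1] <= u2 for b2 in s2_set):
            equilibria.append((a, b))
    return equilibria

# Hypothetical payoffs: each manager chooses whether to migrate a VM;
# both migrating duplicates work, neither acting wastes energy.
game = {
    ("migrate", "hold"):    (3, 2),
    ("hold",    "migrate"): (2, 3),
    ("migrate", "migrate"): (1, 1),
    ("hold",    "hold"):    (0, 0),
}
```

Here exactly one manager migrating is stable in either direction, which mirrors the intuition that consolidation work should not be duplicated.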

Updated: 2020-01-16
• Computing (IF 2.063) Pub Date : 2020-01-11
Luca Cagliero, Paolo Garza, Giuseppe Attanasio, Elena Baralis

Forecasting stock markets is among the most popular research challenges in finance. Several quantitative trading systems based on supervised machine learning approaches have been presented in the literature. Recently proposed solutions train classification models on historical stock-related datasets. Training data include a variety of features related to different facets (e.g., stock price trends, exchange volumes, price volatility, news and public mood). To increase the accuracy of the predictions, multiple models are often combined using ensemble methods. However, understanding which models should be combined and how to effectively handle features related to different facets within different models are still open research questions. In this paper we investigate the use of ensemble methods to combine faceted classification models for supporting stock trading. To this aim, separate classification models are trained on each subset of features belonging to the same facet, producing trading signals tailored to a specific facet. The signals are then combined and filtered to generate a unified, multi-faceted recommendation. The experimental validation, performed on different markets and under different conditions, shows that, in many cases, some of the faceted models perform as well as or better than models trained on a mix of different features. An ensemble of the faceted recommendations makes the generated trading signals more profitable while remaining robust to draw-down periods.
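
A minimal sketch of the combine-and-filter step, assuming per-facet signals encoded as +1/-1/0; the facet names and the agreement threshold are illustrative, not taken from the paper.

```python
def combine_signals(facet_signals, min_agreement=2):
    """Combine per-facet trading signals (+1 buy, -1 sell, 0 hold).

    A trade is emitted only when at least `min_agreement` facets agree,
    which filters out weakly supported recommendations."""
    buys = sum(1 for s in facet_signals.values() if s == 1)
    sells = sum(1 for s in facet_signals.values() if s == -1)
    if buys >= min_agreement and buys > sells:
        return 1
    if sells >= min_agreement and sells > buys:
        return -1
    return 0
```

For example, a buy from the price facet alone is filtered out, while agreement between the price and volume facets produces a unified buy signal.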

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2020-01-11
Khalid Belhajjame, Noura Faci, Zakaria Maamar, Vanilson Burégio, Edvan Soares, Mahmoud Barhamgi

Abstract Computing-intensive experiments in modern sciences have become increasingly data-driven, perfectly illustrating the Big-Data era. These experiments are usually specified and enacted as workflows that need to manage (i.e., read, write, store, and retrieve) highly sensitive data such as persons' medical records. We assume in this work that the operations constituting a workflow are 1-to-1 operations, in the sense that for each input data record they produce a single data record. While there is an active body of research on protecting sensitive data by, for instance, anonymizing datasets, few approaches assist scientists with identifying the datasets, generated by the workflows, that need to be anonymized, along with setting the anonymization degree that must be met. We present in this paper a solution for specifying the privacy requirements of datasets used and generated by a workflow execution. We also present a technique for anonymizing workflow data given an anonymity degree.
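
One common way to make an "anonymity degree" concrete is k-anonymity; the check below (a generic sketch, not the paper's specific formulation) verifies that every quasi-identifier combination in a dataset is shared by at least k records.

```python
from collections import Counter

def satisfies_k_anonymity(records, quasi_ids, k):
    """True if every quasi-identifier combination occurs at least k times,
    i.e. each record is indistinguishable from at least k-1 others."""
    counts = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(c >= k for c in counts.values())

# Toy medical records with generalized (already coarsened) quasi-identifiers.
records = [
    {"zip": "751*", "age": "30-40", "diagnosis": "flu"},
    {"zip": "751*", "age": "30-40", "diagnosis": "asthma"},
    {"zip": "752*", "age": "20-30", "diagnosis": "flu"},
]
```

The third record is unique on (zip, age), so the full dataset fails 2-anonymity even though the first two records satisfy it.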

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2020-01-11
Daniela Renga, Daniele Apiletti, Danilo Giordano, Matteo Nisi, Tao Huang, Yang Zhang, Marco Mellia, Elena Baralis

Abstract Data-driven models are becoming of fundamental importance in electric distribution networks to enable predictive maintenance, perform effective diagnosis and reduce related expenditures, with the final goal of improving the efficiency and reliability of the electric service to the benefit of both citizens and grid operators. This paper considers a dataset collected over 6 years in a real-world medium-voltage distribution network by the Supervisory Control And Data Acquisition (SCADA) system. A transparent, exploratory, and exhaustive data-mining workflow, based on data characterisation, time-windowing, association rule mining, and associative classification, is proposed and experimentally evaluated to automatically identify correlations and build a prognostic-diagnostic model from the SCADA events occurring before and after specific service interruptions, i.e., network faults. Our results, evaluated by both data-driven quality metrics and domain expert interpretations, highlight the limited predictive capability of SCADA events for medium-voltage distribution networks, while their exploitation for diagnostic purposes is promising.
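
The core quantities behind association rule mining are support and confidence; the sketch below computes them over time-windowed event sets. The event names are invented for illustration, not drawn from the paper's SCADA log.

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support and confidence of the rule antecedent -> consequent over
    a list of event sets (e.g. SCADA events seen in one time window)."""
    a, c = frozenset(antecedent), frozenset(consequent)
    n_a = sum(1 for t in transactions if a <= t)
    n_ac = sum(1 for t in transactions if (a | c) <= t)
    support = n_ac / len(transactions)
    confidence = n_ac / n_a if n_a else 0.0
    return support, confidence

# Four hypothetical time windows of SCADA events.
windows = [{"breaker-alarm", "fault"}, {"breaker-alarm"},
           {"fault"}, {"breaker-alarm", "fault"}]
```

A rule like "breaker-alarm -> fault" would then be kept or discarded by comparing its support and confidence against user-set thresholds.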

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2020-01-11
Mahsa Fozuni Shirjini, Saeed Farzi, Amin Nikanjam

Social network analysis has become an important topic for researchers in sociology and computer science. Similarities among individuals form communities, the basic constituents of social networks. Given the importance of communities, community detection is a fundamental step in the study of social networks, which are typically modeled as large-scale graphs. Detecting communities in such large-scale graphs, which generally suffer from the curse of dimensionality, is the main objective of this study. An efficient modularity-based community detection algorithm called MDPCluster is introduced to detect communities in large-scale graphs in a timely manner. To address the high-dimensionality problem, MDPCluster first uses a Louvain-based algorithm to distinguish initial communities as super-nodes and then leverages a Modified Discrete Particle Swarm Optimization algorithm, called MDPSO, to detect communities by maximizing the modularity measure. MDPSO discretizes Particle Swarm Optimization using the idea of transmission tendency and escapes premature convergence by means of a mutation operator inspired by the Genetic Algorithm. To evaluate the proposed method, six standard datasets, i.e., American College Football, Books about US Politics, Amazon Product Co-purchasing, DBLP, GR-QC and HEP-TH, have been employed. The first two are synthetic datasets whereas the rest are real-world datasets. In comparison to eight state-of-the-art algorithms, i.e., Stationary Genetic Algorithm, Generational Genetic Algorithm, Simulated Annealing-Stationary Genetic Algorithm, Simulated Annealing-Generational Genetic Algorithm, Girvan–Newman, Danon and Label Propagation Algorithm, the results indicate the superiority of MDPCluster in terms of modularity, Normalized Mutual Information and execution time.
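
The objective MDPSO maximizes is Newman's modularity; a compact way to compute it from an edge list uses, per community c, its intra-community edge count e_c and total degree d_c: Q = sum_c (e_c/m - (d_c/2m)^2). The sketch below implements exactly that (a generic reference implementation, independent of the paper's code).

```python
def modularity(edges, community):
    """Newman modularity Q for an undirected edge list under a
    node -> community assignment."""
    m = len(edges)
    intra = {}       # intra-community edge counts e_c
    degree = {}      # total degree d_c per community
    for u, v in edges:
        cu, cv = community[u], community[v]
        degree[cu] = degree.get(cu, 0) + 1
        degree[cv] = degree.get(cv, 0) + 1
        if cu == cv:
            intra[cu] = intra.get(cu, 0) + 1
    return sum(intra.get(c, 0) / m - (d / (2 * m)) ** 2
               for c, d in degree.items())

# Two triangles joined by a single bridge edge: a natural 2-community split.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
community = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
```

Any candidate partition can be scored this way, which is all a modularity-maximizing search such as MDPSO needs from the graph.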

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2019-06-03
Abdullah Lakhan, Xiaoping Li

Abstract The Mobile Cloudlet Computing (MCC) paradigm allows the execution of resource-intensive mobile applications on computation cloud resources by exploiting computational offloading for resource-constrained mobile devices. However, computational offloading requires the mobile application to be partitioned during execution in the MCC so that the total execution cost is minimized. In the MCC, network contexts (i.e., network bandwidth, signal strength, latency, etc.) change intermittently at run-time, and transient failures (due to temporary network connection failures, busy services, or a database disk running out of storage) often occur for short periods of time. Therefore, transient-failure-aware partitioning of the mobile application at run-time is a challenging task. Moreover, existing MCC platforms offer monolithic computational services based on heavyweight virtual machines, which incur long VM startup times and high overhead and cannot meet the requirements of fine-grained microservices applications (e.g., E-healthcare, E-business, 3D games, and Augmented Reality). To cope with these issues, we propose a microservices-based mobile cloud platform that exploits containerization to replace heavyweight virtual machines, together with an application partitioning task assignment (APTA) algorithm that determines application partitioning at run-time and adopts a fault-aware (FA) policy to execute microservices applications robustly, without interruption, in the MCC. Simulation results validate that the proposed platform not only shrinks the setup time of the run-time platform but also reduces the energy consumption of nodes and improves application response time, through APTA and FA, compared to existing VM-based MCC and application partitioning strategies.

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2019-07-24
Manojit Ghose, Sawinder Kaur, Aryabartta Sahu

Cloud computing has emerged as a promising computing paradigm in recent times. As high energy consumption in cloud systems creates several problems, cloud service providers need to focus on energy consumption while providing the required service to their users. A cloud system needs to efficiently execute various real-time applications, and designing energy-efficient scheduling algorithms for these applications has gained research momentum. In this paper, we consider the scheduling of real-time tasks for a virtualized cloud system that provides VMs with discrete compute capacities. Depending on the characteristics of the tasks, we divide the problem into four subproblems and propose a solution for each. For the subproblem with arbitrary task execution times and deadlines, we use four different methods to cluster the tasks by their deadline values. Experiments performed in the CloudSim toolkit compare the clustering methods, and the results show that the clustering method can be chosen based on the specification of the cloud system. We also compared our approach with a standard energy-efficient scheduling technique on both synthetic data sets and a real-world trace, observing average energy reductions of around $$17\%$$ and $$15\%$$, respectively (compared to the baseline policy).

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2019-05-31
Yueqiang Shang

Abstract Based on a fully overlapping domain decomposition technique, a parallel stabilized equal-order finite element method for the steady Stokes equations is presented and studied. In this method, each processor computes a local stabilized finite element solution in its own subdomain by solving a global problem on a global mesh that is locally refined around its subdomain. The lowest equal-order finite element pairs (continuous piecewise linear, bilinear or trilinear velocity and pressure) are used for the finite element discretization, and a pressure-projection-based stabilization method is employed to circumvent the discrete inf–sup condition, which does not hold for these finite element pairs. The parallel stabilized method is unconditionally stable, free of parameters and derivative calculations, and easy to implement on top of an existing sequential solver. Optimal error estimates are obtained using local a priori error estimates for finite element solutions as the theoretical tool. Numerical results verify the theoretical predictions and illustrate the effectiveness of the method.
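
For reference, the steady Stokes system being discretized is the standard one: find the velocity $\boldsymbol{u}$ and pressure $p$ (with viscosity $\nu$ and body force $\boldsymbol{f}$) such that

```latex
\begin{aligned}
-\nu\,\Delta \boldsymbol{u} + \nabla p &= \boldsymbol{f} && \text{in } \Omega,\\
\nabla\cdot\boldsymbol{u} &= 0 && \text{in } \Omega,\\
\boldsymbol{u} &= \boldsymbol{0} && \text{on } \partial\Omega.
\end{aligned}
```

Equal-order velocity/pressure pairs violate the discrete inf–sup (LBB) stability condition for this system, which is why the pressure-projection stabilization described above is required.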

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2019-06-29
Geomar A. Schreiner, Denio Duarte, Ronaldo dos Santos Mello

Abstract Big Data management has brought several challenges to data-centric applications, such as support for data heterogeneity, rapid data growth and huge data volumes. NoSQL databases have been proposed to tackle Big Data challenges by offering horizontal scalability, schemaless data storage and high availability, among other features. However, NoSQL databases do not have a standard query language, which brings a steep learning curve for developers. On the other hand, traditional relational databases and SQL are very popular standards for storing and manipulating critical data, but they are not suitable for Big Data management. One solution for relational-based applications moving to NoSQL databases is to offer access to NoSQL databases through SQL instructions. Several approaches have been proposed for translating relational database schemata and operations to equivalent ones in NoSQL databases in order to improve scalability and availability. However, these approaches map relational databases only to a single NoSQL data model and, sometimes, to a specific NoSQL database product. This paper presents a canonical approach, called SQLToKeyNoSQL, that translates relational schemata as well as SQL instructions to equivalent schemata and access methods of any key-oriented NoSQL database. We present the architecture of our layer, focusing on the mapping strategies, as well as experiments that evaluate the benefits of our approach against some state-of-the-art baselines.
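
The general shape of such a relational-to-key-value mapping can be sketched as follows. The key layout `<table>:<pk value>:<column>` is a simplified illustration in the spirit of the approach, not SQLToKeyNoSQL's exact scheme.

```python
def to_key_value(table, pk, row):
    """Flatten a relational row into key-value pairs, one per non-key
    column, keyed as '<table>:<pk value>:<column>'."""
    return {f"{table}:{row[pk]}:{col}": val
            for col, val in row.items() if col != pk}

def select_by_pk(store, table, pk_value, columns):
    """Answer SELECT <columns> FROM <table> WHERE <pk> = <value>
    with one key lookup per requested column."""
    return {c: store[f"{table}:{pk_value}:{c}"] for c in columns}
```

Under this layout a primary-key point query becomes a handful of get operations, which is precisely the access pattern key-oriented NoSQL stores serve well.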

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2019-06-21
Mingzhi Chen, Daqi Zhu

Due to the unique nature of underwater acoustic communication, data collection from Underwater Acoustic Sensor Networks (UASNs) is a challenging problem. It has been reported that data collection from UASNs with the assistance of autonomous underwater vehicles (AUVs) is more convenient. The AUV needs to schedule a tour that contacts all sensors once, a variant of the Traveling Salesman Problem. A hybrid optimization algorithm is proposed to solve this problem, combining quantum-behaved particle swarm optimization with an improved ant colony optimization algorithm. The algorithm has quadratic complexity and yields approximate but satisfactory results. Simulation experiments demonstrate its efficiency: compared to the Self-Organizing Map based (SOM-based) algorithm, it not only plans a shorter tour but also shortens the distance from each sensor to its closest waypoint. The algorithm can therefore reduce the energy required for data transmission, since the communication distance drops, and extend the service life of the sensors.
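
As a point of reference for the tour-planning problem, the classic nearest-neighbour heuristic below builds a greedy tour over waypoints; it is a simple baseline, not the paper's hybrid QPSO/ACO algorithm, which searches for better tours than greedy construction can find.

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy TSP tour: from the current waypoint, always visit the
    closest unvisited sensor next. Quadratic in the number of points."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        here = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(here, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

On collinear waypoints the greedy tour simply sweeps along the line, visiting sensors in spatial order.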

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2020-01-09
Khaled Fawagreh, Mohamed Medhat Gaber

Abstract In predictive healthcare data analytics, high accuracy is paramount, as low accuracy can lead to misdiagnosis, which is known to cause serious health consequences or death. Fast prediction is also an important desideratum, particularly for machines and mobile devices with limited memory and processing power. For real-time healthcare analytics applications, particularly those that run on mobile devices, both traits are highly desirable. In this paper, we propose an ensemble regression technique based on CLUB-DRF, a pruned Random Forest that possesses these features. The speed and accuracy of the method are demonstrated by an experimental study on three medical data sets covering three different diseases.
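
The general idea of ensemble pruning, keeping a small subset of base learners so prediction is faster, can be sketched as below. This is plain accuracy-based selection on a validation set; CLUB-DRF itself clusters the trees of a Random Forest and keeps representatives, so treat this as an illustration of the effect, not the paper's method.

```python
def prune_ensemble(trees, X_val, y_val, keep):
    """Keep the `keep` most accurate base learners on a validation set."""
    def accuracy(tree):
        return sum(tree(x) == y for x, y in zip(X_val, y_val)) / len(y_val)
    return sorted(trees, key=accuracy, reverse=True)[:keep]

def predict(ensemble, x):
    """Majority vote of the remaining learners."""
    votes = [tree(x) for tree in ensemble]
    return max(set(votes), key=votes.count)

# Toy base learners: two constant predictors and one simple threshold rule.
trees = [lambda x: 1, lambda x: 0, lambda x: int(x > 5)]
```

A smaller ensemble votes over fewer trees, which is what makes the pruned forest attractive on memory- and compute-constrained mobile devices.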

Updated: 2020-01-09
• Computing (IF 2.063) Pub Date : 2020-01-09
Munish Bhatia, Sandeep K. Sood, Simranpreet Kaur

Load scheduling has been a major challenge in distributed fog computing environments for meeting the demands of real-time decision-making. This research proposes a quantumized approach for scheduling heterogeneous tasks in fog computing-based applications. Specifically, a node-specific metric, the Node Computing Index, is defined to estimate the computational capacity of fog computing nodes. Moreover, a QCI-Neural Network Model is proposed for predicting the optimal fog node for handling heterogeneous tasks in real-time. To validate the proposed approach, experimental simulations were performed with 5, 10, 15, and 20 fog nodes scheduling heterogeneous tasks obtained from the online Google Job datasets. A comparative analysis with state-of-the-art scheduling models, namely Heterogeneous Earliest Finish Time, Min–Max, and Round Robin, was performed to determine the performance enhancement. The proposed approach achieved better performance, with an execution delay of 30.01 s for 20 nodes. In addition, high values of statistical estimators such as specificity (90.99%), sensitivity (89.76%), precision (91.15%) and coverage (94.56%) were registered, depicting the enhancement in overall system performance.

Updated: 2020-01-09
• Computing (IF 2.063) Pub Date : 2020-01-09
Liuyan Chen, Lukasz Golab

In computational linguistics, binary sentiment analysis methods have been proposed to predict whether a document expresses a positive or a negative opinion. In this paper, we study a unique research problem—identifying environmental stimuli that contribute to different moods (mood triggers). Our analysis is enabled by an anonymous micro-journalling dataset, containing over 700,000 short journals from over 67,000 writers and their self-reported moods at the time of writing. We first build a multinomial logistic regression model to predict the mood (e.g., happy, sad, tired, productive) associated with a micro-journal. We then examine the model to identify predictive words and word trigrams associated with various moods. Our study offers new data-driven insights into public well-being.

Updated: 2020-01-09
• Computing (IF 2.063) Pub Date : 2020-01-09
Samar Haytamy, Fatma Omara

The service composition problem in Cloud computing is formulated as a multiple-criteria decision-making problem. Due to the extensive search space, Cloud service composition is an NP-hard problem. In addition, it is long-term and economically driven, so building an accurate economic model for service composition is of great interest and importance to the Cloud consumer. A deep learning based service composition (DLSC) framework is proposed in this paper. The proposed DLSC framework combines a deep learning long short-term memory (LSTM) network with the particle swarm optimization (PSO) algorithm. The LSTM network is applied to accurately predict the provisioned Cloud QoS values, and its output is fed to the PSO algorithm to select the best Cloud providers to contract with for composing the needed services, minimizing the consumer's cost function. The proposed DLSC framework has been implemented using a real QoS dataset. The comparative results show that the proposed framework outperforms existing models in both predictive accuracy and composition accuracy.
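
The optimization half of such a pipeline can be sketched with a plain PSO minimizer. In DLSC the cost function would be the consumer's composition cost over LSTM-predicted QoS values; here a standard benchmark function stands in, and all parameter values are generic defaults rather than the paper's settings.

```python
import random

def pso_minimize(cost, dim, n_particles=20, iters=100,
                 lo=-5.0, hi=5.0, seed=1):
    """Plain global-best PSO minimizing `cost` over [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Minimizing the sphere function as a stand-in cost: converges near the origin.
best, best_cost = pso_minimize(lambda p: sum(x * x for x in p), dim=2)
```

Swapping the sphere function for a predicted-QoS cost model is all that is needed to turn this skeleton into a composition optimizer.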

Updated: 2020-01-09
• Computing (IF 2.063) Pub Date : 2020-01-08

The overhead of data transfer to the GPU poses a bottleneck for the performance of CUDA programs. Accurate prediction of the data transfer time is quite effective in improving GPU analytical modeling, the prediction accuracy of kernel performance, and the composition of the CPU with the GPU for solving computational problems. For estimating the data transfer time between the CPU and the GPU, the current study employs three machine learning-based models and a new analytical model called the $$\lambda$$-Model. These models are run on four GPUs from different NVIDIA architectures and their performance is compared. The practical results show that the $$\lambda$$-Model is able to predict the transfer of large-sized data with a maximum error of 1.643%, better than the machine learning methods. For the transfer of small-sized data, machine learning-based methods provide better performance, predicting the data transfer time with a maximum error of 4.52%. Consequently, the current study recommends a hybrid model: the $$\lambda$$-Model for large-sized data and machine learning tools for small-sized data.
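
Analytical host-to-GPU transfer models are typically built on a linear shape, t(size) = alpha + beta * size, where alpha approximates the fixed call latency and 1/beta the sustained bandwidth. The $$\lambda$$-Model's exact formulation is in the paper, so the least-squares fit below is a generic stand-in with synthetic measurements.

```python
def fit_transfer_model(sizes, times):
    """Least-squares fit of t(size) = alpha + beta * size."""
    n = len(sizes)
    mean_x = sum(sizes) / n
    mean_y = sum(times) / n
    beta = (sum((x - mean_x) * (y - mean_y) for x, y in zip(sizes, times))
            / sum((x - mean_x) ** 2 for x in sizes))
    alpha = mean_y - beta * mean_x
    return alpha, beta

# Synthetic measurements: 0.5 ms fixed latency, 0.1 ms per MiB.
sizes = [1.0, 2.0, 4.0, 8.0]                 # MiB
times = [0.5 + 0.1 * s for s in sizes]       # ms
```

The fixed-latency term dominates small transfers, which is consistent with the paper's finding that a different predictor (machine learning) works better in the small-size regime.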

Updated: 2020-01-09
• Computing (IF 2.063) Pub Date : 2020-01-08
Emir Ugljanin, Ejub Kajan, Zakaria Maamar, Muhammad Asim, Vanilson Burégio

Abstract This paper presents an approach for allowing the transparent co-existence of citizens and IoT-compliant things in smart cities. Considering the particularities of each, the approach embraces two concepts known as social machines and data artifacts. On the one hand, social machines act as wrappers over applications (e.g., social media) that allow citizens and things to have an active role in their cities, for example by reporting events of common interest to the population. On the other hand, data artifacts abstract citizens' and things' contributions in terms of who has done what, when, where, and why. For successful smart cities, the approach relies on the willingness and engagement of both citizens and things: smart-city initiatives are embraced, not imposed. A case study along with a testbed that uses a real dataset about car-traffic accidents in a Brazilian state demonstrates the technical feasibility and scalability of the approach. The evaluation consists of assessing the time to drill into the different generated data artifacts before generating useful details for decision makers.

Updated: 2020-01-08
• Computing (IF 2.063) Pub Date : 2020-01-04
Xuewen Xia, Yichao Tang, Bo Wei, Yinglong Zhang, Ling Gui, Xiong Li

To satisfy the distinct requirements of different evolutionary stages, a dynamic multi-swarm global particle swarm optimization (DMS-GPSO) is proposed in this paper. In DMS-GPSO, the entire evolutionary process is segmented into an initial stage and a later stage. In the initial stage, the entire population is divided into a global sub-swarm and multiple dynamic multi-swarm (DMS) sub-swarms. During the evolutionary process, the global sub-swarm focuses on exploitation under the guidance of the optimal particle in the entire population, while the DMS sub-swarms devote more attention to exploration under the guidance of each neighborhood's best-so-far position. Moreover, a store operator and a reset operator applied in the global sub-swarm are used to save computational resources and increase population diversity, respectively. In the later stage, some elite particles stored in an archive are combined with the DMS sub-swarms into a single population to search for optimal solutions, with the aim of enhancing the exploitation ability. The effect of the newly introduced strategies is verified by extensive experiments. Moreover, comparison results between DMS-GPSO and 9 peer algorithms on the CEC2013 and CEC2017 test suites demonstrate that DMS-GPSO can effectively avoid premature convergence when solving multimodal problems and yields more favorable performance on complex problems.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2020-01-02
Sridhar Chimalakonda, Kesav V. Nori

Abstract Rapid advances in the education domain demand the design and customization of educational technologies for a large scale and variety of evolving requirements. Here, scale is the number of systems to be developed, and variety stems from a diversified range of instructional designs, such as varied goals, processes, content, teaching styles and learning styles, and also from eLearning Systems for 22 Indian languages and their variants. In this paper, we present a family of software product lines as an approach to the challenge of modeling a family of instructional designs as well as a family of eLearning Systems, and demonstrate it for the case of adult literacy in India (287 million learners). We present a multi-level product line that connects product lines at multiple levels of granularity in the education domain. We then detail two concrete product lines (http://rice.iiit.ac.in): the first generates instructional design editors, and the second generates a family of eLearning Systems based on flexible instructional designs. Finally, we demonstrate our approach by generating eLearning Systems for the Hindi and Telugu languages, which led to significant cost savings of 29 person-months across 9 eLearning Systems.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-12-23
Yuming Li, Pin Ni, Victor Chang

The role of the stock market in the overall financial market is indispensable. How to acquire practical trading signals during the transaction process to maximize benefits is a problem that has been studied for a long time. This paper puts forward a deep reinforcement learning approach to stock trading decisions and stock price prediction; the reliability and availability of the model are demonstrated by experimental data, and the model is compared with a traditional model to show its advantages. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper shows the feasibility of deep reinforcement learning in financial markets and the credibility and advantages of strategic decision-making with it.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-12-09
David Castells-Graells, Christopher Salahub, Evangelos Pournaras

Abstract Bike usage in Smart Cities is paramount for sustainable urban development: cycling promotes healthier lifestyles, lowers energy consumption, lowers carbon emissions, and reduces urban traffic. However, the expansion and increased use of bike infrastructure has been accompanied by a glut of bike accidents, a trend jeopardizing the urban bike movement. This paper leverages data from a diverse spectrum of sources to characterise geolocated bike accident severity and, ultimately, study cycling risk and discomfort. Kernel density estimation generates a continuous, empirical, spatial risk estimate which is mapped in a case study of Zürich city. The roles of weather, time, accident type, and severity are illustrated. A predominance of self-caused accidents motivates an open-source software artifact for personalized route recommendations. This software is used to collect open baseline route data that are compared with alternative routes minimizing risk and discomfort. These contributions have the potential to provide invaluable infrastructure improvement insights to urban planners, and may also improve the awareness of risk in the urban environment among experienced and novice cyclists alike.
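
The continuous risk surface mentioned above comes from kernel density estimation over accident coordinates; a minimal 2-D Gaussian KDE looks like the sketch below. The coordinates are toy values, and a real map would evaluate the density over a grid of projected geographic coordinates with a carefully chosen bandwidth.

```python
import math

def gaussian_kde_2d(points, bandwidth):
    """Return a density function over 2-D locations: the Gaussian kernel
    density estimate that turns discrete accident records into a
    continuous spatial risk surface."""
    norm = 1.0 / (len(points) * 2.0 * math.pi * bandwidth ** 2)

    def density(x, y):
        return norm * sum(
            math.exp(-((x - px) ** 2 + (y - py) ** 2)
                     / (2.0 * bandwidth ** 2))
            for px, py in points)

    return density

# Three accidents clustered near the origin and one isolated accident.
risk = gaussian_kde_2d([(0, 0), (0.1, 0.0), (0.0, 0.1), (5, 5)],
                       bandwidth=0.5)
```

Evaluating `risk` along candidate routes is then enough to compare them by accumulated exposure, which is the basis for risk-minimizing route recommendations.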

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-01-24
Júlio Mendonça, Ermeson Andrade, Ricardo Lima

Abstract Energy consumption, execution time, and availability are common terms in discussions on application development for mobile devices. Mobile applications executing in a mobile cloud computing (MCC) environment must consider several issues, such as Internet connection problems and CPU performance. Misconceptions during the design phase can have a significant impact on costs and time-to-market, or even make application development unfeasible. Anticipating the best configuration for each type of application is a challenge that many developers are not prepared to tackle. In this work, we propose models to rapidly estimate the execution time, availability, and energy consumption of mobile applications executing in an MCC environment. We defined a methodology to create and validate Deterministic and Stochastic Petri Net (DSPN) models to evaluate these three critical metrics. The DSPN results were compared with results obtained through experiments performed in a testbed environment. We analyzed an image processing application with respect to connection types (WLAN, WiFi, and 3G), server types (MCC or cloudlet), and the performance of its functionalities. Our numerical analyses indicate, for instance, that the use of a cloudlet significantly improves performance and energy efficiency. Moreover, the baseline scenario took us one month to implement, while modeling and evaluating the three scenarios required less than one day. In this way, our DSPN models represent a powerful tool for mobile developers to plan efficient and cost-effective mobile applications, allowing them to rapidly assess execution time, availability, and energy consumption metrics to improve the quality of mobile applications.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-03-29
Dong-Oh Kim, Hong-Yeon Kim, Young-Kyun Kim, Jeong-Joon Kim

Replication has been widely used to ensure data availability in distributed file systems. In recent years, erasure coding (EC) has been adopted to overcome replication's poor space efficiency. However, EC has various performance-degrading factors, such as parity calculation and degraded input/output. In particular, the recovery performance of EC deteriorates for various reasons as distributed file systems grow large. Nonetheless, few studies have been conducted to improve recovery performance. Thus, this paper proposes an efficient parallel recovery technique for an EC-based distributed file system. We describe a contention avoidance method, a chunk allocation method, and an asynchronous recovery method to improve parallel recovery performance. The contention avoidance method minimizes contention for resources, while the chunk allocation and asynchronous recovery methods increase the efficiency of parallel recovery. Finally, our performance evaluation verifies that when the proposed parallel recovery technique is applied to actual distributed file systems, recovery performance improves by 263% compared to existing methods.
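
The computation at the heart of EC recovery is re-combining surviving chunks. The simplest erasure code, single parity (RAID-5 style), shows its shape below; production systems typically use Reed–Solomon codes, but reconstruction has the same read-survivors-then-combine structure that the paper parallelises across nodes.

```python
def xor_chunks(chunks):
    """Bytewise XOR of equally sized chunks."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            out[i] ^= b
    return bytes(out)

def recover_lost_chunk(surviving, parity):
    """Rebuild one lost data chunk: XOR-ing the surviving data chunks
    with the parity chunk cancels out the known data."""
    return xor_chunks(surviving + [parity])
```

Because each lost chunk is rebuilt from an independent read of its surviving stripe members, many such reconstructions can proceed in parallel, and the contention the paper targets arises when those reads compete for the same disks and network links.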

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-04-12
Luise Pufahl, Mathias Weske

Organizations strive for efficiency in their business processes through process improvement and automation. Business process management (BPM) supports these efforts by capturing business processes in process models that serve as blueprints for a number of process instances. In BPM, process instances are typically considered to run independently of each other. However, batch processing, the collective execution of several instances at specific process activities, is a common phenomenon in operational processes to reduce cost or time. Currently, batch processing is organized manually or hard-coded in software. To allow stakeholders to explicitly represent batch configurations in process models and execute them automatically, this paper provides a concept for batch activities and describes the corresponding execution semantics. The batch activity concept is evaluated in a two-step approach: a prototypical implementation in an existing BPM system proves its feasibility, and batch activities are additionally applied to different use cases in a simulated environment. Their application yields cost savings when a suitable batch configuration is selected. The batch activity concept contributes to practice by allowing the specification of batch work in process models and its automatic execution, and to research by extending existing process modeling concepts.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-02-27
Jun Gao, Yi Lu Murphey, Honghui Zhu

Abstract Sideswipe accidents occur primarily when drivers attempt an improper lane change, drift out of their lane, or lose lateral traction. In this paper, a fusion approach is introduced that utilizes data of multiple differing modalities, such as video, GPS, wheel odometry, and potentially IMU data collected from a data logging device (DL1 MK3), to detect a driver's lane-changing behavior using a novel dimensionality reduction model, the collaborative representation optimized projection classifier (CROPC). The criterion of CROPC is to simultaneously maximize the collaborative-representation-based between-class scatter and minimize the collaborative-representation-based within-class scatter in the transformed space. For lane change detection, both feature-level fusion and decision-level fusion are considered. In feature-level fusion, features generated from multiple differing modalities are merged before classification, while in decision-level fusion, an improved Dempster–Shafer theory based on the correlation coefficient (DST-CC) is presented to combine the classification outcomes from two classifiers, each corresponding to one kind of data. The results indicate that the introduced fusion approach using CROPC performs significantly better in terms of detection accuracy than other state-of-the-art classifiers.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-03-27
Andreas Kosmatopoulos, Anastasios Gounaris, Kostas Tsichlas

Abstract Over the past few years, there has been a rapid increase in data originating from evolving networks such as social networks, sensor networks and others. A major challenge that arises when handling such networks and their respective graphs is the ability to issue a historical query on their data, that is, a query concerned with the state of the graph at previous time instances. While there have been a number of works that index the historical data in a time-centric manner (i.e. according to the time instance at which an update event occurs), in this work we focus on the less-explored vertex-centric storage approach (i.e. according to the entity in which an update event occurs). We demonstrate that the design choices for a vertex-centric model are not trivial, by proposing two different modelling and storage models that leverage NoSQL technology and investigating their tradeoffs. More specifically, we experimentally evaluate the two models and show that in certain cases their relative performance can differ by a factor of several. Finally, we provide evidence that simple baseline and non-NoSQL solutions are slower by up to an order of magnitude.
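The vertex-centric idea, indexing update events by the entity they touch rather than by time, can be illustrated with a toy in-memory store. This is a sketch under simplifying assumptions (a plain dict of sorted per-vertex event logs stands in for the NoSQL backend; class and method names are invented):

```python
import bisect

class VertexCentricStore:
    """Toy vertex-centric store for an evolving graph: each vertex keeps a
    time-sorted log of its own update events."""

    def __init__(self):
        self.log = {}  # vertex -> sorted list of (timestamp, op, neighbor)

    def record(self, vertex, timestamp, op, neighbor):
        bisect.insort(self.log.setdefault(vertex, []), (timestamp, op, neighbor))

    def neighbors_at(self, vertex, timestamp):
        """Historical query: reconstruct a vertex's neighborhood at a past
        time instance by replaying its event log up to that instance."""
        neighbors = set()
        for t, op, v in self.log.get(vertex, []):
            if t > timestamp:
                break
            (neighbors.add if op == "add" else neighbors.discard)(v)
        return neighbors


g = VertexCentricStore()
g.record("a", 1, "add", "b")
g.record("a", 2, "add", "c")
g.record("a", 3, "del", "b")
```

A time-centric index would instead sort all events globally by timestamp, making per-vertex historical queries scan unrelated events; the tradeoff between the two layouts is what the paper evaluates.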

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2018-12-13
Sankar Mukherjee, Daya Sagar Gupta, G. P. Biswas

With the rapid growth of internet technologies, Vehicular Ad hoc Networks (VANETs) have been identified as a crucial primitive for vehicular communication, in which the moving vehicles are treated as nodes forming a mobile network. To improve the efficiency and traffic security of communication, a VANET can wirelessly circulate traffic information and status to the participating vehicles (nodes). Before deploying a VANET, a security and privacy mechanism must be implemented to ensure secure communication. To this end, a number of conditional privacy-preserving authentication schemes have been proposed in the literature to guarantee mutual authentication and privacy protection. However, most of these schemes rely on Diffie–Hellman (DH) problems to secure the communication. Note that these DH-type problems can be solved in polynomial time in the presence of modern technologies such as quantum computers. Therefore, to remove these difficulties, we are motivated to design a non-DH-type conditional privacy-preserving authentication scheme that can resist quantum computers. In this paper, we develop the first lattice-based conditional privacy-preserving authentication (LB-CPPA) protocol for VANETs. A random oracle model is used to analyze the security of the proposed protocol. The security of our LB-CPPA scheme is based on the complexity of lattice problems. Through security analysis, we show that our proposal supports message integrity and authentication as well as privacy preservation at the same time. A security comparison of our claims is also provided. Further, we analyze the performance of the proposed scheme and compare it with DH-type schemes.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-05-02
Enver Ever, Purav Shah, Leonardo Mostarda, Fredrick Omondi, Orhan Gemikonakli

Wireless sensor networks (WSNs) form a large part of the ecosystem of the Internet of Things (IoT), hence they have numerous application domains with varying performance and availability requirements. Limited resources, including processing capability, queue capacity, and available energy, in addition to frequent node and link failures, degrade the performance and availability of these networks. In an attempt to efficiently utilise the limited resources and to maintain a reliable network with efficient data transmission, it is common to select a clustering approach, where a cluster head is selected among the diverse IoT devices. This study presents a stochastic performance and energy evaluation model for WSNs subject to both node and link failures. The model developed considers an integrated performance and availability approach. Various duty cycling schemes within the medium-access control of WSNs are also considered to incorporate the impact of sleeping/idle states, which are represented using analytical modelling. The results obtained from the proposed analytical models show the effects of factors such as failures, various queue capacities and system scalability. The analytical results are in very good agreement with simulation results and show that the proposed models are very useful for identifying thresholds between WSN system characteristics.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-02-18
Georgios Sakellariou, Anastasios Gounaris

Abstract The significance of data analytics has been acknowledged in many scientific and business domains. However, the required processing power and memory capacity is a prohibiting factor for performing data analytics on proprietary platforms. An obvious solution is the outsourcing of data analytics to cloud storage and cloud computing providers, but this raises privacy and security issues, given that data can be valuable and/or personal. The aim of this paper is the development of a server-side k-means algorithm over encrypted data using homomorphic encryption, in order to overcome both the data owner's lack of resources and the security concerns. Current solutions that deal with homomorphic encryption impose a heavy load on the data owner's side; this limitation is addressed in this work. More specifically, in this paper, we present a framework for the implementation of a homomorphic version of k-means, discuss the capabilities of current state-of-the-art homomorphic encryption schemes, and propose a novel approach to server-side computation of k-means assuming a new adversary model tailored to modern settings. We instantiate our framework in two different versions in terms of operation assignment, each coming in three flavors of operation implementation. All alternatives are evaluated thoroughly using both real experiments and analytic cost models.
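To see where homomorphic encryption bites, it helps to spell out one plaintext k-means iteration and mark which operations a ciphertext version must support. This is a didactic sketch, not the paper's encrypted protocol: the annotations about which steps are homomorphic-friendly are general observations, and the data is invented.

```python
# One plaintext k-means iteration, annotated with the operations an
# encrypted, server-side version must handle.

def kmeans_step(points, centroids):
    # Assignment: squared Euclidean distance uses only additions and
    # multiplications, which (somewhat) homomorphic schemes support on
    # ciphertexts.
    def sqdist(p, c):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, c))

    clusters = [[] for _ in centroids]
    for p in points:
        # argmin requires comparisons -- the expensive operation under
        # encryption, typically needing client interaction or a dedicated
        # comparison protocol.
        best = min(range(len(centroids)), key=lambda j: sqdist(p, centroids[j]))
        clusters[best].append(p)

    # Update: centroid = coordinate-wise sum / count; the division is
    # usually deferred or delegated in HE-based protocols.
    return [tuple(sum(xs) / len(xs) for xs in zip(*c)) if c else centroids[j]
            for j, c in enumerate(clusters)]


pts = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
new_centroids = kmeans_step(pts, [(0.0, 0.0), (10.0, 10.0)])
```

The paper's framework variants differ precisely in how such operations are assigned between server and data owner.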

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-05-09
Jun Jiang, Liangcai Zeng, Bin Chen, Yang Lu, Wei Xiong

Abstract Traditional calibration paradigms fail to give reliable and accurate results with low-quality 2D planar calibration plates. In this paper, an active method is proposed that employs an LCD panel for camera calibration. This method automatically generates a sequence of virtual patterns in different views by pre-defined transforms, without manual manipulation or the help of other equipment to move the patterns. Then, the projections of the virtual patterns are captured by a camera. The homography between the projective patterns in virtual world coordinates and their images is calculated directly to obtain the camera parameters. Experimental results show that the calibration error is 0.018 pixels in terms of mean re-projection error using 18 virtual patterns, which is significantly less than that of state-of-the-art methods. The proposed scheme makes camera calibration flexible and easy to use.
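The reported metric, mean re-projection error, measures how far pattern points land from their detected image positions after mapping through the estimated homography. A minimal sketch follows; the homography matrix and point coordinates below are invented toy values, not the paper's data.

```python
import math

def project(H, pt):
    """Apply a 3x3 homography (row-major nested lists) to a 2D point,
    dividing by the homogeneous coordinate."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def mean_reprojection_error(H, pattern_pts, image_pts):
    """Average pixel distance between projected pattern points and the
    points actually detected in the image."""
    return sum(math.dist(project(H, p), q)
               for p, q in zip(pattern_pts, image_pts)) / len(pattern_pts)


H = [[2.0, 0.0, 5.0],  # toy homography for illustration
     [0.0, 2.0, 3.0],
     [0.0, 0.0, 1.0]]
pattern = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
detected = [(5.0, 3.0), (7.0, 3.1), (5.0, 5.0)]  # one detection off by 0.1 px
err = mean_reprojection_error(H, pattern, detected)
```

Averaging this error over all correspondences in all 18 virtual patterns yields the single figure (0.018 pixels) the abstract reports.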

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-30
Jyothi Kunchala, Jian Yu, Sira Yongchareon, Chengfei Liu

The artifact-centric approach to business process modeling has received considerable attention for elevating data logic to the same level as process flow logic. With the emergence of this modeling paradigm, several recent works have focused on synthesizing the indispensable lifecycles of key business entities, called artifacts, from standalone activity-centric processes. However, synthesizing artifact lifecycles from inter-organizational business processes (IOBPs) is challenging, as the artifacts and states are shared among two or more collaborating processes. Thus, unlike for a standalone process, the synthesis of artifact lifecycles from an IOBP requires the process interactions to be captured by preserving the dependencies between the involved artifacts and states in the resulting lifecycles. Therefore, in this paper, we propose an automated approach that merges the collaborating processes of an IOBP in order to support the synthesis of artifact lifecycles. The proposed approach comprises algorithms that combine the nodes of collaborating processes to generate an integrated process that can be used to synthesize the artifact lifecycles pertinent to an IOBP. We demonstrate the proposed approach using an e-business process scenario, and its validity is proven using theorems and a prototype implementation.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-27
Yuan-Ko Huang

Currently, many processing techniques for location-based queries provide information on a single type of spatial object, based on its spatial closeness to the query object. However, in real-life applications the user may be interested in obtaining information about different types of objects in terms of their quality, cost, and neighboring relationship. We term a set of objects of different types that have better quality and are closer to each other a Neighboring skyline set (or NS set). Three new types of location-based queries, the Distance-based neighboring skyline query (Dist-NS query), the Cost-based neighboring skyline query (Cost-NS query), and the Budget-based neighboring skyline query (BGT-NS query), are presented to determine the NS sets according to the user's specific requirements. An R-tree-based index, the $$R^{a,c}$$-tree, is first designed to manage each type of object together with its locations, attributes, and costs. Then, a simultaneous traversal of the $$R^{a,c}$$-trees built on the different object types is employed with several pruning criteria to discard non-qualifying object sets as early as possible, so as to improve query performance. Extensive experiments using a synthetic dataset demonstrate the efficiency and effectiveness of the proposed algorithms.
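At the heart of all three query types is the standard skyline dominance test: one candidate set dominates another if it is no worse on every criterion (here distance and cost, both minimized) and strictly better on at least one. A minimal sketch, with invented candidate values; the paper's algorithms apply this test to $$R^{a,c}$$-tree entries during traversal rather than to a flat list:

```python
def dominates(a, b):
    """a, b: tuples of criteria to minimize, e.g. (distance, cost).
    a dominates b if it is <= everywhere and < somewhere."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def skyline(candidates):
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]


# Hypothetical candidate sets as (distance, cost) pairs:
sets = [(3.0, 10.0), (5.0, 4.0), (6.0, 12.0), (2.0, 15.0)]
best = skyline(sets)  # (6.0, 12.0) is dominated by (3.0, 10.0) and drops out
```

Pruning a subtree whose best possible (distance, cost) pair is already dominated is what lets the simultaneous $$R^{a,c}$$-tree traversal avoid enumerating all object combinations.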

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-25
Jinbao Xie, Yongjin Hou, Yujing Wang, Qingyan Wang, Baiwei Li, Shiwei Lv, Yury I. Vorotnitsky

Abstract Owing to the uneven distribution of key features in Chinese texts, key features play different roles in text recognition in Chinese text classification tasks. We propose a feature-enhanced fusion model for Chinese text classification based on an attention mechanism, combining a long short-term memory (LSTM) network, a convolutional neural network (CNN), and a feature-difference enhancement attention algorithm. Through preprocessing, the Chinese text is digitized into vector form containing semantic context information and fed into the embedding layer to train and test the neural network. The feature-enhanced fusion model is implemented by double-layer LSTM and CNN modules to enhance the fusion of the text features extracted by the attention mechanism before classification. The feature-difference enhancement attention algorithm not only assigns more weight to important text features but also strengthens the differences between them and other text features. This operation further improves the effect of important features on Chinese text recognition. Both models perform classification with the softmax function. Text classification experiments are conducted on a Chinese text corpus. The experimental results show that, compared with the contrast models, the proposed algorithm significantly improves the recognition of Chinese text features.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-21
Faris A. Almalki, Marios C. Angelides

Abstract Having reliable telecommunication systems in the immediate aftermath of a catastrophic event makes a huge difference in the combined effort by local authorities, local fire and police departments, and rescue teams to save lives. This paper proposes a physical model that links base stations that are still operational with aerial platforms and then uses a machine learning framework to evolve a ground-to-air propagation model for such an ad hoc network. Such a physical model is quick and easy to deploy, and the underlying air-to-ground (ATG) propagation models are both resilient and scalable and may use a wide range of link budget, grade of service (GoS), and quality of service (QoS) parameters to optimise their performance and, in turn, the effectiveness of the physical model. The prediction results of a simulated deployment of such a physical model and the evolved propagation model in an ad hoc network offer much promise for restoring communication links during emergency relief operations.
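One ingredient of the link-budget parameters such a framework optimizes over is the free-space path loss (FSPL) between an aerial platform and a ground user, FSPL(dB) = 20·log10(d_km) + 20·log10(f_MHz) + 32.44. The sketch below shows this standard formula in a simplified budget; it is an illustration, not the paper's evolved ATG model, which would add excess-loss terms for line-of-sight and non-line-of-sight conditions.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB for distance in km and frequency in MHz."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def received_power_dbm(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                       distance_km, freq_mhz):
    """Simplified link budget: Pr = Pt + Gt + Gr - FSPL."""
    return (tx_power_dbm + tx_gain_dbi + rx_gain_dbi
            - fspl_db(distance_km, freq_mhz))


loss = fspl_db(1.0, 2000.0)  # 1 km link at 2 GHz
```

A machine learning framework would tune parameters such as platform altitude and antenna gains so that the predicted received power stays above the receiver sensitivity required by the GoS/QoS targets.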

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-18
Kostas Kolomvatsos, Christos Anagnostopoulos

Abstract Data management at the edge of the network can increase the performance of applications, as processing is realized close to end users, limiting the latency observed in the provision of responses. Typical data processing involves the execution of queries/tasks defined by users or applications asking for responses in the form of analytics. Query/task execution can be realized at the edge nodes, which can undertake the responsibility of delivering the desired analytics to the interested users or applications. In this paper, we deal with the problem of allocating queries to a number of edge nodes. The aim is to further reduce latency by allocating queries to nodes that exhibit low load and high processing speed, so that they can respond in the minimum time. Before any allocation, we propose a method for estimating the computational burden that a query/task will add to a node, and afterwards we proceed with the final assignment. The allocation is concluded with the assistance of an ensemble similarity scheme, responsible for delivering the complexity class of each query/task, and a probabilistic decision-making model. The proposed scheme matches the characteristics of the incoming queries and the edge nodes to derive the optimal allocation. We discuss our mechanism and, through a large set of simulations and the adoption of benchmarking queries, reveal the potential of the proposed model, supported by numerical results.
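The allocation decision can be caricatured in a few lines: estimate each node's response time for the query from its current load and processing speed, and pick the minimum. This is a deliberately simplified sketch; in the paper the query's burden comes from the ensemble similarity scheme and the choice is made by a probabilistic model, whereas here both are replaced by given numbers and a plain argmin.

```python
def allocate(query_burden, nodes):
    """nodes: dict of node name -> (current_load, processing_speed).
    Estimated response time = (load + burden) / speed; pick the minimum."""
    def estimated_response_time(node):
        load, speed = nodes[node]
        return (load + query_burden) / speed
    return min(nodes, key=estimated_response_time)


# Hypothetical edge nodes (load and speed in arbitrary work units):
nodes = {
    "edge-1": (8.0, 2.0),   # heavily loaded but fast
    "edge-2": (2.0, 1.0),   # lightly loaded but slow
    "edge-3": (3.0, 3.0),   # lightly loaded and fast
}
chosen = allocate(1.0, nodes)
```

Note that neither the least-loaded node ("edge-2") nor raw speed alone decides the outcome; the ratio of total work to speed does, which is why the paper estimates the query's burden before assignment.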

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-12
Wentao Liu, Weipeng Jing, Yang Li

Consumers are increasingly influenced by product reviews when purchasing goods or services. At the same time, deceptive reviews often mislead users. Manually identifying deceptive reviews among massive numbers of reviews is inefficient and inaccurate; therefore, automatically identifying deceptive reviews has become a research trend. Most existing methods are less effective since they lack a deep understanding of the reviews. We propose a neural network method with bidirectional long short-term memory (BiLSTM) and feature combination to learn the representation of deceptive reviews. We conduct a large number of experiments and demonstrate the effectiveness of the proposed method. Specifically, in the mixed-domain detection experiment, the results prove that our model is effective in comparison with other neural network-based methods. BiLSTM gives more than a 3% improvement in F1 score over the most advanced neural network method. Since feature selection plays an important role in this task, we combine features to improve performance, achieving an F1 score of 87.6%, which outperforms the state-of-the-art method. Moreover, in the cross-domain detection experiment, our method achieves an F1 score of 82.4%, about 6% higher than the state-of-the-art method in the restaurant domain, and it is also robust in the doctor domain.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-12
Shirui Wang, Wenan Zhou, Chao Jiang

Word embeddings, which capture lexical semantics in numerical form, are the representational basis for downstream natural language processing tasks, making the abstract semantic content of words tractable. Recently, word embedding approaches, represented by deep learning, have attracted extensive attention and been widely used in many tasks, such as text classification, knowledge mining, question answering, smart Internet of Things systems, and so on. These neural network-based models build on the distributional hypothesis, so that the semantic association between words can be efficiently calculated in low-dimensional space. However, the semantics expressed by most models are constrained by the context distribution of each word in the corpus, while logic and common knowledge are not well utilized. Therefore, how to use massive multi-source data to better represent natural language and world knowledge still needs to be explored. In this paper, we review recent advances in neural network-based word embeddings together with their technical features, summarize the key challenges and existing solutions, and give an outlook on future research and applications.
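The "semantic association between words" that embeddings make efficiently computable is typically the cosine similarity of their vectors. A self-contained sketch follows; the tiny 3-dimensional vectors are made up for illustration (real embeddings have hundreds of dimensions learned from corpora):

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means same direction,
    0.0 orthogonal, -1.0 opposite."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


# Hypothetical low-dimensional embeddings:
emb = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.85, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}
sim_royal = cosine_similarity(emb["king"], emb["queen"])
sim_mixed = cosine_similarity(emb["king"], emb["apple"])
```

Under the distributional hypothesis, words appearing in similar contexts receive nearby vectors, so semantically related pairs score higher than unrelated ones, as in this toy example.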

Updated: 2020-01-04
Contents have been reproduced by permission of the publishers.
