• Computing (IF 2.063) Pub Date : 2020-02-13
Nandhini Sivasubramanian, Gunaseelan Konganathan

Abstract A novel semi-fragile watermarking technique using the integer wavelet transform (IWT) and the discrete cosine transform (DCT) is proposed for tamper detection and recovery, to enhance enterprise multimedia security. Two types of watermark are generated, namely an authentication watermark and a recovery watermark. The watermarked image is formed by embedding the authentication watermark, which is generated using the proposed IWT-based authentication watermark generation technique. Next, the watermarked image is divided into 2 × 2 blocks and a 10-bit recovery watermark is generated from each 2 × 2 block using the proposed DCT-based recovery watermark generation technique. The generated recovery watermark is used to form a recovery tag, which is sent along with the watermarked image to the receiver. At the receiver side, the proposed tamper detection technique verifies authenticity and identifies attacks on the watermarked image. If the manipulations are identified as malicious, the tampered parts of the received image are recovered using the proposed tamper recovery technique. The performance of the proposed tamper detection and recovery technique was tested for different types of incidental/content-preserving manipulations and various types of malicious attacks. Compared to existing semi-fragile watermarking techniques, the proposed embedding technique produced a better PSNR (peak signal-to-noise ratio) for various watermarked images. The proposed tamper detection and recovery technique was also able to localize malicious attacks and subsequently recover the tampered parts better than existing techniques. This improved performance is due to the use of both the Normalized Hamming Similarity (NHS) and a tamper detection map to identify manipulations, and to the generation of both an authentication and a recovery watermark.
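As a rough illustration of the recovery-watermark step, the following sketch quantizes the 2-D DCT coefficients of a single 2 × 2 block into a 10-bit code. The 6/4 bit split between the DC coefficient and the strongest AC coefficient is an assumption made for the example; the paper's exact quantization rule is not reproduced here.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct2(block):
    """2-D DCT-II (orthonormal) of a small image block."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def recovery_code(block, dc_bits=6, ac_bits=4):
    """Illustrative 10-bit recovery watermark for one 2x2 block: the DC
    coefficient gets 6 bits, the strongest AC magnitude gets 4 bits."""
    c = block_dct2(block.astype(float))
    dc = int(round(c[0, 0] / 510.0 * (2 ** dc_bits - 1)))  # DC of an 8-bit 2x2 block lies in [0, 510]
    ac_mag = np.max(np.abs(c.flatten()[1:]))
    ac = int(round(min(ac_mag / 255.0, 1.0) * (2 ** ac_bits - 1)))
    return (dc << ac_bits) | ac                            # pack into one 10-bit integer

block = np.array([[120, 130], [125, 135]], dtype=np.uint8)
print(format(recovery_code(block), '010b'))
```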

Updated: 2020-02-13
• Computing (IF 2.063) Pub Date : 2019-08-01

Abstract Over the last decade, the increased use of social media has led to an increase in hateful activities in social networks. Hate speech is one of the most dangerous of these activities, so users have to be protected from it on platforms such as YouTube, Facebook and Twitter. This paper introduces a method that uses a hybrid of natural language processing and machine learning techniques to predict hate speech on social media websites. After hate speech data is collected, stemming, token splitting, character removal and inflection elimination are performed before the hate speech recognition process. The collected data is then examined using a killer natural language processing optimization ensemble deep learning approach (KNLPEDNN). This method detects hate speech on social media websites using an effective learning process that classifies the text into neutral, offensive and hate language. The performance of the system is evaluated using overall accuracy, F-score, precision and recall metrics. The system attained a mean square error of 0.019, a cross-entropy loss of 0.015, a logarithmic loss of 0.0238 and 98.71% accuracy.
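A minimal sketch of the preprocessing stage described above, assuming a Porter stemmer for the stemming/inflection-elimination step (the paper's exact rules are not specified):

```python
import re
from nltk.stem import PorterStemmer  # pip install nltk

stemmer = PorterStemmer()

def preprocess(text):
    """Character removal, token splitting and stemming, as named in
    the abstract; the concrete rules here are illustrative only."""
    text = re.sub(r'[^a-z\s]', ' ', text.lower())  # character removal
    tokens = text.split()                          # token splitting
    return [stemmer.stem(t) for t in tokens]       # stemming / inflection elimination

print(preprocess("Haters keep posting hateful comments!!!"))
# -> ['hater', 'keep', 'post', 'hate', 'comment']
```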

Updated: 2020-02-10
• Computing (IF 2.063) Pub Date : 2019-04-27
Karoline Saatkamp, Uwe Breitenbücher, Oliver Kopp, Frank Leymann

Abstract For automating the deployment of applications in cloud environments, a variety of technologies have been developed in recent years. These technologies make it possible to specify the desired deployment in the form of deployment models that can be automatically processed by a provisioning engine. However, deployment across several clouds increases the complexity of provisioning. Using one deployment model with a single provisioning engine, which orchestrates the deployment across the clouds, forces the providers to expose low-level APIs to ensure accessibility from outside. In this paper, we present an extended version of the split and match method to facilitate the division of deployment models into multiple models that can be deployed by each provider separately. The goal of this approach is to reduce the information and APIs that have to be exposed to the outside. We present a formalization and algorithms to automate the method. Moreover, we validate the practical feasibility with a prototype based on the TOSCA standard and the OpenTOSCA ecosystem.

Updated: 2020-02-10
• Computing (IF 2.063) Pub Date : 2019-09-13
Wided Abidi, Tahar Ezzedine

Abstract The major challenge in wireless sensor networks is to reduce energy consumption and increase the lifetime of the network. In this paper, we propose an effective protocol to address this issue. Our protocol first inserts heterogeneous nodes into the network and then divides the network into regions. Finally, the cluster head (CH) is selected using the remaining energy of the node, the number of neighbors within cluster range, and the distance between the node and the CH. Simulation results confirm that our proposed protocol is energy efficient, prolonging the lifetime of the network and reducing energy consumption.
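A toy version of such a cluster-head selection rule is sketched below; the weighting of the three factors is an assumption for illustration, not the paper's formula.

```python
def ch_score(node, w_energy=0.5, w_neighbors=0.3, w_dist=0.2):
    """Illustrative cluster-head fitness combining the three factors
    named in the abstract (weights and combination rule are assumed)."""
    return (w_energy * node['residual_energy'] / node['initial_energy']
            + w_neighbors * node['neighbors_in_range'] / node['max_neighbors']
            + w_dist * (1.0 - node['dist_to_ch'] / node['max_range']))

nodes = [
    {'id': 1, 'residual_energy': 0.8, 'initial_energy': 1.0,
     'neighbors_in_range': 6, 'max_neighbors': 10,
     'dist_to_ch': 20.0, 'max_range': 50.0},
    {'id': 2, 'residual_energy': 0.5, 'initial_energy': 1.0,
     'neighbors_in_range': 9, 'max_neighbors': 10,
     'dist_to_ch': 35.0, 'max_range': 50.0},
]
print('cluster head:', max(nodes, key=ch_score)['id'])  # -> 1
```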

Updated: 2020-02-10
• Computing (IF 2.063) Pub Date : 2019-07-05

Abstract Clouds are becoming an effective platform for scientific workflow applications. In the meantime, cloud computing structures are becoming more heterogeneous. In heterogeneous service-oriented systems, managing the reliability of resources (e.g., processors and communication networks) is widely identified as a critical issue, because processor and communication failures affect user quality-of-service requirements. Therefore, these types of failures should be taken into account when designing scheduling algorithms. The present paper proposes a scheduling approach comprising four algorithms for minimizing the workflow execution cost while also meeting the user-specified deadline and reliability. To meet the application's requirements, the first algorithm, called CbCP, partitions the workflow into several clusters based on the critical parent. After that, the resource assignment algorithm, consisting of reliability and deadline distribution methods, satisfies the application's constraints. Experimental results on various workflows, generated at different scales in both real and random fashion, demonstrate that the proposed heuristics meet the deadline and reliability constraints at minimal cost while delivering a quality of service similar to that of the state-of-the-art DRR and QFEC+ algorithms.

Updated: 2020-02-10
• Computing (IF 2.063) Pub Date : 2019-07-30
Caihua Liu, Patrick Nitschke, Susan P. Williams, Didar Zowghi

Abstract The Internet of Things (IoT) is driving technological change and the development of new products and services that rely heavily on the quality of the data collected by IoT devices. There is a large body of research on data quality management and improvement in IoT; however, to date, a systematic review of data quality measurement in IoT is not available. This paper presents a systematic literature review (SLR) of data quality in IoT from the emergence of the term IoT in 1999 to 2018. We reviewed and analyzed 45 empirical studies to identify research themes on data quality in IoT. Based on this analysis, we establish the links between data quality dimensions, manifestations of data quality problems, and methods utilized to measure data quality. The findings of this SLR suggest new research areas for further investigation and identify implications for practitioners in defining and measuring data quality in IoT.

Updated: 2020-02-10
• Computing (IF 2.063) Pub Date : 2019-02-13
Claire Prudhomme, Timo Homburg, Jean-Jacques Ponciano, Frank Boochs, Christophe Cruz, Ana-Maria Roxin

Abstract In the context of disaster management, geospatial information plays a crucial role in the decision-making process to protect and save the population. Gathering a maximum of information from different sources to oversee the current situation is a complex task due to the diversity of data formats and structures. Although several approaches have been designed to integrate data from different sources into an ontology, they mainly require background knowledge of the data. However, non-standard data set schemas (NSDS) of relational geospatial data retrieved from, e.g., web feature services are not always documented. This lack of background knowledge is a major challenge for automatic semantic data integration. Focusing on this problem, this article presents an automatic approach for integrating geospatial data in NSDS. The approach performs a schema mapping according to the result of an ontology matching that corresponds to a semantic interpretation process based on geocoding and natural language processing. This article extends work done in a previous publication with an improved unit detection algorithm, data quality and provenance enrichments, and the detection of feature clusters. It also presents an improved evaluation process to better assess the performance of the approach compared to a manually created ontology. These experiments show that the automatic approach obtains a semantic interpretation error of around 10% relative to the manual approach.

Updated: 2020-02-10
• Computing (IF 2.063) Pub Date : 2019-10-01
Ioanna Lytra, Carlos Carrillo, Rafael Capilla, Uwe Zdun

Abstract Over the past 10 years, software architecture has been perceived as the result of a set of architecture design decisions rather than just the elements that form part of the software design. As quality attributes are considered major drivers of the design process to achieve high-quality systems, the design decisions that drive the selection and use of specific quality properties, and vice versa, are closely related. Consequently, quality attributes must play a role in decision-making processes and be documented alongside the decisions captured. We therefore conduct a systematic literature review to study the importance and impact of the relationships between quality attributes and architecture design decisions, and the extent to which existing architecture knowledge management methods and tools deal with the decisions that affect the quality of a system. We also report on the challenges and future research paths for architectural knowledge management methods and tools. Our results reveal important explicit relationships between both software artifacts, the role of uncertainty in decision making, and empirical studies reporting the use of quality attributes in architecture knowledge management activities.

Updated: 2020-02-10
• Computing (IF 2.063) Pub Date : 2019-08-10
Chinmaya Kumar Swain, Neha Saini, Aryabartta Sahu

Updated: 2020-02-10
• Computing (IF 2.063) Pub Date : 2018-12-05
Denis Kotkov, Jari Veijalainen, Shuaiqiang Wang

Abstract Most recommender systems suggest items that are popular among all users and similar to items a user usually consumes. As a result, the user receives recommendations that she/he is already familiar with or would find anyway, leading to low satisfaction. To overcome this problem, a recommender system should suggest novel, relevant and unexpected, i.e., serendipitous, items. In this paper, we propose a serendipity-oriented reranking algorithm called the serendipity-oriented greedy (SOG) algorithm, which improves the serendipity of recommendations through feature diversification and helps overcome the overspecialization problem. To evaluate our algorithm, we employed the only publicly available dataset containing user feedback regarding serendipity. We compared our SOG algorithm with topic diversification, a popularity baseline, singular value decomposition, serendipitous personalized ranking and Zheng's algorithm on this dataset. SOG outperforms the other algorithms in terms of serendipity and diversity. It also outperforms the serendipity-oriented algorithms in terms of accuracy, although it remains behind the accuracy-oriented algorithms on that metric. We found that increasing diversity can hurt accuracy and can either harm or improve serendipity depending on the size of the increase.
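In the spirit of the SOG reranker, the sketch below greedily picks items that trade off relevance against feature overlap with what has already been selected; the trade-off parameter alpha and the overlap measure are assumptions, not the paper's exact objective.

```python
def greedy_rerank(candidates, relevance, features, alpha=0.5, k=10):
    """Greedily pick the item with the best blend of relevance and
    novelty w.r.t. the features already covered by the selection."""
    selected, covered = [], set()
    pool = set(candidates)
    while pool and len(selected) < k:
        def score(item):
            overlap = len(features[item] & covered) / max(len(features[item]), 1)
            return alpha * relevance[item] + (1 - alpha) * (1 - overlap)
        best = max(pool, key=score)
        selected.append(best)
        covered |= features[best]
        pool.remove(best)
    return selected

relevance = {'a': 0.90, 'b': 0.85, 'c': 0.80}
features = {'a': {'comedy'}, 'b': {'comedy'}, 'c': {'documentary'}}
print(greedy_rerank(['a', 'b', 'c'], relevance, features, k=2))  # ['a', 'c']
```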

Updated: 2020-02-10
• Computing (IF 2.063) Pub Date : 2020-02-06
Pedro Valderas, Victoria Torres, Vicente Pelechano

Abstract Nowadays, end users are surrounded by plenty of services that support their daily routines and activities. Involving end users in the process of service creation can allow them to benefit from cheaper, faster, and better service provisioning. Even though there are already tools that face this challenge, they consider end users as isolated individuals. In this paper, we investigate how social networks can be used to improve the composition of services by end users. To do so, we propose a graph-based definition of a social structure, and analyse how social connections can be exploited both to help end users discover services by browsing these connections, and to recommend services to end users during the composition activity. As a proof of concept, we implement and evaluate the proposed social network in the context of EUCalipTool, a mobile end-user environment for composing services.

Updated: 2020-02-07
• Computing (IF 2.063) Pub Date : 2020-02-06
Abdulwahab Aljubairy, Wei Emma Zhang, Ali Shemshadi, Adnan Mahmood, Quan Z. Sheng

Abstract Flight delay is a significant problem that negatively impacts the aviation industry and costs billions of dollars each year. Most existing studies have investigated this issue using various methods based on historical data. However, due to the highly dynamic environment of the aviation industry, relying only on historical datasets of flight delays may not be sufficient to forecast future flights. The purpose of this research is to study flight delays from a new angle by utilising data generated from the emerging Internet of Things (IoT) paradigm. Our primary goal is to improve the understanding of the roots and signs of flight delays and to discover related factors. In this paper, we present a framework that aims at addressing the flight delay problem. We consider IoT data generated from distributed sensors that have not been considered in existing work on flight delay analysis; for that purpose, an automatic tool is developed to collect IoT data from various sources, including flight, weather, and air quality index data. Based on these heterogeneous data, an algorithm is developed to merge features from the diverse data sources. We adopt predictive modelling to study the factors that contribute to flight delays and to predict future flight delays. The results of our work show a high correlation among the developed features and, in particular, clearly demonstrate the association between flight delays and the air quality index. Our current prediction model achieves 85.74% accuracy.

Updated: 2020-02-07
• Computing (IF 2.063) Pub Date : 2020-02-05
Janine Kniess, Samuel Oliveira

Abstract Wireless sensor networks are commonly used to collect observations of real-world phenomena at regular time intervals. Sensor nodes rely on limited power sources, and several studies indicate that the main source of energy consumption is data transmission. In this paper, we propose an approach to reduce data transmissions in sensor nodes based on data dispersion analysis. The approach avoids transmitting measurements whose values present low dispersion while keeping the CPU utilization rate low. Performance evaluation results obtained with the Castalia simulator confirm that the approach is promising: it reduces data transmissions while maintaining low processing time, good data accuracy and low energy consumption.
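A minimal sketch of dispersion-gated reporting, assuming a sliding window and a standard-deviation threshold (both values invented for the example):

```python
from collections import deque
import statistics

class DispersionFilter:
    """A reading is transmitted only when the recent window shows
    enough spread; window size and threshold are assumptions."""
    def __init__(self, window=10, threshold=0.5):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def should_transmit(self, value):
        self.buf.append(value)
        if len(self.buf) < 2:
            return True                       # not enough history yet
        return statistics.pstdev(self.buf) > self.threshold

sensor = DispersionFilter()
readings = [20.0, 20.1, 20.0, 20.1, 23.5, 23.6]
sent = [r for r in readings if sensor.should_transmit(r)]
print(sent)  # -> [20.0, 23.5, 23.6]; stable readings are suppressed
```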

Updated: 2020-02-06
• Computing (IF 2.063) Pub Date : 2020-01-31
Shayan Zamanirad, Boualem Benatallah, Moshe Chai Barukh, Carlos Rodriguez, Reza Nouri

Abstract In law enforcement, investigators are typically tasked with analyzing large collections of evidence in order to identify and extract key information to support investigation cases. In this context, events are key elements that help in understanding and reconstructing what happened from the collection of evidence items. With the ever-increasing amount of data (e.g., e-mails and content from social media) gathered today as part of investigation tasks (for the most part manually), managing such an amount of data can be challenging and prone to missing important details that could be highly relevant to a case. In this paper, we aim to facilitate the work of investigators through a framework for deriving insights from data. We focus on the auto-recognition and dynamic tagging of event types (e.g., phone calls) from (textual) evidence items, and propose a framework to facilitate these tasks and provide support for insights and discovery. The experimental results obtained by applying our approach to a real legal dataset demonstrate the feasibility of our proposal, achieving good performance in the task of automatically recognizing and tagging event types of interest.

Updated: 2020-01-31
• Computing (IF 2.063) Pub Date : 2020-01-29
Ludovico Boratto, Matteo Manca, Giuseppe Lugano, Marián Gogola

Abstract Journey planners support users in the organization of their trips by presenting them with multimodal solutions. While the benefits for the users are straightforward, other stakeholders (such as transport operators and planners) might benefit from understanding how users behave. In this paper, we analyze and characterize user behavior in journey planners, with the aim of getting insights from different perspectives (namely, trip search and both sorting and selection actions related to trip options). Our results show that, in order to characterize user behavior, multiple perspectives have to be taken into account, and that users speaking different languages behave differently.

Updated: 2020-01-30
• Computing (IF 2.063) Pub Date : 2020-01-29
David Ralph, Yunjia Li, Gary Wills, Nicolas G. Green

Abstract This paper examines the challenging problem of new-user cold starts in subset-labelled and extremely sparsely labelled big data. We introduce a new Isle of Wight Supply Chain (IWSC) dataset demonstrating these characteristics. We also introduce a new technique addressing these challenges, the Transitive Semantic Relationships (TSR) model, which infers potential relationships from user and item text content and few labelled examples. We perform both implicit and explicit evaluation of TSR as a recommender system; from new-user cold starts we achieve a hit-rate@10 of 77% on a collection of 630 items with only 376 supply-chain consumer labels, and 67% with only 142 supply-chain supplier labels, demonstrating a high level of performance even with extremely few labels in challenging cold-start scenarios. TSR is suitable for any dataset featuring few labels and user and item content, where similarity of content indicates similar relationship-forming capability. TSR can be used as a standalone recommender system or to complement existing high-performance recommender models that require more labels or do not support cold starts.

Updated: 2020-01-30
• Computing (IF 2.063) Pub Date : 2020-01-16
Hammad ur Rehman Qaiser, Gao Shu

Abstract Workload uncertainty has increased with the integration of the Internet of Things into the computing grid, i.e., edge computing and cloud data centers. Therefore, efficient resource utilization in cloud data centers becomes more challenging. Dynamic consolidation of virtual machines on an optimal number of processing machines can increase the efficiency of resource utilization in cloud data centers. This process requires the migration of virtual machines from under-utilized and over-utilized processing machines to other suitable machines. In this work, the problem of efficient replacement of virtual machines is solved using a well-known game-theoretic concept, the Nash equilibrium (NE). We designed a Nash-equilibrium-based game between two players, an over-load manager and an under-load manager, to deduce the dominant strategy profiles for various scenarios during consolidation cycles. A dominant strategy profile is a set of strategies in which no player has an incentive to deviate, thus leading to an equilibrium position. A virtual machine redeployment algorithm, Nash Equilibrium based Virtual Machines Replacement (NE-VMR), is proposed on the basis of the dominant strategy profiles for efficient consolidation. Experimental results show that NE-VMR is a more efficient server consolidation technique, saving 30% energy and improving quality of service by 35% compared to baselines.
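To make the game-theoretic vocabulary concrete, the toy payoff matrices below admit a dominant-strategy profile that is also the unique pure Nash equilibrium; the payoff numbers are invented and do not come from the paper.

```python
import numpy as np

# Invented 2x2 payoffs: rows = over-load manager, cols = under-load manager
P1 = np.array([[3, 1],
               [4, 2]])   # row player's payoffs
P2 = np.array([[3, 4],
               [1, 2]])   # column player's payoffs

def pure_nash(P1, P2):
    """Cells where each player's action is a best response to the other's."""
    return [(i, j)
            for i in range(P1.shape[0]) for j in range(P1.shape[1])
            if P1[i, j] == P1[:, j].max() and P2[i, j] == P2[i, :].max()]

print(pure_nash(P1, P2))  # [(1, 1)]: each player's second strategy dominates
```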

Updated: 2020-01-16
• Computing (IF 2.063) Pub Date : 2020-01-11
Luca Cagliero, Paolo Garza, Giuseppe Attanasio, Elena Baralis

Forecasting stock markets is among the most popular research challenges in finance. Several quantitative trading systems based on supervised machine learning approaches have been presented in the literature. Recently proposed solutions train classification models on historical stock-related datasets. Training data include a variety of features related to different facets (e.g., stock price trends, exchange volumes, price volatility, news and public mood). To increase the accuracy of the predictions, multiple models are often combined using ensemble methods. However, understanding which models should be combined, and how to effectively handle features related to different facets within different models, are still open research questions. In this paper we investigate the use of ensemble methods to combine faceted classification models for supporting stock trading. To this aim, separate classification models are trained on each subset of features belonging to the same facet, producing trading signals tailored to a specific facet. The signals are then combined and filtered to generate a unified, multi-faceted recommendation. The experimental validation, performed on different markets and under different conditions, shows that in many cases some of the faceted models perform as well as or better than models trained on a mix of different features. An ensemble of the faceted recommendations makes the generated trading signals more profitable and more robust to draw-down periods.
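A minimal sketch of the facet-wise ensembling idea, with assumed column groups per facet and majority voting as the combination rule (the paper's actual filtering step is not reproduced):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_faceted_models(X, y, facets):
    """One classifier per facet: each model sees only the columns
    belonging to its facet (the column groups below are assumptions)."""
    models = {}
    for name, cols in facets.items():
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[:, cols], y)
        models[name] = (clf, cols)
    return models

def ensemble_signal(models, x):
    """Majority vote over the facet-specific trading signals."""
    votes = [clf.predict(x[cols].reshape(1, -1))[0] for clf, cols in models.values()]
    values, counts = np.unique(votes, return_counts=True)
    return values[counts.argmax()]

facets = {'price_trend': [0, 1, 2], 'volume': [3, 4], 'sentiment': [5, 6]}
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 7)), rng.integers(0, 2, size=200)  # toy data
models = train_faceted_models(X, y, facets)
print(ensemble_signal(models, X[0]))  # 0 = sell/hold, 1 = buy (toy labels)
```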

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2020-01-11
Khalid Belhajjame, Noura Faci, Zakaria Maamar, Vanilson Burégio, Edvan Soares, Mahmoud Barhamgi

Abstract Computing-intensive experiments in modern sciences have become increasingly data-driven, perfectly illustrating the Big-Data era. These experiments are usually specified and enacted in the form of workflows that need to manage (i.e., read, write, store, and retrieve) highly sensitive data like persons' medical records. We assume in this work that the operations that constitute a workflow are 1-to-1 operations, in the sense that for each input data record they produce a single data record. While there is an active body of research on how to protect sensitive data by, for instance, anonymizing datasets, only a limited number of approaches assist scientists in identifying the datasets generated by workflows that need to be anonymized, and in setting the anonymization degree that must be met. We present in this paper a solution for specifying the privacy requirements of datasets used and generated by a workflow execution. We also present a technique for anonymizing workflow data given an anonymity degree.

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2020-01-11
Daniela Renga, Daniele Apiletti, Danilo Giordano, Matteo Nisi, Tao Huang, Yang Zhang, Marco Mellia, Elena Baralis

Abstract Data-driven models are becoming of fundamental importance in electric distribution networks to enable predictive maintenance, to perform effective diagnosis and to reduce related expenditures, with the final goal of improving the efficiency and reliability of the electric service to the benefit of both citizens and grid operators. This paper considers a dataset collected over 6 years in a real-world medium-voltage distribution network by the Supervisory Control And Data Acquisition (SCADA) system. A transparent, exploratory, and exhaustive data-mining workflow, based on data characterisation, time-windowing, association rule mining, and associative classification, is proposed and experimentally evaluated to automatically identify correlations and build a prognostic–diagnostic model from the SCADA events occurring before and after specific service interruptions, i.e., network faults. Our results, evaluated by both data-driven quality metrics and domain expert interpretations, highlight the limited predictive capability of the SCADA events for medium-voltage distribution networks, while their effective exploitation for diagnostic purposes is promising.

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2020-01-11
Mahsa Fozuni Shirjini, Saeed Farzi, Amin Nikanjam

Social network analysis has become an important topic for researchers in sociology and computer science. Similarities among individuals form communities, the basic constituents of social networks. Given the importance of communities, community detection is a fundamental step in the study of social networks, which are typically modeled as large-scale graphs. Detecting communities in such large-scale graphs, which generally suffer from the curse of dimensionality, is the main objective of this study. An efficient modularity-based community detection algorithm called MDPCluster is introduced to detect communities in large-scale graphs in a timely manner. To address the high-dimensionality problem, MDPCluster first uses a Louvain-based algorithm to identify initial communities as super-nodes, and then a Modified Discrete Particle Swarm Optimization algorithm, called MDPSO, detects communities by maximizing the modularity measure. MDPSO discretizes particle swarm optimization using the idea of transmission tendency and escapes premature convergence through a mutation operator inspired by the genetic algorithm. To evaluate the proposed method, six standard datasets were employed: American College Football, Books about US Politics, Amazon Product Co-purchasing, DBLP, GR-QC and HEP-TH. The first two are known as synthetic datasets, whereas the rest are real-world datasets. In comparison to eight state-of-the-art algorithms, i.e., the Stationary Genetic Algorithm, the Generational Genetic Algorithm, the Simulated Annealing-Stationary Genetic Algorithm, the Simulated Annealing-Generational Genetic Algorithm, Girvan–Newman, Danon and the Label Propagation Algorithm, the results indicate the superiority of MDPCluster in terms of modularity, Normalized Mutual Information and execution time.
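The first stage described above can be illustrated with off-the-shelf tools: a Louvain pass produces the initial communities, which are then collapsed into super-nodes, the structure the MDPSO stage would search over. The MDPSO refinement itself is not reproduced here.

```python
import networkx as nx  # requires networkx >= 2.8 for louvain_communities

G = nx.karate_club_graph()  # small stand-in for a large-scale graph

# Stage 1: Louvain pass yields the initial communities ("super-nodes")
initial = nx.community.louvain_communities(G, seed=42)
print('initial communities:', len(initial))
print('modularity Q =', round(nx.community.modularity(G, initial), 3))

# Collapse each community into a super-node; this smaller quotient graph
# is what a discrete-PSO refinement stage would then search over.
H = nx.quotient_graph(G, [set(c) for c in initial], relabel=True)
print('super-node graph:', H.number_of_nodes(), 'nodes,', H.number_of_edges(), 'edges')
```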

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2019-06-03
Abdullah Lakhan, Xiaoping Li

Abstract The Mobile Cloudlet Computing (MCC) paradigm allows the execution of resource-intensive mobile applications on resource-constrained mobile devices by offloading computation to cloud resources. Computational offloading requires the mobile application to be partitioned during execution in the MCC so that the total execution cost is minimized. At run-time in the MCC, network contexts (i.e., network bandwidth, signal strength, latency, etc.) change intermittently, and transient failures (due to temporary network connection failures, busy services, or database disks running out of storage) often occur for short periods of time. Therefore, transient-failure-aware partitioning of the mobile application at run-time is a challenging task. Existing MCC platforms offer monolithic computational services based on heavyweight virtual machines, which incur long VM startup times and high overhead and cannot meet the requirements of fine-grained microservice applications (e.g., e-healthcare, e-business, 3D games, and augmented reality). To cope with these issues, we propose a microservices-based mobile cloud platform that exploits containerization to replace heavyweight virtual machines, and we propose the application partitioning task assignment (APTA) algorithm, which determines application partitioning at run-time and adopts a fault-aware (FA) policy to execute microservice applications robustly, without interruption, in the MCC. Simulation results validate that the proposed platform not only shrinks the setup time of the run-time platform but also reduces the energy consumption of nodes and improves application response time compared to existing VM-based MCC and application partitioning strategies.

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2019-07-24
Manojit Ghose, Sawinder Kaur, Aryabartta Sahu

Cloud computing has emerged as a promising computing paradigm in recent times. As high energy consumption in cloud systems creates several problems, cloud service providers need to focus on energy consumption while providing the required service to their users. Cloud systems need to efficiently execute various real-time applications, and designing energy-efficient scheduling algorithms for these applications has gained research momentum. In this paper, we consider scheduling of real-time tasks for a virtualized cloud system which provides VMs with discrete compute capacities. Depending on the characteristics of the tasks, we divide the problem into four subproblems and propose a solution for each. For the subproblem with arbitrary execution times and deadlines, we use four different methods to cluster the tasks depending on their deadline values. Experiments performed in the CloudSim toolkit compare the clustering methods, and the results show that the clustering method can be chosen based on the specification of the cloud system. We also compared our approach with a standard energy-efficient scheduling technique on both synthetic data sets and a real-world trace, and observed an average energy reduction of around 17% and 15% for the synthetic data sets and the real-world trace, respectively (compared to the baseline policy).

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2019-05-31
Yueqiang Shang

Abstract Based on a fully overlapping domain decomposition technique, a parallel stabilized equal-order finite element method for the steady Stokes equations is presented and studied. In this method, each processor computes a local stabilized finite element solution in its own subdomain by solving a global problem on a global mesh that is locally refined around its subdomain, where the lowest equal-order finite element pairs (continuous piecewise linear, bilinear or trilinear velocity and pressure) are used for the finite element discretization and a pressure-projection-based stabilization method is employed to circumvent the discrete inf–sup condition that is invalid for the used finite element pairs. The parallel stabilized method is unconditionally stable, free of parameter and calculation of derivatives, and is easy to implement based on an existing sequential solver. Optimal error estimates are obtained by the theoretical tool of local a priori error estimates for finite element solutions. Numerical results are also given to verify the theoretical predictions and illustrate the effectiveness of the method.
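For reference, the steady Stokes system the method discretizes reads, in the standard strong form (with velocity $$\mathbf{u}$$, pressure $$p$$, viscosity $$\nu$$, and homogeneous Dirichlet boundary conditions assumed for simplicity):

$$
\begin{aligned}
-\nu\,\Delta \mathbf{u} + \nabla p &= \mathbf{f} \quad \text{in } \Omega,\\
\nabla \cdot \mathbf{u} &= 0 \quad \text{in } \Omega,\\
\mathbf{u} &= \mathbf{0} \quad \text{on } \partial\Omega .
\end{aligned}
$$

The equal-order pairs mentioned above violate the discrete inf–sup condition for this system, which is why the pressure-projection stabilization is needed.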

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2019-06-29
Geomar A. Schreiner, Denio Duarte, Ronaldo dos Santos Mello

Abstract Big Data management has brought several challenges to data-centric applications, such as support for data heterogeneity, rapid data growth and huge data volumes. NoSQL databases have been proposed to tackle Big Data challenges by offering horizontal scalability, schemaless data storage and high availability, among other features. However, NoSQL databases do not have a standard query language, which brings a steep learning curve for developers. On the other hand, traditional relational databases and SQL are very popular standards for storing and manipulating critical data, but they are not suited to Big Data management. One solution for relational-based applications moving to NoSQL databases is to offer a way to access NoSQL databases through SQL instructions. Several approaches have been proposed for translating relational database schemata and operations to equivalent ones in NoSQL databases in order to improve scalability and availability. However, these approaches map relational databases only to a single NoSQL data model and, sometimes, to a specific NoSQL database product. This paper presents a canonical approach, called SQLToKeyNoSQL, that translates relational schemata as well as SQL instructions into equivalent schemata and access methods of any key-oriented NoSQL database. We present the architecture of our layer, focusing on the mapping strategies, together with experiments that evaluate the benefits of our approach against state-of-the-art baselines.

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2019-06-21
Mingzhi Chen, Daqi Zhu

Due to the unique nature of underwater acoustic communication, data collection from Underwater Acoustic Sensor Networks (UASNs) is a challenging problem. It has been reported that data collection from UASNs is more convenient with the assistance of autonomous underwater vehicles (AUVs). The AUV needs to schedule a tour that contacts all sensors once, which is a variant of the Traveling Salesman Problem. A hybrid optimization algorithm is proposed to solve the problem, combining quantum-behaved particle swarm optimization with an improved ant colony optimization algorithm. It is an algorithm with quadratic complexity that can yield approximate but satisfactory results for the problem. Simulation experiments are carried out to demonstrate the efficiency of the algorithm. Compared to the Self-Organizing Map based (SOM-based) algorithm, it not only plans a shorter tour, but also shortens the distance from each sensor to its closest waypoint. Therefore, the algorithm can reduce the energy required for data transmission, since the communication distance drops, and the service life of the sensors can be extended.

Updated: 2020-01-13
• Computing (IF 2.063) Pub Date : 2020-01-09
Khaled Fawagreh, Mohamed Medhat Gaber

Abstract In predictive healthcare data analytics, high accuracy is both vital and paramount as low accuracy can lead to misdiagnosis, which is known to cause serious health consequences or death. Fast prediction is also considered an important desideratum particularly for machines and mobile devices with limited memory and processing power. For real-time health care analytics applications, particularly the ones that run on mobile devices, such traits (high accuracy and fast prediction) are highly desirable. In this paper, we propose to use an ensemble regression technique based on CLUB-DRF, which is a pruned Random Forest that possesses these features. The speed and accuracy of the method have been demonstrated by an experimental study on three medical data sets of three different diseases.
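CLUB-DRF itself is described elsewhere by the authors; the sketch below conveys the general idea of forest pruning: cluster the trees of a random forest by the similarity of their predictions and keep one representative per cluster. The use of k-means over training-set predictions is an assumption for the example.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Grow a full forest, then cluster its trees by prediction similarity
# and keep one representative per cluster (a 10x smaller forest).
X, y = load_diabetes(return_X_y=True)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

preds = np.array([t.predict(X) for t in rf.estimators_])        # tree behaviour
km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(preds)

keep = {int(np.argmin(((preds - c) ** 2).sum(axis=1))) for c in km.cluster_centers_}
pruned = [rf.estimators_[i] for i in keep]
pruned_pred = np.mean([t.predict(X) for t in pruned], axis=0)   # faster ensemble
print(f"trees kept: {len(pruned)} of {len(rf.estimators_)}")
```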

Updated: 2020-01-09
• Computing (IF 2.063) Pub Date : 2020-01-09
Munish Bhatia, Sandeep K. Sood, Simranpreet Kaur

Load scheduling has been a major challenge in distributed fog computing environments for meeting the demands of real-time decision-making. This research proposes a quantumized approach for scheduling heterogeneous tasks in fog computing-based applications. Specifically, a node-specific metric, the Node Computing Index, is defined to estimate the computational capacity of fog computing nodes. Moreover, a QCI-Neural Network Model is proposed for predicting the optimal fog node for handling a heterogeneous task in real-time. In order to validate the proposed approach, experimental simulations were performed in different configurations using 5, 10, 15 and 20 fog nodes to schedule heterogeneous tasks obtained from online Google Job datasets. A comparative analysis against state-of-the-art scheduling models, including Heterogeneous Earliest Finish Time, Min–Max, and Round Robin, was performed to determine the performance enhancement. The proposed approach achieved better performance, with an execution delay of 30.01 s for 20 nodes. In addition, high values of statistical estimators such as specificity (90.99%), sensitivity (89.76%), precision (91.15%) and coverage (94.56%) were recorded, reflecting the enhancement in overall system performance.

Updated: 2020-01-09
• Computing (IF 2.063) Pub Date : 2020-01-09
Liuyan Chen, Lukasz Golab

In computational linguistics, binary sentiment analysis methods have been proposed to predict whether a document expresses a positive or a negative opinion. In this paper, we study a unique research problem—identifying environmental stimuli that contribute to different moods (mood triggers). Our analysis is enabled by an anonymous micro-journalling dataset, containing over 700,000 short journals from over 67,000 writers and their self-reported moods at the time of writing. We first build a multinomial logistic regression model to predict the mood (e.g., happy, sad, tired, productive) associated with a micro-journal. We then examine the model to identify predictive words and word trigrams associated with various moods. Our study offers new data-driven insights into public well-being.
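The modelling recipe, word/trigram features feeding a multinomial logistic regression whose per-class weights surface candidate mood triggers, can be sketched on toy data (the corpus below is invented; the real micro-journal dataset is described in the paper):

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Invented stand-in corpus; the real dataset has >700,000 micro-journals.
journals = ["long walk in the sun with friends",
            "deadline stress at work all night",
            "coffee with an old friend downtown",
            "no sleep again before the exam",
            "finished the big report early today"]
moods = ["happy", "tired", "happy", "tired", "productive"]

vec = TfidfVectorizer(ngram_range=(1, 3))          # words and trigrams
X = vec.fit_transform(journals)
clf = LogisticRegression(max_iter=1000).fit(X, moods)

# highest-weight features per mood ~ candidate mood triggers
terms = np.array(vec.get_feature_names_out())
for k, mood in enumerate(clf.classes_):
    print(mood, terms[np.argsort(clf.coef_[k])[-3:]])
```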

Updated: 2020-01-09
• Computing (IF 2.063) Pub Date : 2020-01-09
Samar Haytamy, Fatma Omara

The service composition problem in Cloud computing is formulated as a multiple-criteria decision-making problem. Due to the extensive search space, Cloud service composition is addressed as an NP-hard problem. In addition, it is long-term and economically driven, so building an accurate economic model for service composition is of great interest and importance to the Cloud consumer. A deep learning based service composition (DLSC) framework is proposed in this paper. The proposed DLSC framework is an amalgamation of a deep learning long short-term memory (LSTM) network and the particle swarm optimization (PSO) algorithm. The LSTM network is applied to accurately predict the provisioned Cloud QoS values, and the output of the LSTM network is fed to the PSO algorithm to select the best Cloud providers to contract with for composing the needed services, minimizing the consumer cost function. The proposed DLSC framework has been implemented using a real QoS dataset. The comparative results show that the proposed framework outperforms existing models with respect to predictive accuracy and composition accuracy.

Updated: 2020-01-09
• Computing (IF 2.063) Pub Date : 2020-01-08

The overhead of data transfer to the GPU poses a bottleneck for the performance of CUDA programs. Accurate prediction of data transfer time is quite effective in improving the performance of GPU analytical modeling, the prediction accuracy of kernel performance, and the composition of the CPU with the GPU for solving computational problems. For estimating the data transfer time between the CPU and the GPU, the current study employs three machine learning-based models and a new analytical model called the $$\lambda$$-Model. These models are run on four GPUs from different NVIDIA architectures and their performance is compared. The practical results show that the $$\lambda$$-Model can predict the transfer of large-sized data with a maximum error of 1.643%, which is better than the machine learning methods. For the transfer of small-sized data, the machine learning-based methods provide better performance, predicting data transfer time with a maximum error of 4.52%. Consequently, the current study recommends a hybrid model: the $$\lambda$$-Model for large-sized data and machine learning tools for small-sized data.
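The $$\lambda$$-Model itself is not spelled out in the abstract; as a point of reference, host-to-GPU copy time is often approximated by the classic linear latency/bandwidth model $$T(s) = a + s/B$$, which can be fitted from a handful of measurements (the numbers below are invented):

```python
import numpy as np

# Fit T(s) = a + s/B from (size, time) measurements; numbers invented.
sizes = np.array([1, 4, 16, 64, 256]) * 1e6              # bytes
times = np.array([0.31, 0.55, 1.52, 5.40, 21.3]) * 1e-3  # seconds

slope, intercept = np.polyfit(sizes, times, 1)
print(f"latency ~ {intercept * 1e3:.2f} ms, bandwidth ~ {1 / slope / 1e9:.1f} GB/s")

def predict_transfer_time(size_bytes):
    """Estimated host-to-device copy time under the linear model."""
    return intercept + slope * size_bytes
```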

Updated: 2020-01-09
• Computing (IF 2.063) Pub Date : 2020-01-08
Emir Ugljanin, Ejub Kajan, Zakaria Maamar, Muhammad Asim, Vanilson Burégio

Abstract This paper presents an approach for allowing the transparent co-existence of citizens and IoT-compliant things in smart cities. Considering the particularities of each, the approach embraces two concepts known as social machines and data artifacts. On the one hand, social machines act as wrappers over applications (e.g., social media) that allow citizens and things to have an active role in their cities, for example by reporting events of common interest to the population. On the other hand, data artifacts abstract citizens' and things' contributions in terms of who has done what, when, where, and why. For smart cities to be successful, the approach relies on the willingness and engagement of both citizens and things: smart-city initiatives are embraced, not imposed. A case study, along with a testbed that uses a real dataset about car-traffic accidents in a Brazilian state, demonstrates the technical feasibility and scalability of the approach. The evaluation assesses the time needed to drill into the different generated data artifacts before producing useful details for decision makers.

Updated: 2020-01-08
• Computing (IF 2.063) Pub Date : 2020-01-04
Xuewen Xia, Yichao Tang, Bo Wei, Yinglong Zhang, Ling Gui, Xiong Li

To satisfy the distinct requirements of different evolutionary stages, a dynamic multi-swarm global particle swarm optimization (DMS-GPSO) is proposed in this paper. In DMS-GPSO, the entire evolutionary process is segmented into an initial stage and a later stage. In the initial stage, the population is divided into a global sub-swarm and multiple dynamic multi-swarm (DMS) sub-swarms. During the evolutionary process, the global sub-swarm focuses on exploitation under the guidance of the optimal particle in the entire population, while the DMS sub-swarms focus more on exploration under the guidance of each neighborhood's best-so-far position. Moreover, a store operator and a reset operator applied in the global sub-swarm are used to save computational resources and increase population diversity, respectively. In the later stage, elite particles stored in an archive are combined with the DMS sub-swarms into a single population to search for optimal solutions, with the intention of enhancing exploitation ability. The effect of the newly introduced strategies is verified by extensive experiments. Moreover, comparison results between DMS-GPSO and 9 peer algorithms on the CEC2013 and CEC2017 test suites demonstrate that DMS-GPSO can effectively avoid premature convergence when solving multimodal problems and yields more favorable performance on complex problems.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2020-01-02
Sridhar Chimalakonda, Kesav V. Nori

Abstract Rapid advances in the education domain demand the design and customization of educational technologies for a large scale and variety of evolving requirements. Here, scale is the number of systems to be developed, and variety stems from a diversified range of instructional designs, with varied goals, processes, content, teaching styles and learning styles, as well as from eLearning Systems for 22 Indian languages and their variants. In this paper, we present a family of software product lines as an approach to address the challenge of modeling a family of instructional designs as well as a family of eLearning Systems, and demonstrate it for the case of adult literacy in India (287 million learners). We present a multi-level product line that connects product lines at multiple levels of granularity in the education domain. We then detail two concrete product lines (http://rice.iiit.ac.in): one that generates instructional design editors, and another that generates a family of eLearning Systems based on flexible instructional designs. Finally, we demonstrate our approach by generating eLearning Systems for the Hindi and Telugu languages, which led to significant cost savings of 29 person-months for 9 eLearning Systems.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-12-23
Yuming Li, Pin Ni, Victor Chang

The role of the stock market in the overall financial market is indispensable. How to acquire practical trading signals during the transaction process to maximize profit is a problem that has been studied for a long time. This paper puts forward a deep reinforcement learning approach to stock trading decisions and stock price prediction; the reliability and availability of the model are demonstrated with experimental data, and the model is compared with a traditional model to show its advantages. From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the feasibility of deep reinforcement learning in financial markets and the credibility and advantages of strategic decision-making.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-12-09
David Castells-Graells, Christopher Salahub, Evangelos Pournaras

Abstract Bike usage in Smart Cities is paramount for sustainable urban development: cycling promotes healthier lifestyles, lowers energy consumption, lowers carbon emissions, and reduces urban traffic. However, the expansion and increased use of bike infrastructure has been accompanied by a glut of bike accidents, a trend jeopardizing the urban bike movement. This paper leverages data from a diverse spectrum of sources to characterise geolocated bike accident severity and, ultimately, study cycling risk and discomfort. Kernel density estimation generates a continuous, empirical, spatial risk estimate which is mapped in a case study of Zürich city. The roles of weather, time, accident type, and severity are illustrated. A predominance of self-caused accidents motivates an open-source software artifact for personalized route recommendations. This software is used to collect open baseline route data that are compared with alternative routes minimizing risk and discomfort. These contributions have the potential to provide invaluable infrastructure improvement insights to urban planners, and may also improve the awareness of risk in the urban environment among experienced and novice cyclists alike.
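The kernel-density step can be sketched with scikit-learn: fit a KDE on geolocated accident points (in radians, with the haversine metric) and evaluate it on a grid to obtain a relative risk surface. The coordinates and bandwidth below are invented stand-ins for the Zürich data.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(1)
# invented stand-in for geolocated accident records near Zurich (lat, lon)
accidents = rng.normal(loc=[47.3769, 8.5417], scale=0.01, size=(300, 2))

# haversine KDE works on [lat, lon] in radians; bandwidth ~1 km (assumed)
kde = KernelDensity(kernel='gaussian', metric='haversine',
                    bandwidth=1.0 / 6371.0).fit(np.radians(accidents))

lat, lon = np.meshgrid(np.linspace(47.35, 47.41, 50), np.linspace(8.50, 8.58, 50))
grid = np.radians(np.column_stack([lat.ravel(), lon.ravel()]))
risk = np.exp(kde.score_samples(grid)).reshape(lat.shape)  # relative risk surface
print(risk.max() / risk.mean())  # hot spots stand out against the average
```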

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-01-24
Júlio Mendonça, Ermeson Andrade, Ricardo Lima

Abstract Energy consumption, execution time, and availability are common terms in discussions on application development for mobile devices. Mobile applications executing in a mobile cloud computing (MCC) environment must consider several issues, such as Internet connection problems and CPU performance. Misconceptions during the design phase can have a significant impact on costs and time-to-market, or even make the application development unfeasible. Anticipating the best configuration for each type of application is a challenge that many developers are not prepared to tackle. In this work, we propose models to rapidly estimate the execution time, availability, and energy consumption of mobile applications executing in an MCC environment. We defined a methodology to create and validate Deterministic and Stochastic Petri Net (DSPN) models to evaluate these three critical metrics. The DSPN results were compared with results obtained through experiments performed in a testbed environment. We analyzed an image processing application with respect to connection type (WLAN, WiFi, and 3G), server type (MCC or cloudlet), and the performance of its functionalities. Our numerical analyses indicate, for instance, that the use of a cloudlet significantly improves performance and energy efficiency. Moreover, the baseline scenario took us one month to implement, while modeling and evaluating the three scenarios required less than one day. In this way, our DSPN models represent a powerful tool for mobile developers to plan efficient and cost-effective mobile applications: they allow execution time, availability, and energy consumption to be assessed rapidly to improve the quality of mobile applications.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-03-29
Dong-Oh Kim, Hong-Yeon Kim, Young-Kyun Kim, Jeong-Joon Kim

Replication has been widely used to ensure data availability in distributed file systems. In recent years, erasure coding (EC) has been adopted to overcome Replication's poor space efficiency. However, EC has various performance-degrading factors, such as parity calculation and degraded input/output. In particular, the recovery performance of EC degrades because of various factors as distributed file systems become large. Nonetheless, few studies have been conducted to improve recovery performance. Thus, this paper proposes an efficient parallel recovery technique for an EC-based distributed file system. We describe a contention avoidance method, a chunk allocation method, and an asynchronous recovery method to improve parallel recovery performance. The contention avoidance method minimizes contention for resources, while the chunk allocation and asynchronous recovery methods increase the efficiency of the parallel recovery. Finally, our performance evaluation verifies that when the proposed parallel recovery technique is applied to actual distributed file systems, recovery performance improves by 263% compared to existing methods.
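As a toy illustration of what EC recovery rebuilds per stripe, the snippet below uses single-parity XOR, far simpler than production erasure codes, to reconstruct one lost chunk from the survivors; the proposed technique is concerned with scheduling many such reconstructions in parallel.

```python
import numpy as np

# k=3 data chunks protected by one XOR parity chunk (all equal length)
chunks = [np.frombuffer(b"data-chunk-%d" % i, dtype=np.uint8) for i in range(3)]
parity = chunks[0] ^ chunks[1] ^ chunks[2]

# pretend chunk 1 failed: rebuild it from the surviving chunks + parity
rebuilt = parity ^ chunks[0] ^ chunks[2]
assert bytes(rebuilt) == bytes(chunks[1])
print("lost chunk recovered")
```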

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-04-12
Luise Pufahl, Mathias Weske

Organizations strive for efficiency in their business processes through process improvement and automation. Business process management (BPM) supports these efforts by capturing business processes in process models that serve as blueprints for a number of process instances. In BPM, process instances are typically considered to run independently of each other. However, batch processing, the collective execution of several instances at specific process activities, is a common phenomenon in operational processes to reduce cost or time. Currently, batch processing is organized manually or hard-coded in software. To allow stakeholders to explicitly represent their batch configurations in process models and have them executed automatically, this paper provides a concept for batch activities and describes the corresponding execution semantics. The batch activity concept is evaluated in a two-step approach: a prototypical implementation in an existing BPM system proves its feasibility, and batch activities are additionally applied to different use cases in a simulated environment. Applying the concept yields cost savings when a suitable batch configuration is selected. The batch activity concept contributes to practice by allowing the specification of batch work in process models and its automatic execution, and to research by extending existing process modeling concepts.
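A minimal sketch of such batch-activity execution semantics: instances queue at the activity and are released for collective execution once a batch size or timeout is reached (both configuration values are invented, not the paper's).

```python
import time

class BatchActivity:
    """Process instances queue at the activity and are released together
    when the batch is full or a timeout expires (values are assumptions)."""
    def __init__(self, max_size=3, timeout_s=5.0):
        self.max_size, self.timeout_s = max_size, timeout_s
        self.queue, self.opened_at = [], None

    def arrive(self, instance):
        if not self.queue:
            self.opened_at = time.monotonic()   # first arrival opens the batch
        self.queue.append(instance)
        if (len(self.queue) >= self.max_size
                or time.monotonic() - self.opened_at >= self.timeout_s):
            batch, self.queue = self.queue, []
            return batch                        # execute these collectively
        return None

ba = BatchActivity()
for i in range(1, 5):
    done = ba.arrive(f"instance-{i}")
    if done:
        print("executing batch:", done)  # -> ['instance-1', ..., 'instance-3']
```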

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-02-27
Jun Gao, Yi Lu Murphey, Honghui Zhu

Abstract Side-swipe accidents occur primarily when drivers attempt an improper lane change, drift out of their lane, or when the vehicle loses lateral traction. In this paper, a fusion approach is introduced that utilizes data of multiple differing modalities, such as video, GPS, wheel odometry and potentially IMU data collected from a data logging device (DL1 MK3), to detect a driver's lane-changing behavior using a novel dimensionality reduction model, the collaborative representation optimized projection classifier (CROPC). The criterion of CROPC is to simultaneously maximize the collaborative-representation-based between-class scatter and minimize the collaborative-representation-based within-class scatter in the transformed space. For lane change detection, both feature-level and decision-level fusion are considered. In feature-level fusion, features generated from the multiple modalities are merged before classification, while in decision-level fusion, an improved Dempster–Shafer theory based on correlation coefficients, DST-CC, is presented to combine the classification outcomes from two classifiers, each corresponding to one kind of data. The results indicate that the introduced fusion approach using CROPC performs significantly better in terms of detection accuracy compared to other state-of-the-art classifiers.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-03-27
Andreas Kosmatopoulos, Anastasios Gounaris, Kostas Tsichlas

Abstract Over the past few years, there has been a rapid increase of data originating from evolving networks such as social networks, sensor networks and others. A major challenge that arises when handling such networks and their respective graphs is the ability to issue a historical query on their data, that is, a query that is concerned with the state of the graph at previous time instances. While there has been a number of works that index the historical data in a time-centric manner (i.e. according to the time instance an update event occurs), in this work, we focus on the less-explored vertex-centric storage approach (i.e. according to the entity in which an update event occurs). We demonstrate that the design choices for a vertex-centric model are not trivial, by proposing two different modelling and storage models that leverage NoSQL technology and investigating their tradeoffs. More specifically, we experimentally evaluate the two models and show that under certain cases, their relative performance can differ by several times. Finally, we provide evidence that simple baseline and non-NoSQL solutions are slower by up to an order of magnitude.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2018-12-13
Sankar Mukherjee, Daya Sagar Gupta, G. P. Biswas

With the rapid growth of internet technologies, Vehicular Ad hoc Networks (VANETs) have been identified as a crucial primitive for vehicular communication, in which moving vehicles are treated as nodes to form a mobile network. To improve the efficiency and traffic security of the communication, a VANET can wirelessly circulate traffic information and status updates to the participating vehicles (nodes). Before deploying a VANET, a security and privacy mechanism must be implemented to assure secure communication. For this reason, a number of conditional privacy-preserving authentication schemes have been proposed in the literature to guarantee mutual authentication and privacy protection. However, most of these schemes rely on Diffie–Hellman (DH) problems to secure the communication, and such DH-type problems can be solved in polynomial time in the presence of new modern technologies like quantum computers. Therefore, to remove these difficulties, we attempt a non-DH-type conditional privacy-preserving authentication scheme that can resist quantum computers. In this paper, we develop the first lattice-based conditional privacy-preserving authentication (LB-CPPA) protocol for VANETs. A random oracle model is used to analyze the security of the proposed protocol, whose security is based on the complexity of lattice problems. Through security analysis, we show that our proposal supports message integrity and authentication as well as privacy preservation at the same time. A security comparison supporting our claim is also given. Further, we analyze the performance of the proposed scheme and compare it with DH-type schemes.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-05-02
Enver Ever, Purav Shah, Leonardo Mostarda, Fredrick Omondi, Orhan Gemikonakli

Wireless sensor networks (WSNs) form a large part of the ecosystem of the Internet of Things (IoT); hence, they have numerous application domains with varying performance and availability requirements. Limited resources, including processing capability, queue capacity, and available energy, in addition to frequent node and link failures, degrade the performance and availability of these networks. In an attempt to utilise the limited resources efficiently and to maintain a reliable network with efficient data transmission, it is common to select a clustering approach, where a cluster head is selected among the diverse IoT devices. This study presents a stochastic performance and energy evaluation model for WSNs subject to both node and link failures. The model considers an integrated performance and availability approach. Various duty cycling schemes within the medium access control of WSNs are also considered, to incorporate the impact of sleeping/idle states through analytical modeling. The results obtained with the proposed analytical models show the effects of factors such as failures, various queue capacities and system scalability. The analytical results are in very good agreement with simulation results, and show that the proposed models are useful for identifying thresholds between WSN system characteristics.

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-02-18
Georgios Sakellariou, Anastasios Gounaris

Abstract The significance of data analytics has been acknowledged in many scientific and business domains. However, the required processing power and memory capacity are a prohibiting factor for performing data analytics on proprietary platforms. An obvious solution is to outsource data analytics to cloud storage and cloud computing providers, but this raises privacy and security issues, given that the data can be valuable and/or personal. The aim of this paper is the development of a server-side k-means algorithm over encrypted data using homomorphic encryption, in order to overcome both the data owner's lack of resources and the security concerns. Current solutions based on homomorphic encryption impose a heavy load on the side of the data owner; this limitation is addressed in this work. More specifically, we present a framework for the implementation of a homomorphic version of k-means, discuss the capabilities of the current state-of-the-art homomorphic encryption schemes, and propose a novel approach to server-side computation of k-means under a new adversary model tailored to modern settings. We instantiate our framework in two different versions in terms of operation assignment, each coming in three flavors of operation implementation. All alternatives are evaluated thoroughly using both real experiments and analytic cost models.
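
For reference, one plaintext k-means iteration looks as follows; a server-side homomorphic variant must reproduce these distance and mean computations over ciphertexts, which is what makes the choice of encryption scheme and operation assignment critical. The sketch is a plain NumPy baseline, not the paper's encrypted implementation.

    import numpy as np

    def kmeans_step(X, centroids):
        """One plaintext k-means iteration: assign points to the
        nearest centroid, then recompute centroids as cluster means."""
        d2 = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        new_centroids = np.array(
            [X[labels == j].mean(axis=0) if np.any(labels == j)
             else centroids[j]            # keep old centroid if cluster empties
             for j in range(len(centroids))])
        return labels, new_centroids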

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-05-09
Jun Jiang, Liangcai Zeng, Bin Chen, Yang Lu, Wei Xiong

Abstract Traditional calibration paradigms fail to give reliable and accurate results with low-quality 2D planar calibration plates. In this paper, an active method employing an LCD panel for camera calibration is proposed. The method automatically generates a sequence of virtual patterns in different views through pre-defined transforms, without manual manipulation or the help of other equipment to move the patterns. The projections of the virtual patterns are then captured by a camera. The homography between the projective patterns in virtual world coordinates and their images is calculated directly to obtain the camera parameters. Experimental results show that the calibration error is 0.018 pixels in terms of mean re-projection error using 18 virtual patterns, which is significantly less than state-of-the-art methods. The proposed scheme makes camera calibration flexible and easy to use.
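
The homography computation at the core of such calibration can be sketched with the standard direct linear transform (DLT); the function below is a generic textbook implementation, not the paper's exact formulation, and omits the subsequent recovery of intrinsic parameters.

    import numpy as np

    def homography_dlt(src, dst):
        """Estimate H mapping src -> dst (n >= 4 point pairs) by the
        direct linear transform: stack the linear constraints and take
        the SVD null vector."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
        H = Vt[-1].reshape(3, 3)
        return H / H[2, 2]   # normalize the projective scale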

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-30
Jyothi Kunchala, Jian Yu, Sira Yongchareon, Chengfei Liu

The artifact-centric approach to business process modeling has received considerable attention for elevating data logic to the same level as process flow logic. With the emergence of this modeling paradigm, several recent works have focused on synthesizing the lifecycles of key business entities, called artifacts, from standalone activity-centric processes. However, synthesizing artifact lifecycles from inter-organizational business processes (IOBPs) is challenging, as artifacts and states are shared among two or more collaborating processes. Unlike the standalone case, the synthesis of artifact lifecycles from an IOBP requires the process interactions to be captured by preserving the dependencies between the involved artifacts and states in the resulting lifecycles. In this paper, we therefore propose an automated approach that merges the collaborating processes of an IOBP in order to support the synthesis of artifact lifecycles. The approach comprises algorithms that combine the nodes of collaborating processes into an integrated process from which the artifact lifecycles pertinent to the IOBP can be synthesized, as the simplified sketch below suggests. We demonstrate the approach using an e-business process scenario, and its validity is established through theorems and a prototype implementation.
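
A heavily simplified illustration of merging collaborating processes is to unite their node and edge sets so that shared artifact states appear only once; the representation below is hypothetical and ignores the dependency-preservation logic that the paper's algorithms handle.

    def merge_processes(p1, p2):
        """Naive merge of two process graphs given as (nodes, edges)
        pairs of sets; shared nodes and edges appear once."""
        nodes = p1[0] | p2[0]
        edges = p1[1] | p2[1]
        return nodes, edges

    # e.g. two processes sharing the state "OrderPlaced":
    seller = ({"Start", "OrderPlaced"}, {("Start", "OrderPlaced")})
    shipper = ({"OrderPlaced", "Shipped"}, {("OrderPlaced", "Shipped")})
    integrated = merge_processes(seller, shipper)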

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-27
Yuan-Ko Huang

Currently, many processing techniques for location-based queries provide information about a single type of spatial object, based on its spatial closeness to the query object. In real-life applications, however, the user may be interested in obtaining information about different types of objects in terms of their quality, cost, and neighboring relationship. We term a set of objects of different types that have better quality and are closer to each other a Neighboring Skyline set (NS set). Three new types of location-based queries, the Distance-based neighboring skyline query (Dist-NS query), the Cost-based neighboring skyline query (Cost-NS query), and the Budget-based neighboring skyline query (BGT-NS query), are presented to determine the NS sets according to the user's specific requirements. An R-tree-based index, the $$R^{a,c}$$-tree, is first designed to manage each type of object together with its location, attributes, and cost. Then, a simultaneous traversal of the $$R^{a,c}$$-trees built on the different object types is employed with several pruning criteria to discard non-qualifying object sets as early as possible, so as to improve query performance. Extensive experiments on synthetic datasets demonstrate the efficiency and effectiveness of the proposed algorithms.
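
The dominance test underlying any skyline computation can be sketched as follows (smaller values taken as better in every attribute); this naive O(n²) filter illustrates the pruning idea only, whereas the paper prunes during a simultaneous $$R^{a,c}$$-tree traversal.

    def skyline(points):
        """Keep the points not dominated in every attribute.
        A point q dominates p if q is <= p everywhere and differs."""
        def dominates(a, b):
            return all(x <= y for x, y in zip(a, b)) and a != b
        return [p for p in points
                if not any(dominates(q, p) for q in points)]

    # e.g. (distance, cost) pairs: (3, 3) is dominated by (2, 2)
    print(skyline([(1, 5), (2, 2), (4, 1), (3, 3)]))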

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-25
Jinbao Xie, Yongjin Hou, Yujing Wang, Qingyan Wang, Baiwei Li, Shiwei Lv, Yury I. Vorotnitsky

Abstract Owing to the uneven distribution of key features in Chinese texts, key features play different roles in text recognition in Chinese text classification tasks. We propose a feature-enhanced fusion model for Chinese text classification based on an attention mechanism, combining a long short-term memory (LSTM) network, a convolutional neural network (CNN), and a feature-difference enhancement attention algorithm. In preprocessing, the Chinese text is digitized into vector form carrying semantic context information and fed into the embedding layer to train and test the neural network. The feature-enhanced fusion model is implemented by double-layer LSTM and CNN modules that fuse the text features extracted under the attention mechanism before classification. The feature-difference enhancement attention algorithm not only assigns more weight to important text features but also strengthens the differences between them and other text features, which further improves the effect of important features on Chinese text recognition. Both models classify through the softmax function. Text classification experiments are conducted on a Chinese text corpus. The experimental results show that, compared with the contrast models, the proposed algorithm significantly improves the recognition of Chinese text features.
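
A minimal stand-in for attention-based feature weighting is shown below: time-step features are pooled with softmax weights given by their similarity to a query vector. This is a generic sketch, not the paper's feature-difference enhancement algorithm.

    import numpy as np

    def attention_pool(features, query):
        """Pool a (T, d) matrix of time-step features into one
        d-vector, weighting each step by softmax similarity to
        a query vector of size d."""
        scores = features @ query
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        return weights @ features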

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-21
Faris A. Almalki, Marios C. Angelides

Abstract Having reliable telecommunication systems in the immediate aftermath of a catastrophic event makes a huge difference in the combined effort by local authorities, local fire and police departments, and rescue teams to save lives. This paper proposes a physical model that links base stations that are still operational with aerial platforms, and then uses a machine learning framework to evolve a ground-to-air propagation model for the resulting ad hoc network. Such a physical model is quick and easy to deploy, and the underlying air-to-ground (ATG) propagation models are both resilient and scalable; they may use a wide range of link budget, grade of service (GoS), and quality of service (QoS) parameters to optimise their performance and, in turn, the effectiveness of the physical model. The prediction results of a simulated deployment of the physical model and the evolved propagation model in an ad hoc network offer much promise for restoring communication links during emergency relief operations.
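
A standard baseline in any such link-budget analysis is free-space path loss, from which ATG models add excess loss for line-of-sight and non-line-of-sight conditions; a minimal helper (distance in km, frequency in MHz) might look like this.

    import math

    def fspl_db(distance_km, freq_mhz):
        """Free-space path loss in dB for the given distance and
        carrier frequency (standard Friis-derived formula)."""
        return (20 * math.log10(distance_km)
                + 20 * math.log10(freq_mhz) + 32.44)

    # e.g. a platform 5 km away at 2400 MHz:
    print(round(fspl_db(5, 2400), 1))   # ~114.0 dB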

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-18
Kostas Kolomvatsos, Christos Anagnostopoulos

Abstract Data management at the edge of the network can increase the performance of applications, as the processing is realized close to end users, limiting the latency observed in the provision of responses. Typical data processing involves the execution of queries/tasks defined by users or applications that ask for responses in the form of analytics. Query/task execution can be realized at the edge nodes, which undertake the responsibility of delivering the desired analytics to the interested users or applications. In this paper, we deal with the problem of allocating queries to a number of edge nodes. The aim is to reduce latency further by allocating queries to nodes that exhibit low load and high processing speed and can therefore respond in minimal time. Before any allocation, we propose a method for estimating the computational burden that a query/task will add to a node, and afterwards we proceed with the final assignment. The allocation is concluded with the assistance of an ensemble similarity scheme, responsible for delivering the complexity class of each query/task, and a probabilistic decision-making model. The proposed scheme matches the characteristics of the incoming queries and the edge nodes, aiming at the optimal allocation. We discuss our mechanism and, through a large set of simulations adopting benchmarking queries, reveal the potential of the proposed model, supported by numerical results.
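
As a hypothetical illustration of the allocation idea, the rule below scores each node by an estimated response time (queued load plus the new query's estimated cost, scaled by node speed) and picks the minimum; the field names and the scoring rule are assumptions, not the paper's probabilistic model.

    def pick_node(query_cost, nodes):
        """Choose the edge node minimizing estimated response time."""
        return min(nodes,
                   key=lambda n: (n["load"] + query_cost) / n["speed"])

    node = pick_node(2.5, [{"id": 1, "load": 4.0, "speed": 2.0},
                           {"id": 2, "load": 1.0, "speed": 1.5}])
    print(node["id"])   # node 2: (1.0 + 2.5) / 1.5 is the smaller estimate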

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-12
Wentao Liu, Weipeng Jing, Yang Li

Consumers are increasingly influenced by product reviews when purchasing goods or services. At the same time, deceptive reviews often mislead users, and manually identifying them among massive numbers of reviews is inefficient and inaccurate. Automatically identifying deceptive reviews has therefore become a research trend. Most existing methods are less effective because they lack a deep understanding of the reviews. We propose a neural network method with bidirectional long short-term memory (BiLSTM) and feature combination to learn the representation of deceptive reviews. We conduct a large number of experiments that demonstrate the effectiveness of the proposed method. Specifically, in the mixed-domain detection experiment, the results show that our model is effective in comparison with other neural network-based methods: BiLSTM gives more than a 3% improvement in F1 score over the most advanced neural network method. Since feature selection plays an important role in this task, we combine features to improve performance further, reaching an F1 score of 87.6%, which outperforms the state of the art. Moreover, in the cross-domain detection experiment, our method achieves an F1 score of 82.4%, about 6% higher than the state-of-the-art method on the restaurant domain, and it is also robust on the doctor domain.
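
For reference, the F1 score reported in these experiments is the harmonic mean of precision and recall; a minimal computation from raw counts:

    def f1_score(tp, fp, fn):
        """F1 from true positives, false positives, false negatives."""
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)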

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 2019-11-12
Shirui Wang, Wenan Zhou, Chao Jiang

Word embeddings are the representational basis for downstream natural language processing tasks; they capture lexical semantics in numerical form so as to handle the abstract semantic concept of words. Recently, word embedding approaches, represented by deep learning, have attracted extensive attention and are widely used in many tasks, such as text classification, knowledge mining, question answering, smart Internet of Things systems, and so on. These neural network-based models rest on the distributional hypothesis, whereby the semantic association between words can be efficiently calculated in a low-dimensional space. However, the semantics expressed by most models are constrained by the context distribution of each word in the corpus, while logic and common knowledge are not well utilized. How to use massive multi-source data to better represent natural language and world knowledge therefore still needs to be explored. In this paper, we review recent advances in neural network-based word embeddings together with their technical features, summarize the key challenges and existing solutions, and give an outlook on future research and applications.
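
The "semantic association calculated in a low-dimensional space" typically means cosine similarity between word vectors, as in this minimal sketch:

    import numpy as np

    def cosine(u, v):
        """Cosine similarity between two word vectors: 1 for parallel
        (similar) vectors, 0 for orthogonal (unrelated) ones."""
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))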

Updated: 2020-01-04
• Computing (IF 2.063) Pub Date : 1983
F. Körner

Abstract The quadratic integer programming problem is considered. It is shown in which order the variables x1, ..., xn should be branched in the branch-and-bound process so that the number of nodes to be examined is reduced to a minimum. $$O(n^3)$$ operations are necessary to determine a favourable branching order. Numerical tests confirm the efficiency of the given algorithm.

Updated: 2020-01-01
• Computing (IF 2.063) Pub Date : 1979
Helmut Jürgensen

Abstract Several versions of an algorithm that adapts the Todd-Coxeter algorithm to semigroups are described. They enumerate a representation by transformations of an abstractly presented semigroup, the kernel of this representation being determined by a subsemigroup given by a finite set of generators. The enumeration process terminates if and only if the semigroup faithfully represented in this manner is finite.

Updated: 2020-01-01
• Computing (IF 2.063) Pub Date : 1978
W. Gerdes

Abstract We consider the initial boundary value problem for the heat equation in ℝ³ on a compact domain with a continuously curved boundary, and give a constructive existence theorem using Rothe's method, a transversal line method that discretizes the time variable. The elliptic boundary value problem arising at each time step is solved by means of an integral equation method. The approximate solutions so obtained converge to the exact solution of the original problem. The procedure is carried out for the example of a sphere, where a simple error estimate for the approximate solutions results. For two initial temperature distributions, practical computations show that the integral equation method yields good results with relatively little effort.
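
A one-dimensional sketch of Rothe's method is shown below: discretizing time first leaves one elliptic problem per step, solved here by finite differences rather than the paper's integral equation method.

    import numpy as np

    # Rothe's method for u_t = u_xx on (0, 1) with u = 0 on the boundary.
    # Implicit time steps: (I/dt - d^2/dx^2) u_new = u_old / dt.
    n, dt, dx = 49, 1e-3, 1.0 / 50
    x = np.linspace(dx, 1 - dx, n)          # interior grid points
    u = np.sin(np.pi * x)                   # initial temperature profile
    A = (np.eye(n) / dt
         + (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx ** 2)
    for _ in range(100):
        u = np.linalg.solve(A, u / dt)      # one elliptic solve per time step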

Updated: 2020-01-01
• Computing (IF 2.063) Pub Date : 1967
Eugene L. Allgower, Ronald Guenther

Abstract This paper deals with the approximation of weak solutions of non-linear elliptic equations of the type $$\sum_{i,j=1}^{n} \frac{\partial}{\partial x_i}\left(a_{ij}(x)\,\frac{\partial u}{\partial x_j}\right) + c(x)\,u = f,$$ where either f = f(x, u) or f = f(x, u, ∇u). The differential equation is replaced by difference equations, and convergence of the solutions of the difference equations to the solution of the differential equation is proven by functional-analytic means. This enables us to give a unified treatment of the convergence of solutions of elliptic difference equations to the solution of the elliptic differential equation.
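
A one-dimensional analogue of such difference schemes, with the nonlinearity f = f(x, u) handled by simple Picard iteration, can be sketched as follows; the example right-hand side is arbitrary.

    import numpy as np

    # Finite-difference scheme for u'' = f(x, u) on (0, 1), u(0) = u(1) = 0.
    n = 99
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    L = (np.eye(n, k=1) - 2 * np.eye(n) + np.eye(n, k=-1)) / h ** 2
    f = lambda x, u: np.exp(u) - 10 * np.sin(np.pi * x)  # example f(x, u)
    u = np.zeros(n)
    for _ in range(50):
        u = np.linalg.solve(L, f(x, u))   # freeze u on the right-hand side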

Updated: 2020-01-01
• Computing (IF 2.063) Pub Date : 1987
Jerry Michael Fine

Abstract Runge-Kutta-Nyström formulas applicable to the general second-order vector initial value problem, y″ = f(x, y, y′), are presented. Two families of computational methods requiring five and six evaluations of the function f per integration step are derived. The methods consist of embedded formulas of adjacent order for the solution and its first derivative, which permits a stepping strategy based on error estimates of all components of the numerical solution. From each family of methods, a member considered to have good numerical properties has been selected. Comparisons of these new methods with conventional Runge-Kutta techniques indicate that the Nyström methods studied here are competitive with some of the best Runge-Kutta methods currently in use.
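
As a baseline for comparison, the classical route is to rewrite y″ = f(x, y, y′) as a first-order system and apply standard RK4, which is what Nyström formulas improve upon by working on the second-order form directly; a minimal sketch:

    import numpy as np

    def rk4_second_order(f, x0, y0, yp0, h, steps):
        """Integrate y'' = f(x, y, y') by converting to the
        first-order system z = [y, y'] and using classical RK4."""
        z = np.array([y0, yp0], dtype=float)
        F = lambda x, z: np.array([z[1], f(x, z[0], z[1])])
        x = x0
        for _ in range(steps):
            k1 = F(x, z)
            k2 = F(x + h / 2, z + h / 2 * k1)
            k3 = F(x + h / 2, z + h / 2 * k2)
            k4 = F(x + h, z + h * k3)
            z += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            x += h
        return z   # [y, y'] at x0 + steps * h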

Updated: 2020-01-01
• Computing (IF 2.063) Pub Date : 1996
Hinrich Holm, Matthias Maischak, Ernst P. Stephan

Abstract We study the boundary element method for weakly singular and hypersingular integral equations of the first kind on screens, arising from the Dirichlet and Neumann problems for the Helmholtz equation. It is shown that the hp-version with geometrically refined meshes converges exponentially fast in both cases. We underline our theoretical results with numerical experiments for the pure h- and p-versions, the graded mesh, and the hp-version with a geometrically refined mesh.
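
A geometrically refined mesh concentrates points toward the edge singularity; the snippet below generates one, using a grading factor of 0.17, a value often quoted in the hp literature, though the paper's own choice may differ.

    sigma, levels = 0.17, 8   # grading factor and number of refinement levels
    mesh = [0.0] + [sigma ** k for k in range(levels, -1, -1)]
    # points cluster geometrically toward the singular edge at x = 0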

Updated: 2020-01-01
• Computing (IF 2.063) Pub Date : 1970
Jochen W. Schmidt, Dieter Leder

Abstract Iteration methods are given for the solution of nonlinear equations in normed spaces that use function values and first-order divided differences; the divided differences may be replaced by derivatives. The linear equations arising in each iteration step are solved only approximately, and, importantly, this does not diminish the order of convergence.
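
In the scalar case, the best-known method of this type is the secant method, where a first-order divided difference replaces the derivative; a minimal sketch:

    def secant(g, x0, x1, tol=1e-12, itmax=50):
        """Scalar secant iteration for g(x) = 0: the divided
        difference (g(x1)-g(x0))/(x1-x0) replaces g'(x1)."""
        for _ in range(itmax):
            slope = (g(x1) - g(x0)) / (x1 - x0)
            x0, x1 = x1, x1 - g(x1) / slope
            if abs(x1 - x0) < tol:
                break
        return x1

    print(secant(lambda x: x * x - 2.0, 1.0, 2.0))   # ~1.41421356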

Updated: 2020-01-01
• Computing (IF 2.063) Pub Date : 1972
Rudolf Scherer

Abstract Kastlunger and Wanner [2] recently studied Runge-Kutta methods with multiple nodes; their paper, using Taylor expansion, generalized results of Butcher [1]. In this paper we consider a Runge-Kutta formula of order four with two double nodes and show how, by inserting suitably chosen integration formulas, a clearly arranged representation of the truncation error can be derived. Moreover, the conditions on the coefficients are easily obtained from it. The resulting error bound has a particularly simple structure compared with the error bound for the usual (classical) Runge-Kutta method, and in some cases it gives a better result.

Updated: 2020-01-01
Contents have been reproduced by permission of the publishers.
