Article

A Study of Accelerometer and Gyroscope Measurements in Physical Life-Log Activities Detection Systems

1 Department of Computer Science, Air University, Islamabad 44000, Pakistan
2 Department of Human-Computer Interaction, Hanyang University, Ansan 15588, Korea
* Author to whom correspondence should be addressed.
Sensors 2020, 20(22), 6670; https://doi.org/10.3390/s20226670
Submission received: 4 October 2020 / Revised: 13 November 2020 / Accepted: 18 November 2020 / Published: 21 November 2020

Abstract:
Nowadays, wearable technology can enhance physical human life-log routines by shifting goals from merely counting steps to tackling significant healthcare challenges. Such wearable technology modules have presented opportunities to acquire important information about human activities in real-life environments. The purpose of this paper is to report on recent developments and to project future advances regarding wearable sensor systems for the sustainable monitoring and recording of human life-logs. On the basis of this survey, we propose a model that is designed to retrieve better information during physical activities in indoor and outdoor environments in order to improve the quality of life and to reduce risks. This model uses a fusion of both statistical and non-statistical features for the recognition of different activity patterns using wearable inertial sensors, i.e., triaxial accelerometers, gyroscopes and magnetometers. These features include signal magnitude, positive/negative peaks and position direction to explore signal orientation changes, position differentiation, temporal variation and optimal changes among coordinates. These features are processed by a genetic algorithm for the selection and classification of inertial signals to learn and recognize abnormal human movement. Our model was experimentally evaluated on four benchmark datasets: Intelligent Media Wearable Smart Home Activities (IM-WSHA), a self-annotated physical activities dataset; Wireless Sensor Data Mining (WISDM); the IM-SB dataset, with different sporting patterns; and the SMotion dataset, with different physical activities. Experimental results show that the proposed feature extraction strategy outperformed the others, achieving improved recognition accuracies of 81.92%, 95.37%, 90.17% and 94.58% on the IM-WSHA, WISDM, IM-SB and SMotion datasets, respectively.

1. Introduction

Recent developments in the healthcare industry help patients, especially the elderly, to avoid illness, accidents and disease [1]. Such strategies have introduced monitoring devices such as wearable, vision and marker-based sensors that secure, examine and improve human life in uncertain situations [2,3] while patients remain mobile. Wearable technology has replaced traditional diagnostics by delivering ubiquitous access to vital patient data via smartphones and wearable sensory clothing [4,5]. Wearable devices provide real-time feedback from sensor fusion and they allow for the deployment, analysis and exploitation of the acquired data. Patients, carers and health practitioners can use data gleaned from wearable inertial sensors to keep up-to-date with the health and wellbeing status of their clients. Such data are functional for healthcare industries, where they can be used to improve the living standards of humans through remote monitoring [6,7,8,9] and by providing data for further research and development. Rapid growth in the number of healthcare applications has had a profound impact on the assessment and the evaluation of the fitness models proposed so far [10,11]. This growth has been driven by the development of the Internet and wearable technologies. However, wearable technologies face challenges for life-log monitoring because they lack contextual information. In addition, some limitations, such as unstable human body movements, hardware limitations and ergonomic measurements, adversely affect the precision of devices made for human life-log monitoring and recording [12].
Technological advances in wearable sensors have resulted in increased demand in research for security, healthcare and wellbeing applications such as security and surveillance systems, mental wellness apps, personal trainer assistants, smart homes and assistive robots [13,14,15,16,17,18,19]. In security and surveillance systems, applications that detect uncertain or abnormal events use physical activity monitoring systems that prompt precautionary measures against violent acts [20,21]. In mental wellness, apps help those concerned to make choices towards healthy, comfortable and safe lifestyles and behaviors by being sensitive to emotional and physical indicators. With regard to fitness training, wearable sensor technologies provide motion tracking in order to make training efficacious and efficient [22]. Truly smart home environments provide real-time physical monitoring of children and elderly people who need support due to developing, underdeveloped or deteriorating cognitive skills, respectively. The support offered by such devices enhances the wearer’s functional independence and quality of life. Furthermore, physical activity recognition systems deployed in homes help carers and family members to supervise and respond to elderly patients in a pervasive manner [23,24].
Recently, there has been a great demand for various applications for body-worn sensors. These revolutionary developments have impacted multiple aspects of human life, especially in healthcare and daily life monitoring. Among these wearable sensors, our work mainly focuses on inertial measurement unit (IMU) sensors such as accelerometers, gyroscopes and magnetometers, which enable us to examine human life in different routines and postures in order to detect changes in location, body movement and rotational changes in three-dimensional space [25,26,27]. In addition, healthcare industries make use of these sensors to monitor physiological and physical activities [28,29]. However, these sensors can also be used to sense sudden changes in the wearer’s posture or position, like falling, and this information can be used to help prevent falling and/or to dispatch prompt assistance, especially to the elderly [30,31]. Despite the feasibility of such wearable sensors, some challenges remain, such as the continuous monitoring of the data acquired from the sensors and the volume of data on the system. Such data are difficult to handle in real-time.
This paper mainly focuses on the optimization of healthcare physical activity recognition systems that are intended to reduce difficulties in monitoring human physical routines via IMU-based wearable sensors, which measure the movements, postures and orientations of those wearing the sensors. The proposed physical activity recognition system comprises four main steps: the placement of sensors, a signal denoising process, feature selection and data classification. Initially, we placed three inertial sensors (i.e., accelerometers, gyroscopes and magnetometers) at different body locations (i.e., chest, thigh and wrist). Acquired data were filtered with a third-order median filter to eliminate impulsive types of noise; this restored the signal to close to normal motion. Then, we adapted several approaches to statistical and non-statistical features. For sustained signal data, the descriptive features that contribute most to the recognition of human physical activities under varying conditions were selected. Finally, a reweighted genetic algorithm was embedded in the model to recognize and set parameters to classify human activities from feature vectors and attain significant accuracy. To evaluate performance, we applied our proposed model to the IM-WSHA dataset, which is based on diverse patterns of physical activities, as well as to three public benchmark datasets: the WISDM, IM-SB and SMotion datasets. The major contributions of our paper are highlighted as follows:
  • The fusion of multiple features from different domains makes the proposed physical healthcare detection system robust even with noisy data; combined with the locally dependent properties preserved by the reweighted genetic algorithm, this acts as a novel methodology for improving the recognition rate over all of the human activity datasets.
  • For complex human physical activity patterns, we designed a novel genetic algorithm-based pattern matching method that provides contextual data coupled with classifying behaviors.
  • Moreover, a comprehensive analysis was carried out on three public benchmark datasets (WISDM, IM-SB and SMotion) along with a self-annotated dataset (IM-WSHA) for human physical healthcare activities; this achieved notable results compared with other state-of-the-art methods and deep learning algorithms.
The rest of the paper is organized as follows. Section 2 offers a brief overview of related work in the area of human healthcare activity analysis. Section 3 presents the proposed architecture of our human healthcare activity model. Section 4 comprises the details of one self-annotated dataset and two public benchmark datasets along with the experimental setup and results. Finally, Section 5 presents the conclusions and describes research perspectives.

2. Related Work

2.1. Features-Based Healthcare Activity Recognition Systems Using 2D/3D Cameras

Image processing techniques have been used prolifically to recognize human movement patterns from 2D/3D video and still images. Advances in multimedia tools and sensing devices have made it easier for researchers to track and analyze human positions and postures by employing techniques like foreground segmentation, silhouette extraction, etc. For instance, Liu et al. [32] analyzed human activity recognition (HAR) for healthcare using RGB-Depth cameras. They extracted both 2D and 3D movements, still postures and transition actions, and a nonlinear support vector machine (SVM) was applied to classify different human activities. In [33], Crispim et al. proposed a multi-sensor surveillance system for older patients based on video cameras in order to automatically detect life events. The proposed system was tested with nine participants, and their multi-sensor approach showed an improvement over vision-based systems. Zouba et al. [34] utilized a cognitive vision approach to detect and identify daily living activities based on 3D representations of key human postures. They modeled multiple video events in real-time scenarios. However, this approach was only evaluated experimentally on small datasets.
In [35], Wu et al. presented a hierarchical approach to recognize multi-view activities in home environments with different visual features and learning methods. Their proposed method focused on different fusion techniques such as spatio-temporal features, decision methods and feature fusion methods for multi-view activity recognition. Kim et al. [36] analyzed a depth vision-based human activity recognition system for older people's healthcare in home environments. Their model processed convolving features such as joint distance, magnitude and centroid features for feature extraction. For classification, they used a Hidden Markov Model (HMM) to recognize various human activities.

2.2. Features-Based Healthcare Activity Recognition System Exploiting Wearable Sensors

Human motion analyses have been strengthened by recent advances in electronics, especially due to the introduction of Micro Electro-Mechanical Systems (MEMS). Micro versions of electronic sensors have added comfort and adaptability to daily routine motion detection. In their efforts to design human motion instruments, researchers have employed a combination of sensors to find better solutions for analyzing skeletal movements and the quantization of human motion. MEMS sensors (accelerometers, gyroscopes and magnetometers) in particular have been playing a significant role in the recording and analysis of motion data [37]. In [38], Leonardis et al. focused on a multi-featured technique to recognize eight human activities with magnetic and inertial measurement unit (MIMU) sensors. They processed all signals from a tri-axial set of sensors: accelerometer, gyroscope and magnetometer. In addition to feature extraction, appropriate features were selected to maximize performance. Finally, state-of-the-art classifiers were applied to evaluate the benchmark performance. Zebin et al. [39] presented deep learning techniques for a human activity recognition system using body-worn inertial sensors. They presented feature learning methods for their activity recognition system, in which a convolutional neural network (CNN) architecture was applied to automate feature learning for the recognition of different activities. In [40], Margarito et al. assessed the dynamics of human motion by placing an accelerometer sensor on the user's wrist. They captured motion data representing eight healthcare patterns which were compared against existing pattern templates using Euclidean distance and dynamic time warping (DTW). In addition, statistical learning approaches were applied for the segregation of quantized motion data.
Xu et al. [41] proposed a novel method of activity recognition using wearable sensors by combining three IMUs and one heart rate sensor. These sensor streams were subjected to multi-feature extraction from the Hilbert–Huang transform to enhance human activity recognition. In [42], Jalal et al. dealt with accelerometer signals statistically and applied a linear support vector machine to classify quantized human motion data produced from accelerometer sensors. Nweke et al. [43] conducted research to analyze human activity recognition and health monitoring using multi-sensor fusion, namely accelerometers and gyroscopes, on two public benchmark datasets. The authors critically analyzed multiple techniques for data fusion, feature selection and classification for human physical activity recognition via inertial-based sensors.

3. Materials and Methods

3.1. Overview of the Solution Framework

The proposed system recognizes the physical healthcare activities of humans via three inertial measurement units: accelerometer, gyroscope and magnetometer sensors. The proposed model's architecture is illustrated in Figure 1. The process is divided into four phases: signal preprocessing, feature extraction, feature selection evaluation and a genetic algorithm-based classifier. Initially, the sensor data are rectified by applying a filtering scheme to deal with the noisy peaks resulting from abrupt movements. These signals are further processed to compensate for delays introduced as a result of signal filtering. Secondly, the smoothed signal values are arranged into time-blocks of consistent duration for the extraction of signal features. In feature extraction, a set of descriptive statistical and non-statistical feature vectors are extracted such that the signal data are represented by the minimal possible information. Moreover, the extracted features are normalized with the help of their extremes to prevent complex values from occurring in the later stages of feature evaluation and selection. The feature extraction phase is then followed by feature selection. Thirdly, feature selection is a compression technique applied to feature vectors such that the contributing features are maintained for the later stages of data evaluation. The contributive nature of a feature is defined by a threshold that is calculated as a mean of the previous evaluations. Lastly, the processed signal data and selected features are supplied to the classifier algorithm, which assesses the signal stream and applies only the compressed feature set for training and testing the model.

3.2. Signal Pre-Processing

For any system involving signal analysis, pre-processing is the key to maintaining the shape of the data. In the proposed model, accelerometer signals are processed by a moving filter to enhance the signal and smooth extreme points. Moreover, these smoothed signal streams are normalized to remove negative values from the system and thereby avoid the occurrence of non-real values in the feature extraction phases. The processed and noisy signal components of the median and moving average filters of the accelerometer can be seen in Figure 2.
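For illustration, a minimal denoising sketch in Python follows; the kernel size of 3 stands in for the third-order median filter used in the pipeline, while the moving-average window length is an assumption made for the example rather than a value reported in the paper.

import numpy as np
from scipy.signal import medfilt

def denoise(signal, ma_window=5):
    """Remove impulsive spikes, then smooth residual jitter."""
    despiked = medfilt(signal, kernel_size=3)              # third-order median filter
    kernel = np.ones(ma_window) / ma_window
    smoothed = np.convolve(despiked, kernel, mode="same")  # moving-average filter
    return smoothed - smoothed.min()                       # shift to a non-negative range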

3.3. Feature Extraction

Feature extraction is a prominent phase in all machine learning systems, where the emphasis lies on representing information with a meaningful set of attributes that covers the whole scenario. In our proposed model, statistical features are used to facilitate the analysis of accelerometer signals. The denoised signals are taken as a stream and subjected to feature extraction. Initially, the definitive parameters are considered, namely window selection and the signal overlap region [44]. Furthermore, signal attributes are extracted from within the bounding region with sufficient contextual information. Algorithm 1 explains the multi-fused feature extraction model. Figure 3 shows the vectorization of features with statistical features.
In the first algorithm, we explained how the inertial raw signal (accelerometer, gyroscope and magnetometer) data are acquired. Then, we applied the third-order median filter technique to remove noise and restore the shape of the signal to near normal motions. Then, we adapted the sliding window approach, which consists of splitting the inertial-based sensor data into batches of equal size for the analysis of human motion patterns. Finally, we acquired the framed data to extract statistical, frequency and acoustic features. Then, we combined all the features into a vector/matrix for further processing.
Algorithm 1: Multi-fused inertial signal (acc, gyro, mag) feature extraction
Input: acc = acceleration data (x, y, z), gyro = gyroscope data (x, y, z), mag = magnetometer data (x, y, z), WS = window size, SR = sampling rate (100 Hz).
Output: feature vector for physical healthcare activities (PHA).
feature_vector ← []
window_dimension ← AcquireWindow_dimension()  /* acquire window size of inertial signal */
over_lap ← Acquirelap_time()                  /* get overlapping time */

Method PHA(IMU(acc, gyro, mag))
  Multi-FusedVector ← []  /* combine inertial signal data for preprocessing */
  Filtered_Data ← MovingAverageFilter(acc, gyro, mag)
  Frame_Data ← FrameData(Filtered_Data, SR, WS)  /* acquire frame data from filtered data (sampled and windowed) */
  while exit condition not true do
    /* extract statistical, frequency and acoustic features */
    statistical_features ← ExtractStatisticalFeatures(Frame_Data)
    frequency_features ← ExtractFrequencyFeatures(Frame_Data)
    acoustic_features ← ExtractAcousticFeatures(Frame_Data)
    /* append all of the above features into one vector */
    Multi-FusedVector ← [statistical_features, frequency_features, acoustic_features]
  end while
  return Multi-FusedVector
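For concreteness, a minimal Python sketch of this windowing-and-fusion pipeline for a single inertial axis is given below; the window size, 60% overlap and the particular per-window features are illustrative stand-ins for the full statistical, frequency and acoustic extractors, and acc_x.csv is a hypothetical single-axis recording.

import numpy as np

def frame_data(signal, window_size=100, overlap=0.6):
    """Split the filtered stream into overlapping windows (Frame_Data above)."""
    step = max(1, int(window_size * (1 - overlap)))
    return [signal[i:i + window_size]
            for i in range(0, len(signal) - window_size + 1, step)]

def extract_features(window):
    """Toy stand-ins for the statistical, frequency and acoustic extractors."""
    statistical = [window.mean(), window.std(), window.min(), window.max()]
    spectrum = np.abs(np.fft.rfft(window))
    frequency = [float(spectrum.argmax()), spectrum.mean()]
    signs = np.sign(window)
    acoustic = [float(np.mean(signs[:-1] != signs[1:]))]  # zero-crossing rate
    return statistical + frequency + acoustic

signal = np.loadtxt("acc_x.csv")  # hypothetical single-axis recording
multi_fused = np.array([extract_features(w) for w in frame_data(signal)])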

3.3.1. The Signal Magnitude Feature

For the signal magnitude feature Sig(mag), we measure the magnitude of the coordinate signal at each point i within the window in order to perceive different activities as
$\mathrm{Sig}(mag) = \sqrt{x_i^2 + y_i^2 + z_i^2}$ (1)
where $x_i$ is the actual value of signal x at point i, and likewise for $y_i$ and $z_i$ within each windowed signal.
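In vectorized form, Equation (1) is a one-line computation per window; a minimal NumPy sketch:

import numpy as np

def signal_magnitude(x, y, z):
    """Per-sample Euclidean magnitude of the triaxial window (Equation (1))."""
    return np.sqrt(x**2 + y**2 + z**2)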

3.3.2. The Zero Crossing Rate Feature

The zero crossing rate (ZCR) measures how often a signal's amplitude crosses from the negative to the positive region and vice versa. A count of zero crossings gives good insight into the signal variation over time. In our inertial signal analysis, ZCR is also used to estimate the pitch of the signal, as shown in Figure 4.
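A short sketch of the count follows; centering the window on its mean before taking signs is an assumption made here, since a raw accelerometer stream carries a gravity offset that would otherwise suppress crossings.

import numpy as np

def zero_crossing_rate(window):
    """Fraction of consecutive sample pairs whose signs differ after mean-centering."""
    signs = np.sign(window - window.mean())
    return float(np.mean(signs[:-1] != signs[1:]))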

3.3.3. Peak Features

The peak signal features Sig(min_e) and Sig(max_e) are extracted from the triaxial components by measuring the actual acceleration component in order to find the minimum and maximum values in the respective sequences of signals:
$\mathrm{Sig}(min_e) = \min(e(p < Q_s))$ (2)
$\mathrm{Sig}(max_e) = \max(e(p < Q_s))$ (3)
where e represents the signal axis, i.e., x, y or z of the acceleration signal, p is the current value, and $Q_s$ provides the quartile values having a negative peak.

3.3.4. Standard Deviation Feature

In the standard deviation feature, we measure the deviation of the acceleration signals from the mean of the respective signal sequences. The value obtained as a result is given as
$\mathrm{Sig}(std) = \sqrt{\dfrac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1}}$ (4)
where $X_i$ is the value of the processed signal. In Figure 5, the varying acceleration is presented against the mean, showing the dispersion around the overall mean. Thus, the more closely the data cluster around the mean, the better the standard deviation serves as a predictor.

3.3.5. Magnitude Area Feature

The signal magnitude area, calculated according to Equation (5), is used to derive a measure of the subject's level of activity. It can distinguish between periods of acceleration and non-acceleration (static periods) thus:
$\mathrm{Area}(mag) = \sum_{i=1}^{n}|X_i| + \sum_{i=1}^{n}|Y_i| + \sum_{i=1}^{n}|Z_i|$ (5)
where $X_i$, $Y_i$ and $Z_i$ indicate the acceleration signal along the x-axis, y-axis and z-axis, respectively.

3.3.6. Mean Feature

The mean is a statistical feature and an important ingredient in many other features, providing an intuition of the signal's overall energy over the course of time. Features like standard deviation, variance and zero crossing rate all rely on the mean for their calculation. Figure 6 represents a 1D plot with a fusion of different statistical features of the ascending motion pattern using the WISDM dataset.
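The dependence of these features on the mean is visible in a short sketch; division by n − 1 reproduces the denominator of Equation (4).

import numpy as np

def mean_dependent_features(x):
    """Mean, standard deviation (Equation (4)) and variance for one window."""
    mu = x.mean()
    std = np.sqrt(np.sum((x - mu) ** 2) / (len(x) - 1))  # equivalent to x.std(ddof=1)
    return mu, std, std ** 2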

3.3.7. Spectral Entropy

Spectral entropy is used as a feature to describe the complexity of a system, which provides vital information for characterizing the spectrum of inertial signals [45]. It also helps to determine the Power Spectral Density (PSD) of inertial signals. Spectral entropy takes the normalized power distribution of the signal in the frequency domain and computes its Shannon entropy. This feature is useful for finding uncertain peaks, e.g., sudden falls that occur during normal motion. The spectral entropy of the inertial signal for the frequency band $f_1$–$f_2$ is represented thus:
$S_N(f_1, f_2) = \dfrac{-\sum_{f_i=f_1}^{f_2} P(f_i)\,\log P(f_i)}{\log N[f_1, f_2]}$ (6)
where $P(f_i)$ is the power spectral density value at frequency $f_i$ and $N[f_1, f_2]$ is the number of frequency components in the given band. Figure 7 visualizes the spectral entropy for the hair brushing motion pattern using the IM-WSHA dataset.
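A sketch of this computation is given below; using Welch's method for the PSD, the 100 Hz rate from Algorithm 1 and the chosen band edges are assumptions made for the example.

import numpy as np
from scipy.signal import welch

def spectral_entropy(window, fs=100, band=(0.5, 20.0)):
    """Normalized Shannon entropy of the PSD within a band (Equation (6))."""
    freqs, psd = welch(window, fs=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    p = psd[mask] / psd[mask].sum()  # normalized power distribution
    p = p[p > 0]                     # guard against log(0)
    return float(-np.sum(p * np.log(p)) / np.log(p.size))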

3.3.8. Hilbert–Huang Transform (HHT)

The Hilbert–Huang transform (HHT) is useful for dealing with varied signal data [46]. In this paper, we employ HHT to analyze inertial signals with variable patterns. The Hilbert–Huang transform involves the Hilbert transform and Empirical Mode Decomposition (EMD). EMD plays a vital role for our inertial data: it decomposes the data into a small, finite number of intrinsic mode functions (IMFs) [47]. These IMFs are used to extract features, which are pooled with time-domain values to analyze statistical patterns. Finally, all these features are combined into a single feature vector. Figure 8 represents the intrinsic mode functions extracted from the inertial data.
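The decomposition step can be sketched with the open-source PyEMD package (installable as EMD-signal); this is one possible implementation, not necessarily the authors' tooling, and the per-IMF descriptors and input file are illustrative assumptions.

import numpy as np
from PyEMD import EMD  # pip install EMD-signal

signal = np.loadtxt("acc_x.csv")  # hypothetical single-axis recording
imfs = EMD().emd(signal)          # rows are intrinsic mode functions (IMFs)

# Pool simple per-IMF descriptors with the time-domain feature vector
imf_features = np.array([[imf.mean(), imf.std(), np.abs(imf).max()]
                         for imf in imfs]).ravel()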

3.4. Feature Selection

Due to the high computational cost associated with signal processing, especially when it includes contextual information, the importance of feature selection cannot be overstated. The genetic algorithm (GA) is an evolutionary strategy that follows the principles of natural selection for the evolution of the next generation. The proposed algorithm uses GA to find a set of features that embraces all the significant information from the inertial signals (see Figure 9). In the assimilation of inter-signal variation(s), the process is driven by biological crossover and mutation, which bring stochasticity to the process. In addition, features are assigned random weights to help model non-linear behavior when interpreting complex signal patterns. The crossover operation is governed by the principle of producing offspring from a set of selected parents, while randomization is applied to feature vectors to fulfill the mutation operation. Mutation plays a significant role in the fast convergence of the algorithm and often leads to a reduced computational cost. Feature selection keeps the population pool filled with a mix of the fittest feature sets. Algorithm 2 presents the genetic algorithm-based reweighted feature selection method.
$\sigma(X, Y, Z) = \sum_{i=1}^{n} \sum_{j=1}^{w} \begin{pmatrix} X_{time}, X_{freq}, X_{acoustic} \\ Y_{time}, Y_{freq}, Y_{acoustic} \\ Z_{time}, Z_{freq}, Z_{acoustic} \end{pmatrix}$ (7)
Equation (7) comprises the feature extraction stage for the inertial signal data, where X(time, freq, acoustic), Y(time, freq, acoustic) and Z(time, freq, acoustic) embody signal streams. Each signal segment is employed to extract the time, frequency and acoustic features.
In the second algorithm, we explained how we acquired the multi-fused feature from a vector. The feature vector is then converted into corresponding chromosomes. Then, these multi-fused features are further processed and reweighted to extract an optimal weight. After extracting optimal weights, we need to calculate crossover chromosomes and global maxima. Finally, we obtain relevant features based on Linear Support Vector Machine (LSVM) and the random forest-based fitness function.
Algorithm 2: Genetic algorithm-based reweighted feature selection
Input: FV: multi-fused feature vectors (u1, u2, u3, …, un) /* acquire feature vector */
Output: FL: multi-fused feature list (l1, l2, …, lm) /* obtain vector of optimal features */
/* feature vectors are converted into corresponding chromosomes */
for vector in population do
  /* multi-fused feature vectors are further processed and reweighted to extract an optimal weight */
  ReweightedFeatures ← []
  while fitness not achieved or fitness not changing do
    for feature in vector do
      ReweightedFeatures(feature)
    end for
    Rechoose()
    offspring1, offspring2 ← CrossOver(vector)  /* calculate crossover chromosomes */
    mutated ← Mutation(vector)                  /* calculate global maxima */
    /* obtain relevant features on the basis of the Linear Support Vector Machine (LSVM) and random forest-based fitness function */
    Evaluationfunction ← GetFitness(vector)
  end while
  return ReweightedFeatures
end for
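A minimal Python sketch of Algorithm 2 is given below; the population size, generation count, truncation selection and real-valued weight encoding are assumptions made for the example, while the 0.05 mutation rate is taken from the text. The fitness argument is any scorer in the style of Equation (12).

import numpy as np

rng = np.random.default_rng(0)

def reweighted_ga(features, labels, fitness, pop=20, generations=50, mutation_rate=0.05):
    """Evolve one weight per feature; fitness scores a weighted feature matrix."""
    n = features.shape[1]
    population = rng.random((pop, n))  # real-valued weight chromosomes
    for _ in range(generations):
        scores = np.array([fitness(features * w, labels) for w in population])
        parents = population[np.argsort(scores)[-(pop // 2):]]  # keep the fittest half
        children = []
        while len(children) < pop - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n))                 # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            mask = rng.random(n) < mutation_rate          # mutate to keep diversity
            child[mask] = rng.random(int(mask.sum()))
            children.append(child)
        population = np.vstack([parents, np.array(children)])
    scores = np.array([fitness(features * w, labels) for w in population])
    return population[scores.argmax()]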
In Equation (8), the symbol χ represents the process of applying crossover between two parent chromosomes C and $\bar{C}$ in the feature vector. Equation (9) denotes the resulting crossed children, to which mutation is later added to avoid identical chromosomes. All chromosomes are fixed-length encodings of the features, with each bit representing a trait.
$\chi(C, \bar{C}) = \begin{pmatrix} X_{c_1} \ldots X_{c_n},\; Y_{c_1} \ldots Y_{c_n},\; Z_{c_1} \ldots Z_{c_n} \\ \bar{X}_{c_1} \ldots \bar{X}_{c_n},\; \bar{Y}_{c_1} \ldots \bar{Y}_{c_n},\; \bar{Z}_{c_1} \ldots \bar{Z}_{c_n} \end{pmatrix}$ (8)
$C' = \chi(C, \bar{C}) = X'_{c_1} \ldots X'_{c_n},\; Y'_{c_1} \ldots Y'_{c_n},\; Z'_{c_1} \ldots Z'_{c_n}$ (9)
In order to avoid replication of uniform chromosomes in the population, mutation is introduced to the crossed children of Equation (9). At the start, the mutation rate is set to 0.05 to avoid excessive randomization. In this way, mutation maintains divergence by adding another level of randomization to each new generation. In Equation (10), mutation is represented by $\mu(F')$:
$\mu(F') = \begin{cases} X_{C_1} \ldots X_{C_n},\; Y_{C_1} \ldots Y_{C_n},\; Z_{C_1} \ldots Z_{C_n} \\ Y_{C_1} \ldots \hat{Y}_{C_n},\; Z_{C_1} \ldots Z_{C_n},\; X_{C_1} \ldots \dot{X}_{C_n} \end{cases}$ (10)
Finally, we introduce the reweighted genetic algorithm [48], which assigns weights to specific features while avoiding others. In this way, we do not need to try all possible combinations of weights, which would increase computation with a conventional genetic algorithm. The weight assignment in GA strengthens the selection and classification process as an output. In Equation (11), $W_{a_1}, \ldots, W_{a_n}$ are the weights and $a_1$ is the feature depicted in the chromosomal structure:
$RGA(C') = W_{a_1} Y_{c_1} \ldots W_{a_n} \hat{Y}_{c_n},\; W_{b_1} Z_{c_1} \ldots W_{b_n} \hat{Z}_{c_n},\; W_{c_1} X_{c_1} \ldots W_{c_n} X_{c_n}$ (11)
The fitness of a chromosome is then computed as the average of the two classifier accuracies:
$\mathrm{Fitness}(C') = \dfrac{\vartheta(C')_{lsvm} + \vartheta(C')_{rf}}{2}$ (12)
where $\vartheta(C')_{lsvm}$ accounts for the classification result of the linear support vector machine and $\vartheta(C')_{rf}$ for the random forest accuracy results.
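Equation (12) can be sketched with scikit-learn estimators standing in for the paper's LSVM and random forest; the three-fold cross-validation is an assumption made for the example.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

def fitness(weighted_features, labels):
    """Average of LSVM and random forest accuracies (Equation (12))."""
    lsvm = cross_val_score(LinearSVC(dual=False), weighted_features, labels, cv=3).mean()
    rf = cross_val_score(RandomForestClassifier(), weighted_features, labels, cv=3).mean()
    return (lsvm + rf) / 2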

3.5. Genetic-Based Classifier

Classification refers to the taxonomic grouping of data into respective groups based on similarity. Categorization is achieved by drawing clear-cut boundaries between the classification groups. The genetic algorithm's evolutionary nature has been used to evolve discriminating separators between different classes by exploiting the differences in feature vectors. In the proposed model, GA has been used to solve complex pattern-matching problems. Motion data require closely related signal patterns that can cope with inter-class similarity. Following the same steps involved in feature selection, the genetic algorithm uses the biological operations of crossover and mutation to shuffle feature vectors until the maximum possible boundary is marked between the classes. Figure 10 shows the reweighted pattern-matching algorithm for human physical healthcare pattern understanding. In Equation (13), we classify the labeled behaviors in inertial signal patterns with a high similarity ratio, comprising their context, as
$\mathrm{Fitness}(p) = (p_1\; p_2\; \cdots\; p_9) \,\%\, \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{19} \\ p_{21} & p_{22} & \cdots & p_{29} \\ \vdots & \vdots & \ddots & \vdots \\ p_{91} & p_{92} & \cdots & p_{99} \end{pmatrix}$ (13)

4. Experimental Results

The proposed system is evaluated with the genetic algorithm-based classifier on one self-annotated dataset, Intelligent Media Wearable Smart Home Activities (IM-WSHA), and three public benchmark datasets: WISDM, IM-SB and SMotion. These datasets include multiple physical healthcare activities in different indoor/outdoor environments, i.e., smart home, sports ground and public places.

4.1. Intelligent Media Wearable Smart Home Activities Dataset (IM-WSHA)

The Intelligent Media Wearable Smart Home Activities (IM-WSHA) dataset [49] is our self-annotated dataset, recorded with three wearable IMU sensors (MPU-9250). This dataset contains 220 sequences of accelerometer, gyroscope and magnetometer data. These sensors were positioned at the wrist, chest and thigh regions to capture different aspects of human body motion. Ten participants (five males and five females) performed 11 different physical healthcare activities in smart home environments, namely phone conversation, vacuum cleaning, watching TV, using computers, reading books, ironing, walking, exercise, cooking, drinking and brushing hair. The participants included both young and old people whose ages ranged between 19 and 60 and whose weight ranged between 55 and 85 kg. The usage of multisensory devices adds challenges when dealing with rigorous motion data [49].
The reweighted genetic algorithm was tested on the IM-WSHA dataset to analyze physical healthcare activity data from different dimensions and was compared with a conventional linear support vector machine (LSVM) and random forest. Table 1 illustrates the performance matrix of human activity recognition for 11 different activities with a mean accuracy of 81.92%.
To determine the optimal parameters for a reweighted genetic algorithm to function properly, different sliding ratios were accommodated to find the perfect balance between the sliding ratio and the accuracy of the proposed system. It is worth noting that the sliding ratio accounts for the contextual information needed for the later part of the signals. The application of contextual information is a prominent factor which holds the sequence of events as a single chain and allows a better understanding of motion patterns in context. Thus, Table 2 depicts the impact of different sliding ratios for the IM-WSHA dataset. It is clearly observed that the proposed approach resulted in average results, especially for non-repetitive activities like phone conversations, watching TV, using computers, reading books and cooking. These five physical activities are movements without repetition, which causes lower accuracy compared to other activities. On the other hand, activities such as walking, exercise, vacuum cleaning, ironing, brushing hair and drinking movements, with repetition in terms of the subject’s body movements, produce high accuracy rates.
In Table 3, we tested different state-of-the-art methods on our IM-WSHA dataset. Using the method of [50], we extracted statistical features from our dataset and then classified the physical activities with multilayer feedforward neural networks, achieving 73.27% accuracy. Following [51], we applied decision trees to our dataset and achieved 78.19% recognition accuracy. Using the approach of Attal et al. [53], we extracted both time and frequency domain features fused with a Hidden Markov Model (HMM) on our IM-WSHA dataset and achieved 80.37% recognition accuracy. Finally, we applied the proposed model to our self-annotated dataset and achieved a significant recognition accuracy of 81.92%.

4.2. WISDM Dataset

The Wireless Sensor Data Mining (WISDM) dataset [54] is a large repository of smartphone-based motion data which includes transient motion data. The dataset accommodates routine motion patterns with a significant number of processable motion samples. The daily life routines captured in the dataset can be used to analyze the movement of different body components of the elderly; in particular, postural positioning and coherence between body parts can be assessed from the recorded, quantized motion. Using built-in smartphone sensors, the subject's movements are translated into acceleration signals, which can then be used to identify the subject's movements in daily life routines. The WISDM dataset involves six main motion patterns, i.e., walking, jogging, ascending, descending, sitting and standing. For a balanced ratio of sensor data, acceleration signals were streamed at a sampling interval of 50 ms (i.e., 20 Hz).
The proposed reweighted genetic algorithm was applied to the WISDM dataset to analyze the performance of the proposed system on this dataset. Moreover, a linear support vector machine (LSVM) and random forest were used as second and third classifiers to check physical activity recognition performance, as shown in Table 4.
As Table 5 shows, 10%, 30% and 60% sliding windows were applied to the signal streams and corresponding results were produced. The 10% sliding window produced only average results in terms of accuracy because less consideration was given to the context, but the 30% and 60% sliding windows produced significantly higher results. Therefore, our proposed strategy adopted a 60% overlapping sliding window.
Similarly, the classification results not only reveal the non-contributive features, but they also assign weights to the prominent contributive features. With the usage of weights, non-linearity is introduced into the model, bringing more flexibility to our understanding of motion patterns that involve a high level of variability in terms of axial signals. In the assessment of weights assigned to different features, the results of some trials are presented in Table 6; these show the weighted values for each feature according to the trials performed. Here, Table 6 shows that the correlation feature contributes far less than any other feature.
The performance results of the proposed work are compared in Table 7, where the reweighted genetic algorithm excels beyond other state-of-the-art models, including classical learning systems as well as convolutional neural networks. In terms of dealing with variability, our proposed system's results are slightly better than those of the state-of-the-art models.

4.3. IM-SB Dataset

The Intelligent Media Sporting (IM-SB) dataset [58] is a multisensory accelerometer-based dataset. The IM-SB dataset involves quantized motion data for physical sporting activities, i.e., badminton, basketball, cycling, football, skipping and table tennis. The dataset involves accelerometer sensors attached to the wrist, thigh and back of the subject to analyze movement from different dimensions. In Table 7, the reweighted genetic algorithm was tested on the IM-SB dataset to analyze sporting motion data from different dimensions and was compared with a conventional LSVM.
The controlling parameters for the reweighted genetic algorithm were also tuned on the IM-SB dataset in order to check its performance. Again, the accuracy achieved with the 60% sliding ratio supported the adoption of a slightly larger sliding window, while 10% and 30% contextual information failed to identify the movement from the later part of the signals. Table 8 shows the impact of varying sliding ratios for the IM-SB dataset.
In the pursuit of better parameters, the algorithm was run on several occasions to find the optimum measure of weights assigned to each attribute. Table 9 shows the reweighted values of the features in different trials.
Table 10 shows the results of testing different state-of-the-art methods against the IM-SB dataset. Using the method of [59], we classified six sporting behaviors with multiclass AdaBoost and achieved 73.67% accuracy. Following Politi et al. [60], we extracted statistical and physical features from the IM-SB dataset and classified them using a support vector machine (SVM), which achieved 78.41% recognition accuracy. Using [61], we extracted statistical features fused with a Multilayer Perceptron (MLP) from the IM-SB dataset and achieved 87.38% recognition accuracy. Finally, we applied the proposed model to the IM-SB dataset and achieved a significant recognition accuracy rate of 90.17%. The performance results of the proposed work are compared in Table 11.

4.4. The Wearable Inertial Measurement (SMotion) Dataset

The Wearable Inertial Measurement Sensors (SMotion) dataset [62] is an inertial (SHIMMER3)-based dataset. The dataset involves a SHIMMER device attached to the waist of the subject to capture devised motion patterns and different dynamics of human body motion. In total, 114 healthy subjects performed three different daily physical activities, i.e., walking, standing still, and sitting down and getting up (out of a chair). The performance results for the SMotion dataset are compared in Table 12.
The reweighted genetic algorithm was tested on the SMotion dataset to analyze physical healthcare activities from different dimensions and was compared with a conventional linear support vector machine (LSVM) and random forest; the proposed method, LSVM and random forest achieved 94.58%, 91.45% and 94.58% accuracy, respectively.
Table 13 shows that 10%, 30% and 60% sliding windows were applied to the inertial signal streams and presents the corresponding results.

5. Conclusions

In this paper, we have reported the development of a robust approach that can precisely report physical health and wellbeing status on four challenging benchmark datasets in both indoor and outdoor environments. In addition, we developed a novel framework which comprises statistical, frequency, transform and acoustic features to extract optimal features to detect and recognize human physical health and wellbeing via a triaxial set of inertial signals: accelerometer, gyroscope and magnetometer.
Furthermore, we presented a robust reweighted genetic algorithm that exploits variation in genetic information and the fusion of windowed signal patterns, which helps us to understand random human physical activities that may relate to the subject's health and wellbeing status. Our system includes data analysis, monitoring and inertial signal measurement as well as efficient feature extraction algorithms which can potentially outperform the recognition accuracy rates of other systems. The proposed system provides remarkable results compared to state-of-the-art systems.
In future work, we will further enhance the efficiency of our features by adding angular and displacement information in order to classify more complex daily physical healthcare activities, especially for older and impaired people.

Author Contributions

Conceptualization: M.A.K.Q. and S.B.u.d.T.; methodology: M.A.K.Q. and A.J.; software: M.A.K.Q.; validation: M.A.K.Q. and S.B.u.d.T.; formal analysis: A.J. and K.K.; resources: A.J. and K.K.; writing—review and editing: A.J. and K.K.; funding acquisition: A.J. and K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (no. 2018R1D1A1A02085645).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zebin, T.; Scully, P.J.; Ozanyan, K.B. Evaluation of supervised classification algorithms for human activity recognition with inertial sensors. In Proceedings of the 2017 IEEE SENSORS, Glasgow, UK, 29 October–1 November 2017; pp. 1–3.
  2. Hachaj, T. Improving Human Motion Classification by Applying Bagging and Symmetry to PCA-Based Features. Symmetry 2019, 11, 1264.
  3. Susan, S.; Agrawal, P.; Mittal, M.; Bansal, S. New shape descriptor in the context of edge continuity. CAAI Trans. Intell. Technol. 2019, 4, 101–109.
  4. Wang, Y.; Cang, C.; Yu, H. A review of sensor selection, sensor devices and sensor deployment for wearable sensor-based human activity recognition systems. In Proceedings of the 10th International Conference on Software, Knowledge, Information Management & Applications, Chengdu, China, 15–17 December 2016; pp. 250–257.
  5. Shokri, M.; Tavakoli, K. A review on the artificial neural network approach to analysis and prediction of seismic damage in infrastructure. Int. J. Hydromechatronics 2019, 4, 178–196.
  6. Tao, W.; Liu, T.; Zheng, R.; Feng, H. Gait Analysis Using Wearable Sensors. Sensors 2012, 12, 2255–2283.
  7. Tingting, Y.; Junqian, W.; Lintai, W.; Yong, X. Three-stage network for age estimation. CAAI Trans. Intell. Technol. 2019, 4, 122–126.
  8. Wiens, T. Engine speed reduction for hydraulic machinery using predictive algorithms. Int. J. Hydromechatronics 2019, 1, 16–31.
  9. Zhang, M.; Sawchuk, A.A. Motion primitive-based human activity recognition using a bag-of-features approach. In Proceedings of the ACM SIGHIT International Health Informatics Symposium, Miami, FL, USA, 28–30 January 2012; pp. 631–640.
  10. Malik, M.N.; Azam, M.A.; Ehatisham-Ul-Haq, M.; Ejaz, W.; Khalid, A. ADLAuth: Passive authentication based on activity of daily living using heterogeneous sensing in smart cities. Sensors 2019, 19, 2466.
  11. Jalal, A.; Sarif, N.; Kim, J.T.; Kim, T.S. Human activity recognition via recognized body parts of human depth silhouettes for residents monitoring services at smart homes. Indoor Built Environ. 2013, 22, 271–279.
  12. Osterland, S.; Weber, J. Analytical analysis of single-stage pressure relief valves. Int. J. Hydromechatronics 2019, 2, 32–53.
  13. Mahmood, M.; Jalal, A.; Sidduqi, M.A. Robust Spatio-Temporal Features for Human Interaction Recognition Via Artificial Neural Network. In Proceedings of the International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 17–19 December 2018.
  14. Lara, O.D.; Labrador, M.A. A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutor. 2013, 15, 1192–1209.
  15. Zhu, C.; Miao, D. Influence of kernel clustering on an RBFN. CAAI Trans. Intell. Technol. 2019, 4, 255–260.
  16. Chung, S.; Lim, J.; Noh, K.J.; Kim, G.; Jeong, H. Sensor data acquisition and multimodal sensor fusion for human activity recognition using deep learning. Sensors 2019, 19, 1716.
  17. Ahmed, A.; Jalal, A.; Kim, K. RGB-D Images for Object Segmentation, Localization and Recognition in Indoor Scenes using Feature Descriptor and Hough Voting. In Proceedings of the IBCAST, Islamabad, Pakistan, 14–18 January 2020.
  18. Davila, J.C.; Cretu, A.-M.; Zaremba, M. Wearable Sensor Data Classification for Human Activity Recognition Based on an Iterative Learning Framework. Sensors 2017, 17, 1287.
  19. Jalal, A.; Mahmood, M. Students' Behavior Mining in E-learning Environment Using Cognitive Processes with Information Technologies. Educ. Inf. Technol. 2019, 24, 2797–2821.
  20. Nurhanim, K.; Elamvazuthi, I.; Izhar, L.I.; Ganesan, T. Classification of Human Activity based on Smartphone Inertial Sensor using Support Vector Machine. In Proceedings of the 2017 IEEE 3rd International Symposium in Robotics and Manufacturing Automation (ROMA), Kuala Lumpur, Malaysia, 19–21 September 2017.
  21. Jalal, A.; Kim, Y.H.; Kim, Y.J.; Kamal, S.; Kim, D. Robust human activity recognition from depth video using spatiotemporal multi-fused features. Pattern Recognit. 2017, 61, 295–308.
  22. Qin, Z.; Zhang, Y.; Meng, S.; Qin, Z.; Choo, K.-K.R. Imaging and fusing time series for wearable sensor-based human activity recognition. Inf. Fusion 2020, 53, 80–87.
  23. Mahmood, M.; Jalal, A.; Kim, K. WHITE STAG model: Wise human interaction tracking and estimation (WHITE) using spatio-temporal and angular-geometric (STAG) descriptors. Multimed. Tools Appl. 2020, 79, 6919–6950.
  24. Kańtoch, E. Human activity recognition for physical rehabilitation using wearable sensors fusion and artificial neural networks. In Proceedings of the 2017 Computing in Cardiology (CinC), Rennes, France, 24–27 September 2017; pp. 1–4.
  25. Casilari, E.; Álvarez-Marco, M.; García-Lagos, F. A Study of the Use of Gyroscope Measurements in Wearable Fall Detection Systems. Symmetry 2020, 12, 649.
  26. Bonato, P. Wearable sensors/systems and their impact on biomedical engineering. Eng. Med. Biol. Mag. 2003, 22, 18–20.
  27. Rodríguez-Rodríguez, I.; Rodríguez, J.-V.; Elizondo-Moreno, A.; Heras-González, P. An Autonomous Alarm System for Personal Safety Assurance of Intimate Partner Violence Survivors Based on Passive Continuous Monitoring through Biosensors. Symmetry 2020, 12, 460.
  28. Jalal, A.; Kamal, S.; Kim, D. A Depth Video Sensor-Based Life-Logging Human Activity Recognition System for Elderly Care in Smart Indoor Environments. Sensors 2014, 14, 11735–11759.
  29. Mukhopadhyay, S.C. Wearable sensors for human activity monitoring: A review. IEEE Sens. J. 2015, 15, 1321–1330.
  30. Yin, J.; Yang, Q.; Pan, J.J. Sensor-based abnormal human-activity detection. IEEE Trans. Knowl. Data Eng. 2008, 20, 1082–1090.
  31. Jalal, A.; Kamal, S.; Kim, D.S. Detecting Complex 3D Human Motions with Body Model Low-Rank Representation for Real-Time Smart Activity Monitoring System. KSII Trans. Int. Inf. Syst. 2018, 12.
  32. Liu, X.; Liu, L.; Simske, S.J.; Liu, J. Human Daily Activity Recognition for Healthcare Using Wearable and Visual Sensing Data. In Proceedings of the IEEE International Conference on Healthcare Informatics (ICHI), Chicago, IL, USA, 4–7 October 2016; pp. 24–31.
  33. Crispim-Junior, C.F.; Bremond, F.; Joumier, V. A multi-sensor approach for activity recognition in older patients. In Proceedings of the International Conference on Ambient Computing, Applications, Services and Technologies, Barcelona, Spain, 29 September 2012.
  34. Zouba, N.; Boulay, B.; Bremond, F.; Thonnat, M. Monitoring Activities of Daily Living (ADLs) of Elderly Based on 3D Key Human Postures. In Proceedings of the International Workshop on Cognitive Vision (ICVW); Springer: Berlin/Heidelberg, Germany, 2008; pp. 37–50.
  35. Wu, C.; Khalili, A.H.; Aghajan, H. Multiview activity recognition in smart homes with spatio-temporal features. In Proceedings of the Fourth ACM/IEEE International Conference on Distributed Smart Cameras, Atlanta, GA, USA, 31 August–4 September 2010; pp. 142–149.
  36. Kim, K.; Jalal, A.; Mahmood, M. Vision-Based Human Activity Recognition System Using Depth Silhouettes: A Smart Home System for Monitoring the Residents. J. Electr. Eng. Technol. 2019, 14, 2567–2573.
  37. Tian, Y.; Wang, X.; Chen, W.; Liu, Z.; Li, L. Adaptive multiple classifiers fusion for inertial sensor based human activity recognition. Clust. Comput. 2018, 22, 8141–8154.
  38. Leonardis, G.; Rosati, S.; Balestra, G.; Agostini, V.; Panero, E.; Gastaldi, L.; Knaflitz, M. Human Activity Recognition by Wearable Sensors: Comparison of different classifiers for real-time applications. In Proceedings of the 2018 IEEE International Symposium on Medical Measurements and Applications (MeMeA), Rome, Italy, 11–13 June 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 1–6.
  39. Zebin, T.; Scully, P.; Ozanyan, K. Human activity recognition with inertial sensors using a deep learning approach. In Proceedings of the 2016 IEEE SENSORS, Orlando, FL, USA, 30 October–3 November 2016; pp. 1–3.
  40. Margarito, J.; Helaoui, R.; Bianchi, A.M.; Sartor, F.; Bonomi, A.G. User-Independent Recognition of Sports Activities from a Single Wrist-Worn Accelerometer: A Template-Matching-Based Approach. IEEE Trans. Biomed. Eng. 2016, 63, 788–796.
  41. Xu, H.; Liu, J.; Hu, H.; Zhang, Y. Wearable Sensor-Based Human Activity Recognition Method with Multi-Features Extracted from Hilbert-Huang Transform. Sensors 2016, 16, 2048.
  42. Jalal, A.; Quaid, M.A.K.; Hasan, A.S. Wearable sensor-based human behavior understanding and recognition in daily life for smart environments. In Proceedings of the 2018 International Conference on Frontiers of Information Technology (FIT), Islamabad, Pakistan, 17–19 December 2018; pp. 105–110.
  43. Nweke, H.F.; Teh, Y.W.; Mujtaba, G.; Al-garadi, M.A. Data fusion and multiple classifier systems for human activity detection and health monitoring. Inf. Fusion 2019, 46, 147–170.
  44. Ermes, M.; Parkka, J.; Cluitmans, L. Advancing from offline to online activity recognition with wearable sensors. In Proceedings of the Thirtieth Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 4451–4454.
  45. Cao, L.; Wang, Y.; Zhang, B.; Jin, Q.; Vasilakos, A.V. GCHAR: An efficient Group-based Context-Aware human activity recognition on smartphone. J. Parallel Distrib. Comput. 2018, 118, 67–80.
  46. Zong, C.; Chetouani, M. Hilbert-Huang transform based physiological signals analysis for emotion recognition. In Proceedings of the IEEE International Symposium on Signal Processing and Information Technology, Ajman, UAE, 14–17 December 2010; pp. 334–339.
  47. Jerritta, S.; Murugappan, M.; Wan, K.; Yaacob, S. Emotion recognition from electrocardiogram signals using Hilbert Huang Transform. In Proceedings of the 2012 IEEE Conference on Sustainable Utilization and Development in Engineering and Technology (STUDENT), Kuala Lumpur, Malaysia, 6–9 October 2012.
  48. Quaid, M.A.K.; Jalal, A. Wearable sensors based human behavioral pattern recognition using statistical features and reweighted genetic algorithm. Multimed. Tools Appl. 2020, 79, 6061–6083.
  49. Intelligent Media Center (IMC). Available online: http://portals.au.edu.pk/imc/Pages/Datasets.aspx (accessed on 10 August 2020).
  50. Jalal, A.; Uddin, M.Z.; Kim, T.-S. Depth video-based human activity recognition system using translation and scaling invariant features for life logging at smart home. IEEE Trans. Consum. Electron. 2012, 58, 863–871.
  51. Yang, J.-Y.; Wang, J.-S.; Chen, Y.-P. Using acceleration measurements for activity recognition: An effective learning algorithm for constructing neural classifiers. Pattern Recog. Lett. 2008, 29, 2213–2220.
  52. Bonomi, A.G.; Goris, A.; Yin, B.; Westerterp, K.R. Detection of type, duration, and intensity of physical activity using an accelerometer. Med. Sci. Sports Exerc. 2009, 41, 1770–1777.
  53. Attal, F.; Mohammed, S.; Dedabrishvili, M.; Chamroukhi, F.; Oukhellou, L.; Amirat, Y. Physical Human Activity Recognition Using Wearable Sensors. Sensors 2015, 15, 31314–31338.
  54. Kwapisz, J.R.; Weiss, G.M.; Moore, S.A. Activity Recognition using Cell Phone Accelerometers. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (sensorKDD-2010), Washington, DC, USA, 25–28 July 2010.
  55. Abdallah, Z.S.; Gaber, M.M.; Srinivasan, B.; Krishnaswamy, S. Adaptive mobile activity recognition system with evolving data streams. Neurocomputing 2015, 150, 304–317.
  56. Dungkaew, T.; Suksawatchon, J.; Suksawatchon, U. Impersonal smartphone-based activity recognition using the accelerometer sensory data. In Proceedings of the 2017 IEEE International Conference on Information Technology (INCIT), Nakhonpathom, Thailand, 2–3 November 2017; pp. 1–6.
  57. Ignatov, A. Real-time human activity recognition from accelerometer data using Convolutional Neural Networks. Appl. Soft Comput. 2018, 62, 915–922.
  58. Jalal, A.; Quaid, M.A.; Sidduqi, M. A Triaxial acceleration-based human motion detection for ambient smart home system. In Proceedings of the 2019 16th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Islamabad, Pakistan, 8–12 January 2019; pp. 353–358.
  59. Reiss, A.; Stricker, D.; Hendeby, G. Confidence-based multiclass AdaBoost for physical activity monitoring. In Proceedings of the 17th Annual International Symposium on Wearable Computers, Zurich, Switzerland, 8–12 September 2013; pp. 13–20.
  60. Politi, O.; Mporas, L.; Megalooikonomou, V. Human motion detection in daily activity tasks using wearable sensors. In Proceedings of the 2014 IEEE 22nd European Signal Processing Conference (EUSIPCO), Lisbon, Portugal, 1–5 September 2014; pp. 2315–2319.
  61. Yin, X.; Shen, W.; Samarabandu, J.; Wang, X. Human activity detection based on multiple smart phone sensors and machine learning algorithms. In Proceedings of the 2015 IEEE 19th International Conference on Computer Supported Cooperative Work in Design (CSCWD), Calabria, Italy, 6–8 May 2015; pp. 582–587.
  62. Nadeem, A.; Mehmood, A.; Rizwan, K. A dataset build using wearable inertial measurement and ECG sensors for activity recognition, fall detection and basic heart anomaly detection system. Data Brief 2019, 27, 104717.
Figure 1. Flow architecture of the proposed physical healthcare detection system.
Figure 2. Signal preprocessing for wearable accelerometers in the proposed healthcare model.
Figure 3. Vectorized feature representation of the inertial sensor stream components x, y and z.
Figure 4. The instantaneous vector magnitude for the walking signal pattern.
Figure 5. The instantaneous vector magnitude for the walking signal pattern from the accelerometer sensor.
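For reference, the instantaneous vector magnitude plotted in Figures 4 and 5 is the Euclidean norm of the triaxial signal, m(t) = sqrt(ax(t)² + ay(t)² + az(t)²). A minimal NumPy sketch (variable names are illustrative, not taken from the paper):

```python
import numpy as np

def vector_magnitude(ax, ay, az):
    """Instantaneous vector magnitude of a triaxial inertial signal.

    ax, ay, az: 1-D arrays of equal length holding the x, y, z
    components of an accelerometer (or gyroscope) stream.
    Returns m with m[t] = sqrt(ax[t]**2 + ay[t]**2 + az[t]**2).
    """
    return np.sqrt(np.square(ax) + np.square(ay) + np.square(az))

# Example: a synthetic 3-sample walking snippet (values are made up).
ax = np.array([0.12, -0.30, 0.25])
ay = np.array([9.70, 9.81, 9.65])
az = np.array([0.05, 0.10, -0.08])
print(vector_magnitude(ax, ay, az))  # one magnitude value per sample
```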
Figure 6. The three statistical features (mean, min, max) for the "climbing upstairs" motion pattern using the Wireless Sensor Data Mining (WISDM) dataset.
Figure 7. Spectral entropy for the "brushing hair" signal pattern using the Intelligent Media-Wearable Smart Home Activities (IM-WSHA) dataset.
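The spectral entropy of Figure 7 is conventionally computed by normalizing the signal's power spectrum into a probability distribution and taking its Shannon entropy; a near-periodic activity such as brushing hair then scores low, while noise-like motion scores high. A sketch under that common definition (the paper does not spell out its exact estimator, so treat the details as assumptions):

```python
import numpy as np

def spectral_entropy(x, normalize=True):
    """Shannon entropy of the normalized power spectrum of x."""
    psd = np.abs(np.fft.rfft(x)) ** 2        # power spectrum
    psd = psd / psd.sum()                    # normalize to a distribution
    n_bins = psd.size
    psd = psd[psd > 0]                       # avoid log(0)
    h = -np.sum(psd * np.log2(psd))          # Shannon entropy in bits
    if normalize:
        h /= np.log2(n_bins)                 # scale into [0, 1]
    return h

t = np.linspace(0, 2, 100)
print(spectral_entropy(np.sin(2 * np.pi * 4 * t)))   # periodic -> low
print(spectral_entropy(np.random.randn(100)))        # noise -> close to 1
```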
Figure 8. The empirical mode decomposition components from the inertial data.
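The empirical mode decomposition in Figure 8 splits an inertial stream into intrinsic mode functions (IMFs) by iterative sifting, from the fastest oscillation down to the slowest trend. Rather than re-implementing sifting, the sketch below leans on the third-party PyEMD package (pip package EMD-signal); the package choice and the toy signal are our assumptions, not the paper's toolchain:

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal

# A toy composite signal standing in for one inertial axis:
# a slow drift plus a faster oscillation plus noise.
t = np.linspace(0, 1, 500)
s = 0.5 * t + np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)

emd = EMD()
imfs = emd.emd(s)   # rows are IMFs, fastest oscillations first
print(imfs.shape)   # (n_imfs, 500); the slowest row approximates the trend
```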
Figure 9. The proposed reweighted genetic algorithm for recognizing human motion from inertial-based data.
Figure 10. The reweighted pattern-matching algorithm for human physical healthcare pattern understanding.
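Figures 9 and 10 describe the reweighted genetic algorithm only at block level. As an illustration of the general idea, and not the authors' exact operators, a genetic algorithm can evolve a per-feature weight vector (compare the weights reported in Tables 6 and 10) and score each candidate by the accuracy of a simple classifier on the weighted features. A self-contained toy sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed feature vectors (e.g., the 13 features
# weighted in Tables 6 and 10) with three activity classes.
X = rng.normal(size=(300, 13))
y = rng.integers(0, 3, size=300)
X[y == 1, :4] += 1.5          # make some features informative
X[y == 2, 4:8] -= 1.5

def fitness(w):
    """Toy score: nearest-centroid accuracy on the weighted features."""
    Xw = X * w
    centroids = np.stack([Xw[y == c].mean(axis=0) for c in range(3)])
    d = ((Xw[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return (d.argmin(axis=1) == y).mean()

pop = rng.random((40, X.shape[1]))              # initial weight vectors in [0, 1]
for gen in range(50):
    scores = np.array([fitness(w) for w in pop])
    elite = pop[np.argsort(scores)[-20:]]       # selection: keep the best half
    pairs = rng.integers(0, 20, size=(20, 2))
    mask = rng.random((20, X.shape[1])) < 0.5   # uniform crossover
    children = np.where(mask, elite[pairs[:, 0]], elite[pairs[:, 1]])
    children += rng.normal(0, 0.05, children.shape)   # Gaussian mutation
    pop = np.clip(np.vstack([elite, children]), 0, 1)

best = pop[np.argmax([fitness(w) for w in pop])]
print("best accuracy:", fitness(best).round(3))
```

A real evaluation would of course score fitness on held-out windows rather than the training data; the sketch only shows how reweighting, selection, crossover and mutation fit together.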
Table 1. IM-WSHA dataset applied over the proposed classifier and over well-known statistical classifiers.

| Symbols | Activities | LSVM (%) | Random Forest (%) | Proposed (%) |
|---------|------------|----------|-------------------|--------------|
| P1 | Phone conversation | 68.34 | 70.03 | 73.67 |
| P2 | Vacuum cleaning | 74.36 | 78.37 | 84.78 |
| P3 | Watching TV | 66.08 | 69.84 | 72.43 |
| P4 | Using computers | 69.34 | 72.64 | 74.57 |
| P5 | Reading books | 70.13 | 73.89 | 77.18 |
| P6 | Ironing | 84.43 | 88.43 | 91.24 |
| P7 | Walking | 86.71 | 89.41 | 93.16 |
| P8 | Exercise | 83.76 | 87.65 | 89.78 |
| P9 | Cooking | 67.48 | 69.36 | 75.83 |
| P10 | Drinking | 75.57 | 80.14 | 86.23 |
| P11 | Brushing hair | 73.87 | 78.20 | 82.29 |
| Mean Recognition Accuracy | | 74.55 | 77.99 | **81.92** |

Bold indicates the mean recognition accuracy of the proposed method on the IM-WSHA dataset.
Table 2. IM-WSHA dataset results for different sliding windows.

| Symbols | 10% Slide (%) | 30% Slide (%) | 60% Slide (%) |
|---------|---------------|---------------|---------------|
| P1 | 52.18 | 63.58 | 73.67 |
| P2 | 65.28 | 77.36 | 84.78 |
| P3 | 56.38 | 66.08 | 72.43 |
| P4 | 58.25 | 67.34 | 74.57 |
| P5 | 63.76 | 71.13 | 77.18 |
| P6 | 68.92 | 79.80 | 91.24 |
| P7 | 72.58 | 84.71 | 93.16 |
| P8 | 70.49 | 81.67 | 89.78 |
| P9 | 59.29 | 69.41 | 75.83 |
| P10 | 69.77 | 80.12 | 86.23 |
| P11 | 63.86 | 75.42 | 82.29 |
| Mean Recognition Accuracy | 63.70 | 74.23 | 81.92 |
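The sliding percentages in Tables 2, 5, 9 and 13 can be read as how far each fixed-length window advances relative to its own length, so that a 10% slide means 90% overlap between consecutive windows (our reading; the paper's exact convention and window length are not restated here). A segmentation sketch under that assumption:

```python
import numpy as np

def sliding_windows(signal, win_len, slide_frac):
    """Segment a 1-D sensor stream into fixed-length windows.

    win_len:     window length in samples (an assumed parameter).
    slide_frac:  fraction of win_len the window advances each step,
                 e.g. 0.10, 0.30, 0.60 as in Tables 2, 5, 9 and 13.
    """
    step = max(1, int(win_len * slide_frac))
    return np.stack([signal[s:s + win_len]
                     for s in range(0, len(signal) - win_len + 1, step)])

stream = np.arange(1000, dtype=float)   # stand-in for one sensor axis
for frac in (0.10, 0.30, 0.60):
    w = sliding_windows(stream, win_len=128, slide_frac=frac)
    print(f"{int(frac * 100)}% slide -> {w.shape[0]} windows of {w.shape[1]} samples")
```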
Table 3. Physical activity recognition accuracy comparison of the proposed method with other state-of-the-art methods over IM-WSHA inertial data.

| Methods | Algorithm Details | Recognition Accuracy on IM-WSHA (%) |
|---------|-------------------|--------------------------------------|
| Yang et al. [51] | Statistical features fused with multilayer feedforward neural networks | 73.27 |
| Bonomi et al. [52] | Classification with decision trees | 78.19 |
| Attal et al. [53] | Time- and frequency-domain features wrapped with a Hidden Markov Model (HMM) classifier | 80.37 |
| Proposed Work | Statistical, transform, acoustic and frequency features fused with reweighted genetic algorithm | **81.92** |

Bold indicates the recognition accuracy of the proposed method.
Table 4. WISDM dataset results over the proposed classifier and other well-known statistical classifiers.

| Symbols | Activities | LSVM (%) | Random Forest (%) | Proposed (%) |
|---------|------------|----------|-------------------|--------------|
| W1 | Walking | 91.42 | 93.64 | 98.81 |
| W2 | Jogging | 92.27 | 94.12 | 98.47 |
| W3 | Ascending | 89.14 | 90.83 | 91.23 |
| W4 | Descending | 90.32 | 91.64 | 92.07 |
| W5 | Sitting | 92.18 | 92.27 | 93.18 |
| W6 | Standing | 93.06 | 94.18 | 98.47 |
| Mean Recognition Accuracy | | 91.39 | 92.78 | **95.37** |

Bold indicates the mean recognition accuracy of the proposed method on the WISDM dataset.
Table 5. WISDM dataset results for different sliding windows.

| Symbols | 10% Slide (%) | 30% Slide (%) | 60% Slide (%) |
|---------|---------------|---------------|---------------|
| W1 | 71.41 | 83.31 | 98.81 |
| W2 | 73.68 | 93.45 | 98.47 |
| W3 | 81.23 | 95.56 | 91.23 |
| W4 | 56.43 | 73.28 | 92.07 |
| W5 | 59.55 | 87.65 | 93.18 |
| W6 | 75.23 | 82.94 | 98.47 |
| Mean Recognition Accuracy | 69.58 | 86.03 | 95.37 |
Table 6. Feature weights for different trials in the WISDM dataset.

| Feature Type | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5 |
|--------------|---------|---------|---------|---------|---------|
| Zero Crossing Rate | 0.81 | 0.75 | 0.69 | 0.90 | 0.85 |
| Fundamental Frequency | 0.78 | 0.95 | 0.88 | 0.74 | 0.39 |
| Signal Magnitude Area | 0.58 | 0.52 | 0.71 | 0.61 | 0.57 |
| Signal Energy | 0.66 | 0.68 | 0.63 | 0.61 | 0.68 |
| Mean | 0.78 | 0.69 | 0.75 | 0.74 | 0.76 |
| Median | 0.66 | 0.54 | 0.69 | 0.68 | 0.71 |
| Mode | 0.23 | 0.18 | 0.40 | 0.16 | 0.18 |
| Standard Deviation | 0.10 | 0.08 | 0.94 | 0.01 | 0.29 |
| Variance | 0.89 | 0.92 | 0.01 | 0.85 | 0.79 |
| Phase Angle | 0.48 | 0.40 | 0.39 | 0.38 | 0.41 |
| Correlation | 0.10 | 0.00 | 0.05 | 0.00 | 0.06 |
| Min | 0.76 | 0.69 | 0.69 | 0.57 | 0.60 |
| Max | 0.61 | 0.62 | 0.60 | 0.69 | 0.60 |
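Most of the features weighted in Tables 6 and 10 have standard window-level definitions. A sketch computing several of them over one triaxial window (the formulas are the common ones and are assumed, not quoted from the paper, to match its usage):

```python
import numpy as np

def window_features(win):
    """A few of the features weighted in Tables 6 and 10.

    win: array of shape (n_samples, 3) holding one x/y/z window.
    Definitions are the standard ones; the paper may differ in detail.
    """
    x, y, z = win[:, 0], win[:, 1], win[:, 2]
    return {
        # fraction of consecutive x-samples that change sign
        "zero_crossing_rate": np.mean(np.diff(np.signbit(x).astype(int)) != 0),
        # mean absolute value summed over the three axes
        "signal_magnitude_area": np.mean(np.abs(x) + np.abs(y) + np.abs(z)),
        # mean squared amplitude of the x-component
        "signal_energy": np.mean(x ** 2),
        "mean": x.mean(),
        "median": np.median(x),
        "std": x.std(),
        "variance": x.var(),
        # Pearson correlation between the x and y components
        "correlation_xy": np.corrcoef(x, y)[0, 1],
        "min": x.min(),
        "max": x.max(),
    }

win = np.random.default_rng(1).normal(size=(128, 3))  # fake 128-sample window
for name, value in window_features(win).items():
    print(f"{name:>22s}: {value: .3f}")
```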
Table 7. Comparison results of the proposed method over other state-of-the-art methods using the WISDM dataset.

| Methods | Accuracy (%) |
|---------|--------------|
| Star with learning [55] | 71.20 |
| Impersonal Smartphone-based Activity Recognition (ISAR) [56] | 75.21 |
| Ignatov's Convolutional Neural Network (CNN) [57] | 93.32 |
| Proposed Work | **95.37** |

Bold indicates the recognition accuracy of the proposed method on the WISDM dataset.
Table 8. The Intelligent Media-Sporting Behavior (IM-SB) dataset results over the proposed classifier and other well-known statistical classifiers.

| Symbols | Activities | LSVM (%) | Random Forest (%) | Proposed (%) |
|---------|------------|----------|-------------------|--------------|
| S1 | Badminton | 68.16 | 77.83 | 84.21 |
| S2 | Basketball | 70.11 | 80.78 | 87.19 |
| S3 | Cycling | 95.13 | 84.45 | 93.26 |
| S4 | Football | 78.35 | 81.87 | 86.69 |
| S5 | Skipping | 87.16 | 86.14 | 94.43 |
| S6 | Table Tennis | 91.11 | 89.48 | 95.24 |
| Mean Recognition Accuracy | | 81.67 | 83.42 | **90.17** |

Bold indicates the mean recognition accuracy of the proposed method on the IM-SB dataset.
Table 9. IM-SB dataset results for different sliding window ratios.

| Symbols | 10% Slide (%) | 30% Slide (%) | 60% Slide (%) |
|---------|---------------|---------------|---------------|
| S1 | 30.29 | 34.23 | 84.21 |
| S2 | 34.26 | 68.19 | 87.19 |
| S3 | 44.18 | 78.24 | 93.26 |
| S4 | 36.10 | 70.92 | 86.69 |
| S5 | 78.26 | 80.13 | 94.43 |
| S6 | 70.22 | 81.11 | 95.24 |
| Mean Recognition Accuracy | 48.88 | 68.80 | 90.17 |
Table 10. Feature weights for different trials with the IM-SB dataset.

| Feature Type | Trial 1 | Trial 2 | Trial 3 | Trial 4 | Trial 5 |
|--------------|---------|---------|---------|---------|---------|
| Zero Crossing Rate | 0.64 | 0.72 | 0.81 | 0.90 | 0.85 |
| Fundamental Frequency | 0.71 | 0.79 | 0.76 | 0.74 | 0.46 |
| Signal Magnitude Area | 0.81 | 0.76 | 0.56 | 0.61 | 0.58 |
| Signal Energy | 0.34 | 0.54 | 0.90 | 0.61 | 0.68 |
| Mean | 0.76 | 0.86 | 0.81 | 0.74 | 0.24 |
| Median | 0.55 | 0.49 | 0.67 | 0.68 | 0.87 |
| Mode | 0.36 | 0.13 | 0.45 | 0.16 | 0.18 |
| Standard Deviation | 0.22 | 0.05 | 0.94 | 0.01 | 0.39 |
| Variance | 0.78 | 0.23 | 0.08 | 0.85 | 0.68 |
| Phase Angle | 0.50 | 0.51 | 0.39 | 0.38 | 0.41 |
| Correlation | 0.05 | 0.02 | 0.05 | 0.02 | 0.06 |
| Min | 0.67 | 0.71 | 0.69 | 0.53 | 0.55 |
| Max | 0.58 | 0.24 | 0.60 | 0.54 | 0.72 |
Table 11. Comparison of the proposed method with other methods using the IM-SB dataset.

| Methods | Algorithm Details | Recognition Accuracy on IM-SB (%) |
|---------|-------------------|------------------------------------|
| Reiss et al. [59] | Classification with multiclass AdaBoost | 73.67 |
| Politi et al. [60] | Statistical and physical features fused with SVM | 78.41 |
| Yin et al. [61] | Statistical features wrapped with Multilayer Perceptron (MLP) | 87.38 |
| Proposed Work | Statistical, transform and frequency features fused with reweighted genetic algorithm | **90.17** |

Bold indicates the recognition accuracy of the proposed method on the IM-SB dataset.
Table 12. The SMotion dataset applied over the proposed classifier and other well-known statistical classifiers.

| Symbols | Activities | LSVM (%) | Random Forest (%) | Proposed (%) |
|---------|------------|----------|-------------------|--------------|
| L1 | Standing | 91.83 | 93.65 | 94.84 |
| L2 | Sitting down and getting up from a chair | 90.34 | 92.18 | 93.77 |
| L3 | Walking | 92.18 | 94.63 | 95.13 |
| Mean Recognition Accuracy | | 91.45 | 93.48 | **94.58** |

Bold indicates the mean recognition accuracy of the proposed method on the SMotion dataset.
Table 13. SMotion dataset results for different sliding windows.

| Symbols | 10% Slide (%) | 30% Slide (%) | 60% Slide (%) |
|---------|---------------|---------------|---------------|
| L1 | 78.42 | 84.79 | 94.84 |
| L2 | 81.36 | 86.67 | 93.77 |
| L3 | 83.47 | 88.92 | 95.13 |
| Mean Recognition Accuracy | 81.08 | 86.79 | 94.58 |