Abstract

Oscillometric blood pressure monitors are widely used to measure blood pressure at hospitals, homes, and offices, and they remain an active research topic. These monitors usually report only a single blood pressure value and cannot indicate a confidence interval for the measured quantity. In this paper, we propose a new technique, a recursive ensemble based on support vector regression, to estimate a confidence interval for oscillometric blood pressure measurements. The recursive ensemble based on support vector regression is used to estimate blood pressure effectively and then to measure the confidence interval for the systolic blood pressure and diastolic blood pressure. The recursive ensemble methodology provides a lower standard deviation of error, mean error, and mean absolute error for the blood pressure than the conventional techniques.

1. Introduction

Blood pressure (BP) constantly fluctuates with factors such as stress, exercise, disease, and inherent physiological oscillations [1]. However, the physiological fluctuations in BP, which can reach 20 mmHg, have so far been neglected [2]. This means that the physiological uncertainty is greater than the standard protocol’s margin of error. The problems of accuracy, precision, and uncertainty in the measurement of physiological variables have long been a concern for practitioners [3]. Although the standard for expressing uncertainty in measurement [4] states that it applies to a wide range of areas, it has only been applied to measurements determined from a series of observations obtained under repeatable conditions, a situation that can hardly be reproduced in physiological measurements. Blood pressure monitors with embedded electrocardiogram (ECG) and photoplethysmograph (PPG) sensors have been used to measure BP in recent years. However, oscillometric blood pressure devices are most commonly used and studied to measure BP for many patients at homes, offices, and medical centers. These monitors generally provide a single BP value, such as the systolic blood pressure (SBP) and diastolic blood pressure (DBP). However, it cannot be assumed that these values are more accurate than other results obtained from repeated BP measurements [5]. This means that there is no way to compare measurements directly, because repeated measurements generally differ and the same value can hardly be obtained in a new measurement [5]. In other words, BP measurements are subject to sources of uncertainty, i.e., errors that cause the BP estimates to deviate from the reference BP [6]. These uncertainty sources can be divided into random and systematic errors, which we discuss in more detail in Section 2 [6]. If the BP measurement results are affected simultaneously by many sources of uncertainty, the distribution of the BP measurement results converges to a normal distribution as the number of uncertainty sources approaches infinity.

Few researchers have sought to determine uncertainties in physiological measurements [7, 8], and no attempt has been made to combine the estimation algorithms with global figures of the quality of the physiological signals and confidence in the measurement accuracy. Therefore, a way of assessing and expressing uncertainty in BP measurements should be provided by giving an estimated range of the BP measurements (i.e., confidence intervals) [6]. Based on aggregated statistical data, repeatable, irregular, or wide CIs can provide BP risk signals to patients, clinicians, and families. CI measurements are thus very important in BP estimation, but unfortunately few significant studies have been conducted to determine the CI of oscillometric BP measurements. Many BP measurements are required to estimate each patient’s CI. However, it is difficult to measure each patient’s BP many times with an oscillometric BP device because it is costly and time-consuming, and because repeatable circumstances for a reproducible BP measurement cannot be guaranteed [9]. Hence, a bootstrap technique was proposed to obtain a CI estimate from BP measurements using a small sample size [9]. Soueidan et al. also developed a new technique for noninvasive measurement that provides CIs for the SBP and DBP [10]. Nevertheless, these techniques did not meet the tolerable bias specified by the standard protocol [11]. Lee and Chang proposed a deep neural network (DNN) technique to estimate BP measurements to solve this problem [12]. Lee and Chang also provided a method to obtain accurate BP estimates using a DNN ensemble estimator [13]. The main contribution of these techniques is that the accuracy and stability are improved using the DNN and the DNN ensemble. However, their disadvantage is that they have many parameters and complex structures. Therefore, we propose a machine learning structure, a support vector regression- (SVR-) based recursive ensemble methodology (SVREM). Although this method greatly reduces the complexity, its performance is close to that of the DNN technique. The proposed technique is used to accurately estimate BPs and then to measure the uncertainty of the SBP and DBP. This paper is an expanded version of the paper [14] with the following contributions:
(i) We propose a novel methodology using the SVREM to accurately estimate the SBP and DBP and to measure their uncertainty. The proposed methodology measures uncertainty quantities such as CIs, the standard deviation of the error, the bias, the standard uncertainty, and the expanded uncertainty for the SBP and DBP.
(ii) We perform Lilliefors tests to validate that the distribution of the artificial BP features approaches the Gaussian distribution and to identify similarities between the actual data and the artificial data.

2. Materials and Methods

2.1. Features Obtained from Oscillometric Signals and Artificial Data Obtained Using Bootstrap Technique

To estimate the reference BP value, we removed outliers using a signal processing technique and extracted effective features from the oscillometric waveform (OMW) signals [15]. Because the five BP measurements per volunteer are too few to serve as input data for the training process, we used the bootstrap method [16] to increase the amount of BP data for each volunteer; these data are called artificial data or artificial features in this study. The artificial input data were generated with the bootstrap technique [9, 16] to improve the estimation accuracy with limited data sets, in situations where enhancing the accuracy with traditional approaches is difficult. More details regarding these features can be found in [15].
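To illustrate the resampling step, a minimal Python sketch is given below; it assumes NumPy, and the function name, array shapes, and example values are illustrative rather than part of our implementation.

import numpy as np

def bootstrap_replicates(measurements, n_boot=1000, seed=0):
    """Resample a subject's few BP measurements with replacement.

    measurements: 1-D array of the subject's observed BP values
                  (e.g., the five SBP readings).
    Returns an (n_boot,) array of bootstrap sample means, i.e., the
    "artificial features" used as additional training data.
    """
    rng = np.random.default_rng(seed)
    measurements = np.asarray(measurements, dtype=float)
    n = measurements.size
    # Each row of idx selects one artificial sample drawn with replacement.
    idx = rng.integers(0, n, size=(n_boot, n))
    return measurements[idx].mean(axis=1)

# Example: five SBP readings (mmHg) for one subject (illustrative values).
sbp = [118, 122, 120, 125, 119]
artificial_sbp = bootstrap_replicates(sbp, n_boot=1000)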

2.2. The Goodness of Fit Test for Artificial Features

The normality assumption underlies a majority of standard statistical procedures [17]. We therefore verify the normality of the artificial features. The Lilliefors test is executed to evaluate the normality of each artificial feature; it corrects the Kolmogorov-Smirnov goodness-of-fit test for small values at the tails of the probability distribution. Here, we consider the empirical distribution of each artificial feature obtained from $B$ bootstrap replications and measure its agreement with the Gaussian-distribution hypothesis [18]. The Lilliefors test returns a decision on the null hypothesis that the artificial feature comes from a distribution in the normal family, against the alternative that it does not; the test result is one if the null hypothesis is rejected at the 5% significance level and zero otherwise. As represented in Table 1, all $p$ values of the Lilliefors test are greater than the significance level $\alpha$ (= 0.05), and the null hypothesis would be rejected only if the Lilliefors test statistic exceeded the critical value cv. Therefore, we can accept the null hypothesis that the distribution of the artificial features converges to the normal distribution [19]. We also examine the consistency and convergence of the artificial data [16] and verify that the sample means of the artificial data converge to those of the actual data, based on the bootstrap convergence theorem [20]. When the bias approaches zero, the estimates are considered to be unbiased, and the bias and standard error of the artificial features can be computed as

$\widehat{\mathrm{bias}}_B = \frac{1}{B}\sum_{b=1}^{B}\hat{x}^{*}(b) - \hat{x},$

where $\widehat{\mathrm{bias}}_B$ denotes the prediction error (i.e., bias), $\hat{x}^{*}(b)$ is the $b$th bootstrap replication of the feature, and $\hat{x}$ is the original feature, and

$\widehat{\mathrm{se}}_B = \left[\frac{1}{B-1}\sum_{b=1}^{B}\bigl(\hat{x}^{*}(b) - \bar{x}^{*}\bigr)^{2}\right]^{1/2},$

where $\widehat{\mathrm{se}}_B$ is the standard error obtained using the bootstrap technique and $\bar{x}^{*} = \frac{1}{B}\sum_{b=1}^{B}\hat{x}^{*}(b)$.
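As an illustration, the normality check and the bootstrap uncertainties can be computed as in the following Python sketch; it assumes the lilliefors function from statsmodels.stats.diagnostic, and the function and variable names are illustrative rather than those of our implementation.

import numpy as np
from statsmodels.stats.diagnostic import lilliefors

def check_artificial_feature(artificial, original_value, alpha=0.05):
    """Normality check and bootstrap uncertainties for one artificial feature.

    artificial:     (B,) array of bootstrap replicates of the feature.
    original_value: the feature value computed from the original data.
    """
    # Lilliefors test of the composite null "drawn from some normal distribution".
    ks_stat, p_value = lilliefors(artificial, dist='norm')
    is_normal = p_value > alpha              # fail to reject H0 at the 5% level

    bias = artificial.mean() - original_value    # bootstrap estimate of the bias
    std_error = artificial.std(ddof=1)           # bootstrap standard error
    return is_normal, bias, std_error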

3. SVREM for BP Estimation

Algorithm 1: SVREM.
1: procedure SVREM(X, Y)
2:  initialize the instance weight vector; for each ensemble estimator, do
3:   for each artificial sample, do
4:    for each feature, do
5:     compute the mean and standard error of the feature and generate the artificial feature with the bootstrap technique
6:     draw the training instance from the current weighted distribution and add it to the artificial block matrix
7:     update the sampling distribution so that instances with large errors in the previous distribution are more likely to appear in the next distribution
8:     concatenate the previously estimated BPs to the artificial block matrix (recursive update)
9:    end for
10:   end for
11:   call learning: train the SVR base learner on the weighted artificial block matrix
12:   output: the BP hypothesis of the current estimator
13:   compute the error between the hypothesis and the reference BP
14:   compute the relative error of each instance
15:   compute the mean (weighted) error
16:   compute the weight parameter β from the mean error
17:   update the instance weights using β and the relative errors
18:   normalize the instance weights
19:  end for
20: end procedure
3.1. Support Vector Regression (SVR) Estimator with Artificial Features

Assume that one has a training data set $\{(\mathbf{x}_i, y_i)\}_{i=1}^{N}$ from which to find the regression function $f$, where the $\mathbf{x}_i$ are feature vectors and $f$ is taken from a reproducing kernel Hilbert space $\mathcal{H}$. Thus, we minimize the following risk function:

$R(f) = \frac{1}{2}\lVert f \rVert_{\mathcal{H}}^{2} + C \sum_{i=1}^{N} L_{\varepsilon}\bigl(y_i, f(\mathbf{x}_i)\bigr),$

where $\lVert f \rVert_{\mathcal{H}}^{2}$ is the regularization term that balances the complexity and accuracy of the regression model, $C$ is the normalization constant utilized to balance the empirical risk with the regularization term, and $L_{\varepsilon}$ denotes the $\varepsilon$-insensitive loss function [22] given by

$L_{\varepsilon}\bigl(y, f(\mathbf{x})\bigr) = \max\bigl(0, \lvert y - f(\mathbf{x}) \rvert - \varepsilon\bigr),$

where $\varepsilon$ is a user-defined value.
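For concreteness, the ε-insensitive loss can be written as a short function; this is a generic Python (NumPy) sketch, not code from our implementation.

import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=1.0):
    """epsilon-insensitive loss: zero inside the eps-tube, linear outside it."""
    return np.maximum(0.0, np.abs(np.asarray(y_true) - np.asarray(y_pred)) - eps)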

This problem can be solved by Lagrangian theory: $\alpha_i$ and $\alpha_i^{*}$, $i = 1, \ldots, N$, are the Lagrangian multipliers with respect to the constraints given in Equation (7), and the solution is given by

$f(\mathbf{x}) = \sum_{i=1}^{N} (\alpha_i - \alpha_i^{*})\, k(\mathbf{x}_i, \mathbf{x}) + b,$

where $k(\mathbf{x}_i, \mathbf{x})$ is the kernel function of $\mathcal{H}$ and $\delta_{ij}$ is the Kronecker symbol appearing in the dual formulation. This problem can be solved using standard decomposition methods for the SVR [22].
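In practice, the dual problem is solved by standard SVR implementations. The following Python sketch fits an SVR with scikit-learn; the RBF kernel, the hyperparameter values, and the placeholder data are illustrative assumptions, not the settings used in our experiments.

import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: (n_samples, n_features) artificial oscillometric features,
# y: (n_samples,) reference SBP (or DBP) values in mmHg (placeholders here).
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = 100 + 40 * rng.random(200)

# C is the regularization constant and epsilon the width of the loss tube.
model = make_pipeline(StandardScaler(), SVR(kernel='rbf', C=10.0, epsilon=1.0))
model.fit(X, y)
bp_hat = model.predict(X)      # BP hypotheses f(x)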

3.2. SVREM

Here, we introduce the SVREM, which is based on the AdaBoost [23] technique and uses the SVR as a base learner [22]. In this paper, we utilize the SVREM to improve the performance of the SVR model. Our input features are given as an input matrix and an output matrix. The mean and standard error are first calculated for each feature vector. The bootstrap technique is then used as a generator to build the distribution of each artificial feature, because the distribution of the artificial features approaches a normal distribution [12]. Ensemble parameters are used to mitigate the random-initialization problem and are used in the training step. We process each artificial feature after recursively adjusting the distribution of the training data sets in the SVREM step, as presented in Algorithm 1. A weight vector over the training instances is initialized and utilized in line 2. We then create a different training set for each estimator according to the weighted sample drawn from the sequence of artificial samples, whose number is the number of replications times the number of subjects; the loop over the features is represented in lines 4-8. In detail, the artificial samples are drawn from different distributions, which are updated repeatedly through the relative errors and the estimated BPs to be used by the next estimator. The weight of each instance is updated based on its error; in other words, an instance with a large error under the previous distribution is more likely to appear in the next distribution, as shown in line 7. The estimated BPs (SBP and DBP) are also initialized as another input matrix for training the SVREM. The estimated BPs are then concatenated to the artificial block matrix and updated recursively, as shown in line 8, where the previous estimator provides these BP estimates. This recursive augmentation is a novel aspect of the method and distinguishes the SVREM from the conventional AdaBoost approach [23]. The SVREM is then executed to optimize the parameters, as shown in line 11. In turn, we repeatedly calculate the error between the hypothesis and the reference until the minimum value is reached, as expressed in line 13. Here, the estimated BPs are used as input features to train the recursive ensemble estimator: the smaller the error, the closer the artificial block matrix passed to the next SVR estimator is to the reference BPs. The mean error is computed as shown in lines 13-15, and the weight parameter β is computed in line 16. Finally, we update the weight vector of the instances and normalize it, as shown in lines 17-18. If the error of an instance is very small, its weight parameter is also small; if the error of an instance is large in the current iteration, its weight is large. The output of the SVREM is the weighted median of the individual estimators’ predictions: the predictions are sorted, the values of $\log(1/\beta_t)$ are accumulated until the sum reaches half of the total, and the prediction at which this threshold is crossed is returned [23]. If all $\beta_t$ are equal, this reduces to the ordinary median. This combination rule is based on the theorem and proof in [23], with the SVR utilized as a stable base learner.
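The following Python sketch illustrates the recursive ensemble idea under simplifying assumptions: it uses scikit-learn's SVR as the base learner, an AdaBoost.R2-style linear loss and weight update, and appends each estimator's predictions to the feature block used by the next estimator. The hyperparameters, stopping rule, and exact weight update are illustrative and may differ from our implementation.

import numpy as np
from sklearn.svm import SVR

def svrem_fit(X, y, n_estimators=10, C=10.0, epsilon=1.0, seed=0):
    """Simplified recursive ensemble: AdaBoost.R2-style reweighting with an
    SVR base learner; each estimator's BP predictions are appended to the
    feature block used by the next estimator."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    n = len(y)
    w = np.full(n, 1.0 / n)                            # instance weights (line 2)
    X_aug = np.asarray(X, dtype=float)
    models, betas = [], []
    for _ in range(n_estimators):
        idx = rng.choice(n, size=n, replace=True, p=w) # weighted sample (lines 4-6)
        model = SVR(kernel='rbf', C=C, epsilon=epsilon).fit(X_aug[idx], y[idx])
        pred = model.predict(X_aug)                    # BP hypothesis (line 12)
        err = np.abs(pred - y)                         # error vs. reference (line 13)
        loss = err / (err.max() + 1e-12)               # relative loss (line 14)
        avg_loss = float(np.sum(w * loss))             # mean loss (line 15)
        if avg_loss >= 0.5:                            # base learner too weak; stop
            break
        beta = avg_loss / (1.0 - avg_loss)             # weight parameter (line 16)
        w = w * beta ** (1.0 - loss)                   # large error -> relatively larger weight
        w = w / w.sum()                                # normalize (lines 17-18)
        models.append(model)
        betas.append(beta)
        X_aug = np.column_stack([X_aug, pred])         # recursive augmentation (line 8)
    return models, np.array(betas)

def svrem_predict(models, betas, X):
    """Weighted-median combination of the individual estimators."""
    X_aug = np.asarray(X, dtype=float)
    preds = []
    for model in models:
        p = model.predict(X_aug)
        preds.append(p)
        X_aug = np.column_stack([X_aug, p])            # mirror the training augmentation
    preds = np.array(preds).T                          # (n_samples, n_estimators)
    logw = np.log(1.0 / betas)
    out = np.empty(preds.shape[0])
    for i, row in enumerate(preds):
        order = np.argsort(row)
        csum = np.cumsum(logw[order])
        k = np.searchsorted(csum, 0.5 * csum[-1])      # weighted median index
        out[i] = row[order][k]
    return out

The key points of the sketch are the weighted resampling of the artificial data, the β-based reweighting toward instances with large errors, and the concatenation of the previous estimator's BP predictions to the feature block before training the next estimator.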

3.3. CI Estimation with the Parameter Bootstrap Technique

Due to physiological characteristics and cost, it is not easy to obtain many BP measurements for an individual subject [9]. Even if cost were not an issue, the experimental conditions may not yield reproducible measurements. We therefore estimate the CIs with the parametric bootstrap method using the results of the SVREM approach. The basic idea of this technique is to use the uncertainty range of the BP measurement values to calculate the lower and upper bounds of the CI. Thus, we provide the CI of the five BP estimates for each patient obtained from the SVREM algorithm and explain the bootstrap technique (a short code sketch of this computation is given after Algorithm 2). The idea is to resample the blood pressure hypotheses to produce many artificial blood pressure hypotheses, based on the maximum likelihood estimates obtained from the subject’s BP estimates of the unknown underlying distribution, so that the artificial hypotheses are drawn from the fitted normal distribution and a CI can be calculated from them. Here, we measure the CIs using the bootstrap technique [9, 16] applied to the BP estimates of the SVREM. We then obtain a matrix of resampled BP estimates, as in Equation (9), which is built in line 5 of Algorithm 2; we compute each column vertically to acquire the mean of each column, as in line 7, where each column holds the resampled data acquired from the bootstrap technique. We then sort the resulting means in ascending order and, taking the appropriate percentiles of the sorted bootstrap replications, obtain the CI as the interval between the lower and upper percentile cut points (e.g., the 2.5th and 97.5th percentiles for a 95% CI). A similar process is used to estimate the CI for the DBP.

Algorithm 2: CI estimation.
1: procedure CI
2:  for each subject, do
3:   compute the mean of the subject’s BP estimates (maximum likelihood estimate)
4:   compute the standard deviation of the subject’s BP estimates
5:   build the matrix of resampled BP estimates drawn from the fitted normal distribution (Equation (9))
6:   for each bootstrap replication, do
7:    compute the mean of the corresponding column of the resampled matrix
8:   end for
9:   sort the bootstrap means in ascending order and take the percentile cut points as the CI
10:  end for
11: end procedure
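A minimal Python sketch of the parametric-bootstrap CI computation in Algorithm 2 is given below; the 95% confidence level, the number of replications, and the example values are illustrative assumptions rather than the exact settings of our implementation.

import numpy as np

def parametric_bootstrap_ci(bp_estimates, n_boot=1000, alpha=0.05, seed=0):
    """Parametric-bootstrap CI for a subject's BP from a few SVREM estimates.

    bp_estimates: 1-D array of the subject's BP estimates (e.g., the five
                  SBP values produced by the SVREM).
    Returns (lower, upper) bounds of the (1 - alpha) confidence interval.
    """
    rng = np.random.default_rng(seed)
    bp = np.asarray(bp_estimates, dtype=float)
    mu, sigma = bp.mean(), bp.std(ddof=1)          # estimates of the normal model
    # Draw n_boot artificial samples of the same size from the fitted normal
    # distribution and keep the mean of each one (lines 5-8 of Algorithm 2).
    means = rng.normal(mu, sigma, size=(n_boot, bp.size)).mean(axis=1)
    means.sort()                                   # ascending sort of the replications
    lo = means[int(np.floor(alpha / 2 * n_boot))]
    hi = means[int(np.ceil((1 - alpha / 2) * n_boot)) - 1]
    return lo, hi

# Example: five SBP estimates (mmHg) for one subject (illustrative values).
ci_low, ci_high = parametric_bootstrap_ci([119.2, 121.5, 120.1, 122.8, 118.9])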

4. Experimental Results

4.1. BP Measurements and Protocol

The study was approved by the ethics committee, and each subject gave consent for the BP measurements. The BP data were measured from 85 people without cardiovascular disease, aged 12 to 80 years (48 men and 37 women). A wrist-worn blood pressure device was used to take five sets of oscillometric BP measurements from each subject according to the ANSI/AAMI protocol criteria [11, 18]. The average of the values measured by two experts was used as the reference SBP and DBP [9]. This process was repeated four more times to generate five sets of BP data for each subject, with a one-minute break between measurements. For each measurement, the subject sat comfortably on a chair, a BP cuff was wrapped around the subject’s left wrist, and the arm was placed on a desk. As a reference monitor, an auscultatory BP cuff was worn on the upper left arm at the height of the heart. When air was pumped into the cuff, the upper-arm cuff expanded and occluded the brachial artery. When the pressure in the upper cuff was deflated, the blood flow generated Korotkoff sounds that were heard through a stethoscope. The first sound, measured in mmHg on the upper-left-arm meter, determined the SBP, and the fifth sound determined the DBP [15]. The left upper-arm and wrist BP could not be measured at the same time because the upper-arm sphygmomanometer occludes the brachial artery. Thus, each BP signal was obtained with the wrist device, and after 1.5 minutes two medical staff members simultaneously measured the SBP and DBP using the sphygmomanometer.

The BP measurements of the subjects were separated into training data and test data. In each round, 340 BP measurements (five from each of 68 subjects) were used as training data, and 85 measurements from the remaining 17 subjects were used as test data. This step was repeated so that each subject was included exactly once in the test set. An exemplary result distinguishing the artificial features from the original features is given in Table 2. Note that the running time was computed with Matlab® 2019 [24]. The SVREM required much less computation time than the DNN ensemble technique [13], as denoted in Table 3. On the basis of the AAMI BP measurement protocol [11], the SVREM algorithm was evaluated to verify that the mean error (ME) is less than ±5 mmHg and that the standard deviation of the error (SDE) is less than 8 mmHg, as expressed in Table 4. The SVREM was also compared with the conventional algorithms according to the British Hypertension Society (BHS) protocol [1], which evaluates the mean absolute error (MAE) in three groups: within 5 mmHg, within 10 mmHg, and within 15 mmHg. If at least 60% of the absolute errors of a BP measurement method are within 5 mmHg, 85% within 10 mmHg, and 95% within 15 mmHg, the method is classified as grade A. We then provide the results of the SVREM in terms of the CIs in Table 5. Finally, the results of the statistical analysis based on the Lilliefors test, used to evaluate the normality of the artificial BP estimates for each subject, are represented in Figure 1.
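For reference, the evaluation statistics can be computed as in the following Python sketch; the thresholds follow the AAMI and BHS descriptions above, and the function name and inputs are illustrative.

import numpy as np

def bp_evaluation(estimated, reference):
    """ME, SDE, MAE, and BHS cumulative percentages for BP estimates (mmHg)."""
    err = np.asarray(estimated, dtype=float) - np.asarray(reference, dtype=float)
    me  = err.mean()                        # mean error (AAMI: |ME| <= 5 mmHg)
    sde = err.std(ddof=1)                   # standard deviation of error (<= 8 mmHg)
    abs_err = np.abs(err)
    mae = abs_err.mean()
    # BHS grading: percentage of absolute errors within 5, 10, and 15 mmHg.
    within = {t: 100.0 * np.mean(abs_err <= t) for t in (5, 10, 15)}
    grade_a = within[5] >= 60 and within[10] >= 85 and within[15] >= 95
    return me, sde, mae, within, grade_a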

5. Discussion

We confirmed that the uncertainty, such as the bias of the artificial data, was small, and that the standard errors acquired using the bootstrap technique also had very small values; hence, the bootstrap technique served as an effective method for increasing the number of feature samples. Accordingly, the artificial features were found to be very close to the actual features, and the CIs of the artificial features covered both the artificial and the true features. Although the SVREM shows a slight loss of performance compared with the DNN ensemble technique [13], as expressed in Tables 3 and 4, we confirmed a significant reduction in running time. The mean absolute errors (MAEs) of the SVREM were 69.17% (≤5 mmHg), 87.06% (≤10 mmHg), and 95.53% (≤15 mmHg) for the SBP and 76.94% (≤5 mmHg), 94.12% (≤10 mmHg), and 98.12% (≤15 mmHg) for the DBP, as expressed in Table 4. Hence, the SVREM achieved grade A for both the SBP and the DBP. We found that the SVREM estimates are more accurate than the BP estimates of the conventional techniques, with the exception of the DNN ensemble. The accuracy of the estimates acquired from the SVREM was computed by comparison with the estimates acquired using the auscultatory (stethoscope) method according to the AAMI protocol [11] in terms of the ME and SDE. Note that the SDE of a device is more significant than the ME, because a device can have a small ME with a large SDE, in which case individual errors may be strongly positive or negative and thus very inaccurate. The performance improvement of the SVREM is important because the AAMI standard protocol is the recommended criterion for automated BP devices. The proposed SVREM satisfied the AAMI criteria, and these results show that the proposed SVREM provides more accurate BP estimates than the existing methods, except for the DNN method, as expressed in Table 4. The SDE acquired from the SVREM was 6.29 mmHg for the SBP and 5.33 mmHg for the DBP. These results indicate superior performance compared to the conventional algorithms. Consequently, we conclude that the SVREM decreases uncertainty, such as the SDE of the ME, and increases the reliability of the performance. The CIs for the SBP and DBP acquired with the SVREM were smaller than the CIs acquired with the conventional methods, as shown in Table 4. We also confirmed differences of 1.4 mmHg and 1.0 mmHg between the SVREM and the conventional method [9] in the SDEs of the CIs for the SBP and DBP, the CI results acquired from the SVREM being smaller than those of the conventional method. The CIs of the artificial BP values were very small, which means that we successfully decreased the uncertainty in terms of random error, such as the SDE, and systematic error, such as the bias, based on the SVREM.

6. Conclusion

In conclusion, the proposed SVREM methodology increases the accuracy and stability of BP estimation and measures uncertainties such as the CI, the standard deviation of the error, and the bias for the SBP and DBP. We verified that the distribution of the artificial BP data converges to the Gaussian distribution and identified the similarities between the real data and the artificial data. The Lilliefors test was conducted to investigate the normality of the artificial BP measurements for each subject. We will pursue additional nonnormality testing for new subject populations in our future work.

Data Availability

The data are not available for the following reason: the BP data used in this paper were received from the University of Ottawa, and without the consent of the University of Ottawa, the authors cannot decide whether the data can be shared.

Ethical Approval

The study was approved by the ethics committee, and each subject gave consent for the BP measurements.

Conflicts of Interest

The authors declare that they have no conflict of interest.

Authors’ Contributions

S. L. is responsible for the conceptualization of this paper, S. L. and G. L. are responsible for the software, S. L. wrote the original draft, and G. L. performed the review and editing.

Acknowledgments

The authors would like to thank professors M. Bolic, H. Dajani, and V. Groza of the University of Ottawa for providing the BP data and S. Rajan at Carleton University for his valuable advice. This work was supported by the new professor research fund of Sejong University in 2019.