Significance: Speckle noise limits the diagnostic capabilities of optical coherence tomography (OCT) images, causing both a reduction in contrast and a less accurate assessment of the microstructural morphology of the tissue. Aim: We present a speckle-noise reduction method for OCT volumes that exploits the advantages of adaptive-noise wavelet thresholding with a wavelet compounding method applied to several frames acquired from consecutive positions. The method takes advantage of the wavelet representation of the speckle statistics, calculated properly from a homogeneous sample or a region of the noisy volume. Approach: The proposed method was first compared quantitatively with different state-of-the-art approaches by being applied to three different clinical dermatological OCT volumes with three different OCT settings. The method was also applied to a public retinal spectral-domain OCT dataset to demonstrate its applicability to different imaging modalities. Results: The results based on four different metrics demonstrate that the proposed method achieved the best performance among the tested techniques in suppressing noise and preserving structural information. Conclusions: The proposed OCT denoising technique has the potential to adapt to different OCT image settings and noise environments and to improve image quality prior to clinical diagnosis based on visual assessment.
1. Introduction

Optical coherence tomography (OCT)1 is an optical imaging technique that provides cross-sectional views of in vivo tissue in real time, with micrometer resolution and at depths of up to two millimeters. OCT is widely used in a variety of biomedical and clinical fields, such as ophthalmology (as a routine noninvasive tool for the diagnosis and monitoring of disease progression)2 and cardiology (as a catheter-based imaging system during coronary intervention).3 In dermatology, OCT has been used to study a variety of dermatological disorders.4,5 Its most significant dermatological applications are in assessing non-melanoma skin cancers such as basal cell carcinoma (BCC) and in reducing the need for diagnostic biopsies.6–8 Because OCT uses spatially coherent illumination, its images are affected by the speckle phenomenon, which has a dual role as both a source of noise and a carrier of information.9,10 Speckle carrying signal information results from the backscattering of the incident photons, whereas speckle noise is caused by the random interference between multiple reflected photons arriving from multiple directions. Speckle noise gives a grainy appearance to OCT images, which degrades the signal-to-noise ratio (SNR) and limits the accuracy of their interpretation. The speckle properties are affected by the scale representation, the optical settings, and the scattering properties of the biological tissue.11 Speckle denoising has been an active and widespread research field in recent years. There are two main approaches: those that modify the image acquisition (where the optical setup or the scanning protocol is adjusted) and those that postprocess the images via digital algorithms.
In the first group, compounding techniques follow three main approaches: using the frequency,12 using the angle of the incident light source,13 and using multiple A-lines collected in a controlled way.14 Other studies have proposed methods that combine image processing with modifications to the optical configuration, with the aim of increasing the SNR even further.15 To compensate for possible movement during in vivo imaging, two strategies for reducing motion artifacts have been proposed: using a weighted average of individual A-scans16 and combining the averaging of B-scans (from a high-speed CMOS line-scan camera) with a cross correlation of different frames against a fixed reference. One recent report suggests that SNR improvements via pure angular compounding techniques will be limited by optical aberrations.17 To overcome these limitations, a recent study proposed combining angular compounding with geometric image registration and digital focusing.18 The main advantage of digital speckle-reduction techniques is that they can be applied to almost all two-dimensional (2D) and three-dimensional (3D) images acquired by an OCT device without changing the acquisition setup. However, they usually increase the computational cost and can affect the resolution of the image.
These techniques can involve the combined use of several methods, such as local averaging over neighboring A-scans of each tomogram,19 averaging multiple B-scans,16 applying rotation kernel transformations to each tomogram,20 image regularization,21 complex diffusion filtering,22 curvelet transforms,23 applying sparse representations or low-rank models using patches in the images,24–27 autoencoding based on a neural network that learns from a figure of merit,28 digital filtering of clusters of pixels with similar optical properties,29 and adaptive nonlinear diffusion filtering.30 State-of-the-art general-purpose denoising filters such as probability-based non-local-means (PNLM) and block-matching 3D filtering have also been adapted successfully for the removal of noise from OCT images.31–33 Recently, approaches based on deep learning, such as the denoising convolutional neural network (DNCNN),34 have been proposed for natural image denoising and for speckle noise in OCT,35–39 showing the potential of these techniques in a variety of OCT image types. Algorithms based on filtering in the wavelet domain40 have shown excellent performance in speckle noise removal. One approach used in these methods is to filter the detail wavelet coefficients in multiple subbands to minimize the noise.
The calculation of an appropriate threshold and its application can be performed using spatially adaptive soft-thresholding with estimation of the noise in one subband.41,42 However, a recent study shows that speckle noise can have different magnitudes in different wavelet subbands and that estimating the noise variance at individual scales can improve both the characterization of the threshold and the speckle noise removal.43 Another successful strategy is the compounding of several frames in combination with digital filtering,44 more specifically with frames previously filtered using wavelet denoising.45 The use of volumetric data collected from consecutive B-scans has also been exploited by other methods23,46,47 as an additional source of information for use in speckle reduction.

Here, we present a method for speckle reduction of OCT images that effectively combines an adaptive noise strategy with volumetric wavelet compounding. The method first processes several frames acquired at consecutive locations, using a multiscale noise-adaptive wavelet filter, followed by an appropriately weighted computation for the compounding. Following Zaki et al.,43 we estimate the noise-variance wavelet representation in a homogeneous scattering sample and use it to filter the noise in all frames. To take advantage of the complementary information provided by several acquisitions, we used Mayer et al.'s45 compounding approach as a second step, applying a weighting method to select the detail coefficients of each subband before compounding all of the filtered frames. Our method also benefits from the fact that the wavelet representation of the speckle statistics, calculated properly from a homogeneous sample or a region of the noisy volume previously recorded, can be used to characterize the noise pattern of the system. These reference statistics are a valuable asset when differentiating information from speckle noise.
In addition, the use of several frames and the weighted wavelet compounding improves speckle removal and enhances the structural details, resulting in superior performance to that of other state-of-the-art methods. The rest of the paper is structured as follows. In Sec. 2, we describe in detail the proposed method, the OCT imaging systems used, the datasets, the performance metrics, and the quantitative evaluation process used in the assessment. In Sec. 3, we report the results and discussion of the parameter assessment and the quantitative evaluation of the algorithm. Finally, Sec. 4 contains the conclusions of the study.

2. Materials and Methods

2.1. Wavelet-Compounding Adaptive Noise Filter

Figure 1 shows a complete overview of the proposed wavelet-compounding adaptive noise filter (WCAN) method. The processing steps are wavelet transform, variance computation of the input and reference frames, adaptive variance compounding, weight computation, wavelet coefficient weighting, averaging, and inverse wavelet transform. The input to the algorithm is a set of N OCT frames and a set of M reference OCT frames (RF1, RF2, …, RFM), with both sets acquired at consecutive positions. The reference frames can come from a homogeneous scattering sample independent of the input frames or from a homogeneous region of the set of input frames. This reference set must be collected prior to the application of the method. The output of the algorithm is a single denoised OCT image. The terms "frame" and "image," on the one hand, and the terms "method," "filter," and "algorithm," on the other, are used interchangeably in this paper. In the following paragraphs, we explain each step of the method in detail.
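The processing chain just outlined can be illustrated with a deliberately simplified, single-level sketch in Python. Note that the published implementation is in Matlab; the function names, the single-level Haar shortcut, and the shrinkage rule below are illustrative stand-ins for the pipeline in Fig. 1, not the authors' exact Eqs. (3)–(10).

```python
import numpy as np

def haar_swt2(img):
    """One-level undecimated Haar split: approximation + H/V/D detail subbands."""
    sr = np.roll(img, -1, axis=0)       # row-shifted copy
    sc = np.roll(img, -1, axis=1)       # column-shifted copy
    src = np.roll(sr, -1, axis=1)       # both shifts
    a = (img + sr + sc + src) / 4.0     # approximation
    h = (img + sr - sc - src) / 4.0     # horizontal detail
    v = (img - sr + sc - src) / 4.0     # vertical detail
    d = (img - sr - sc + src) / 4.0     # diagonal detail
    return a, h, v, d                   # note: a + h + v + d reconstructs img

def wcan_sketch(frames, refs, k=1.0):
    """Toy WCAN flow: log transform, wavelet split, variance-guided shrinkage
    against a reference noise estimate, compounding, inverse transform."""
    ref_bands = [haar_swt2(np.log1p(r)) for r in refs]
    # reference noise variance per detail subband, pooled over all M ref frames
    ref_var = [np.var([b[i] for b in ref_bands]) for i in (1, 2, 3)]
    acc = np.zeros_like(frames[0], dtype=float)
    for f in frames:                    # N input frames from consecutive positions
        a, h, v, d = haar_swt2(np.log1p(f))
        out = a
        for coef, rv in zip((h, v, d), ref_var):
            thr = k * np.sqrt(rv)       # threshold scaled by reference noise
            w = np.minimum(1.0, np.abs(coef) / (thr + 1e-12))
            out = out + w * coef        # shrink noise-dominated coefficients
        acc += out                      # compounding: accumulate over frames
    return np.expm1(acc / len(frames))  # average, then undo the log transform
```

On constant input with a constant reference, all detail subbands vanish and the sketch returns the input unchanged; on real data, `k` plays the role of the thresholding adjustment parameter discussed in Sec. 3.1.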
2.1.1. OCT image noise

Speckle noise is usually modeled as multiplicative because of the multiple backscattering effects.48 In addition to speckle noise, OCT systems are also affected by noise from different sources, such as inherent laser intensity noise, photonic shot noise, and thermal noise from the electronics. These other types can be assumed to be additive white Gaussian noise.49 We define an OCT image as

X(i, j) = Y(i, j) · n_m(i, j) + n_a(i, j),  (1)

where Y is the noise-free image and n_m and n_a are the speckle noise and the Gaussian white noise, respectively. We can neglect the additive white noise because it is significantly small compared with the multiplicative noise.50 Thus, we model the OCT image as

X(i, j) = Y(i, j) · n_m(i, j).  (2)

To convert the multiplicative noise into additive noise, we consider that all of the OCT images are logarithmically transformed, following the approach in similar studies.50–54 This operation allows us to assume a near-additive noise model after the logarithmic transformation. We also assume that the speckle noise present in frames recorded at different consecutive positions is uncorrelated.45

2.1.2. Wavelet transform

All of the input images are decomposed by a wavelet transformation with a maximum decomposition level of L. The result of the transformation is a set of approximation coefficients A_{L,f} and detail coefficients D_{l,o,f}, where f is the frame number, l is the decomposition level, and o is the orientation or direction of the detail coefficient (horizontal, vertical, or diagonal). The wavelet transformation is computed using the 2D discrete stationary wavelet transform.40 Figure 2 shows a representation of a 2D wavelet decomposition of an image with three levels, where A_3 contains the approximation coefficients and D_{l,h}, D_{l,v}, and D_{l,d} are the detail coefficients of each subband for the horizontal, vertical, and diagonal orientations, respectively.

2.1.3. Variance computation

The next step is the calculation of the variance of the detail coefficients of all subbands of the input and reference frames.
For the computation of the variance of the detail coefficients of each subband, we use the following expressions:

σ²_{l,o,f} = (1 / K_{l,o}) Σ_{i=1}^{K_{l,o}} [D_{l,o,f}(i) − μ_{l,o,f}]²,  (3)

μ_{l,o,f} = (1 / K_{l,o}) Σ_{i=1}^{K_{l,o}} D_{l,o,f}(i),  (4)

where l is the subband level (between 1 and L), o is the orientation (horizontal, vertical, or diagonal) of the detail coefficient D, and i is an index ranging from 1 to K_{l,o} (the total number of coefficients with orientation o for the subband l). Equation (3) uses μ_{l,o,f}, the mean of all detail coefficients for the orientation o and subband l, calculated using Eq. (4). In the case of the reference frames, we calculate the variance, σ²_{l,o,R}, considering the coefficients of the same subband and orientation from all M reference frames together.

2.1.4. Adaptive variance compounding

We then calculate the adaptive compounding terms ρ_{l,o,f} [Eq. (5)], which average the ratios between the variance terms of the input frames and the reference frames, also including the detail coefficients in the variance of the frame f, where l is the subband, f is the frame number ranging from 1 to N (the total number of input frames; see Fig. 1), and o is the orientation of the detail coefficient (horizontal, vertical, or diagonal). The term D_{l,o,f} is the detail coefficient of the subband l, orientation o, and frame f. The terms σ²_{l,o,f} and σ²_{l,o,R} are the variances of the detail coefficients from the frame f and from the reference frames, respectively, calculated using Eq. (3).

2.1.5. Weight computation

The weights provide estimates of the noise contribution of each subband to every frame in relation to the reference noise and to the remaining frames. The goal is to reduce the detail coefficients in each subband with higher variance compared with the reference values and the remaining frames, assuming that noise will be the main cause of this higher contribution. The term τ_{l,o} in Eq. (6) is an adaptive variance threshold defined in a similar way to standard wavelet thresholding, but considering the variance of the reference frames in each subband.
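The subband statistics and reference-based thresholding just described can be sketched as follows. This is an illustrative Python fragment (the original code is Matlab), and the `k * sqrt(var)` form of `adaptive_threshold` is an assumption modeled on standard wavelet thresholding, not necessarily the paper's exact Eq. (6).

```python
import numpy as np

def subband_stats(D):
    """Mean and variance of the detail coefficients of one subband
    (analogues of Eqs. (4) and (3))."""
    mu = D.mean()                    # mean over all K coefficients
    var = np.mean((D - mu) ** 2)     # variance around that mean
    return mu, var

def reference_variance(ref_subbands):
    """Pool the same subband across all M reference frames before
    computing the variance, as done for the reference set."""
    pooled = np.concatenate([np.ravel(d) for d in ref_subbands])
    return subband_stats(pooled)[1]

def adaptive_threshold(ref_var, k=1.0):
    """Reference-noise-scaled threshold (hypothetical form of Eq. (6))."""
    return k * np.sqrt(ref_var)
```

For instance, `subband_stats(np.array([1., 2., 3., 4.]))` yields a mean of 2.5 and a variance of 1.25; a subband whose variance is much larger than the pooled reference variance is then treated as noise-dominated and shrunk accordingly.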
Equation (7) produces the final weights w_{l,o,f} for each subband l, frame f, and orientation o, comparing the value of the detail coefficients with the previous threshold value τ_{l,o}. The parameter k is then used to balance the final amount of noise reduction (in Sec. 3.1, we present more detail on the influence and utility of this parameter). The ratio ρ_{l,o,f} from Eq. (5) is used to reduce the weight of the detail coefficients lower than the threshold τ_{l,o}.

2.1.6. Wavelet coefficient weighting, averaging, and inverse wavelet transform

The new detail coefficients D̃_{l,o,f} of each subband l, orientation o, and initial frame f are computed using the previous weights for all positions i, via the following expression:

D̃_{l,o,f}(i) = w_{l,o,f}(i) · D_{l,o,f}(i).  (8)

The detail and the approximation coefficients of the denoised image (see Fig. 2) for each subband are calculated by averaging the coefficients of the N initial frames considered during the process:

D̂_{l,o}(i) = (1 / N) Σ_{f=1}^{N} D̃_{l,o,f}(i),  (9)

Â_L(i) = (1 / N) Σ_{f=1}^{N} A_{L,f}(i).  (10)

Finally, the denoised image is computed by applying the inverse wavelet transform to the averaged coefficients. The algorithm was implemented in Matlab 2018b (MathWorks, Inc., Natick, Massachusetts) on a personal computer (Intel 3.3 GHz CPU, 32 GB memory).

2.1.7. Data and OCT imaging systems

For the quantitative evaluation of the method, we used datasets representing four different environments, as specified in Table 1. Datasets Medical University of Vienna (MUW)-1, MUW-2, and MUW-3 together comprised 18 skin OCT volumes and were acquired via three different OCT imaging systems. MUW-1 and MUW-2 were custom designs from the Center for Medical Physics and Biomedical Engineering at the MUW. To assess the applicability of the method to a commercial device, the MUW-3 image dataset was acquired via a VivoSight OCT scanner (Michelson Diagnostics, Kent).
The A2ASDOCT dataset is a public dataset comprising 17 retinal volumes from the A2ASDOCT study.59 It was produced using a Bioptigen, Inc. (Research Triangle Park, North Carolina) spectral-domain OCT imaging system24 and involved 17 eyes from 17 subjects with and without nonneovascular age-related macular degeneration (AMD). Each set of images in this dataset comprises five frames from consecutive positions, each of which includes the fovea region. The main technical specifications of the imaging systems are summarized in Table 1. Table 2 gives additional details about the dermatological datasets (MUW-1, MUW-2, and MUW-3) and the retinal dataset (A2ASDOCT).

Table 1. Technical specifications of the OCT imaging systems used in the acquisition of the volumes in this study.
Table 2. Description of the datasets. The volumes of the three dermatological datasets were acquired and evaluated by experts from the Department of Dermatology at the MUW. The identified sample area (if available) is given in the fourth column. The results of the clinical assessments are given in the fifth column.
The method requires the use of a reference set (RF1, …, RFM in Fig. 1). The role of this reference is to characterize the speckle noise, assuming, as stated by Zaki et al.,43 that the variation in these uniform areas is determined by random noise. The calculation of the noise variance is based on Eq. (3) over a region of interest (ROI) selected from the reference frames. We used two strategies to calculate this variance. First, we used a homogeneous scattering phantom made of synthetic clay (Blu-Tak®; Bostik, Wauwatosa, Wisconsin). This approach was tested with the MUW-1 OCT imaging system and the MUW-3 VivoSight device. Using the MUW-1 system, we acquired 512 B-scans from consecutive positions. Using the VivoSight device, we acquired 120 B-scans from consecutive positions. In our second strategy, we selected an ROI in the same location (top or bottom) of the frames from both systems and computed the variances of the coefficients of four wavelet detail subbands. This involved selecting a homogeneous noisy ROI and computing the corresponding variance of the detail coefficients, considering all frames in the dataset. We selected the location of the ROI (top or bottom) so as to have enough data in all frames of the volume. We used this strategy as an alternative approach for the MUW-2 and A2ASDOCT datasets, partly because a phantom was unavailable and partly to test an alternative approach to noise estimation. Figure 3 shows examples of the acquisition of the phantom and the selection of the ROIs for both strategies, and Table 3 summarizes the reference sets used for each OCT imaging system. We applied the same reference set to all volumes of each dataset (MUW-1, MUW-2, MUW-3, and A2ASDOCT).

Table 3. Summary of the reference sets used in the application of the WCAN method for all of the datasets. M indicates the maximum number of frames in the reference set (RF1, RF2, …, RFM).
The ROI Dimensions column refers to the size of the white rectangles shown in Fig. 3.

2.2. Quantitative Evaluation

We evaluated the efficacy of the algorithm using common speckle-reduction performance metrics41,43,44 via a comparison with different state-of-the-art methods. These quantitative metrics were the SNR, the contrast-to-noise ratio (CNR), and the equivalent number of looks (ENL), as expressed below:

SNR = 10 log10(μ² / σ_n²),  (11)

CNR = (1 / R) Σ_{r=1}^{R} 10 log10[|μ_r − μ_b| / √(σ_r² + σ_b²)],  (12)

ENL = (1 / H) Σ_{h=1}^{H} μ_h² / σ_h².  (13)

SNR is defined in Eq. (11), where μ indicates the mean of the OCT image and σ_n² refers to the noise variance. In the definition of CNR [Eq. (12)], μ_b and σ_b² are the mean and variance for a background noise region, respectively, with μ_r and σ_r² being the mean and variance for each of the R ROIs, respectively, including homogeneous and heterogeneous regions. ENL is a measure of the smoothness of a homogeneous ROI. In Eq. (13), μ_h and σ_h² are the mean and variance of each of the H homogeneous ROIs, respectively. Except for the SNR calculations, the remaining parameters were computed from the logarithmic OCT images. To assess the overall influence of the quantitative metrics, we include two additional figures of merit (FOMs).44 The first is the sum of the normalized metrics, Norm(SNR) + Norm(CNR) + Norm(ENL), where Norm refers to the normalization of the metric; that is, the method that performed the best in, for example, the SNR criterion would have a Norm(SNR) equal to one. Therefore, a value of three for this FOM would indicate that the method performed the best in all image-quality metrics (SNR, CNR, and ENL). The second FOM is used to assess the robustness of the methods and to detect unbalanced combinations of SNR, CNR, and ENL. We compared the performance of the WCAN algorithm with respect to other state-of-the-art methods via two different approaches.
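The three metrics above can be computed along the following lines. This Python sketch uses one common formulation of SNR, CNR, and ENL from the OCT denoising literature and may differ in detail from the paper's exact Eqs. (11)–(13); the function names are illustrative.

```python
import numpy as np

def snr_db(img, noise_var):
    """SNR from the image mean and a noise-variance estimate, in dB."""
    return 10 * np.log10(img.mean() ** 2 / noise_var)

def cnr_db(rois, background):
    """Contrast-to-noise ratio averaged over the selected ROIs, in dB."""
    mu_b, var_b = background.mean(), background.var()
    return float(np.mean([
        10 * np.log10(abs(r.mean() - mu_b) / np.sqrt(r.var() + var_b))
        for r in rois]))

def enl(homog_rois):
    """Equivalent number of looks: smoothness of homogeneous ROIs."""
    return float(np.mean([r.mean() ** 2 / r.var() for r in homog_rois]))

def fom3(snr_v, cnr_v, enl_v, best):
    """Sum of metrics normalized by the best value in the comparison group;
    a value of 3 means best on all three metrics."""
    return snr_v / best[0] + cnr_v / best[1] + enl_v / best[2]
```

For example, a homogeneous ROI with values [2, 2, 4, 4] (mean 3, variance 1) yields an ENL of 9, and a method matching the group's best value on all three metrics scores a summed FOM of 3.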
First, we used 2D filters that have previously demonstrated excellent performance in OCT speckle reduction, such as PNLM,32 complex-wavelet-based K-SVD (KSVD),46 and noise-adaptive wavelet thresholding (NAWT),43 or in general image denoising (DNCNN).34 Next, we used two 3D filters: the wavelet multiframe algorithm (WVMF)45 and TNODE.47 Table 4 gives sources for implementations of these methods. Figure 4 shows the evaluation process for the various methods. All steps were performed over all frames in every dataset. The denoised OCT frames generated as outputs were compared using the SNR, CNR, and ENL performance metrics.

Table 4. State-of-the-art denoising software used in the assessment of the proposed method.
3. Results and Discussion

3.1. Parameter Assessment

Before applying the method to all datasets and comparing it with the state-of-the-art filters, we evaluated the impact of the two main parameters of the algorithm, i.e., k [the threshold parameter in Eq. (3)] and N (the number of frames to consider in the compounding process). We used Volume 2.1 (see Table 2) and the reference frames from Volume R.2 (Table 3) for this purpose in the two experiments outlined in Fig. 5. First, we varied the parameter k from 0.3 to 1.5 in steps of 0.1, with N set to 3. In the second experiment, we varied the parameter N from 2 to 6, with k set to 1.0. After applying the WCAN method in both cases, we calculated the quantitative metrics (SNR, CNR, and ENL) and the resulting FOM (the only FOM considered for the rest of this section). We then computed the difference in the FOM values for each enhanced frame for two consecutive values of k (such as k = 0.4 and 0.3) and of N (such as N = 3 and N = 2). This process generated a set of FOM-value differences for each pair of consecutive k values and a set for each pair of consecutive N values. Figure 6 gives the main results and some examples of the application of the WCAN method to Volume 2.1. We observe that the improvement in the quantitative metrics reaches a maximum between k = 0.8 and 0.9, with a mean improvement in FOM of 0.213, and any further increase in k decreases the incremental improvement. Note that this does not imply that k = 0.9 is the optimal value to use but that it is the value for which we have the greatest improvement with respect to the previous value of k. Although increasing the value of N always produces a quantitative improvement in the image, there are diminishing returns from increasing it further. We can observe the same trend for the parameter N: the maximum mean improvement of FOM occurs between two consecutive small values of N. The image examples in Fig. 6 show how increases in k and N produce better noise reduction in the resultant image.
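The consecutive-difference analysis just described reduces to a simple computation. In this illustrative Python fragment, the FOM values per parameter setting are hypothetical placeholders; in practice they would come from the metric computation over the enhanced frames.

```python
def sweep_gains(fom_by_param):
    """FOM improvement between each parameter value and its predecessor,
    e.g. for k = 0.3, 0.4, ..., 1.5 or N = 2, ..., 6."""
    params = sorted(fom_by_param)
    return {p1: fom_by_param[p1] - fom_by_param[p0]
            for p0, p1 in zip(params, params[1:])}

# hypothetical sweep: improvements shrink as the parameter grows
gains = sweep_gains({0.3: 1.00, 0.4: 1.40, 0.5: 1.55, 0.6: 1.60})
best_step = max(gains, key=gains.get)  # parameter with the largest incremental gain
```

The value of `best_step` identifies where the incremental improvement peaks, which is how the k = 0.8 to 0.9 transition was singled out above; it marks the point of diminishing returns rather than a global optimum.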
One of the main advantages of the proposed method is that it is easily adaptable to very different OCT settings by simply adjusting these two parameters. To find appropriate values of k and N for the quantitative assessment, we used the first volume in each dataset (Volume IDs 1.1, 2.1, 3.1, and 4.1 for the MUW-1, MUW-2, MUW-3, and A2ASDOCT datasets, respectively, in Table 2, and the reference frames with Volume IDs R.1, R.2, R.3, and R.4 in Table 3). Table 5 gives the final values chosen for each dataset, determined empirically as a suitable balance between performance-metric improvement, qualitative visual quality, and execution time.

Table 5. Parameters used in the assessment of the WCAN method for each dataset. N is the number of frames considered at consecutive positions in the compounding process. k is the adjustment parameter for the thresholding process. The number of decomposition levels, L, was set to four for all datasets.
We also evaluated the influence of the wavelet family on the calculation of the 2D discrete stationary wavelet transform. We tested the wavelet families Haar, Daubechies (db1–db10), Symlets (sym2–sym8), Coiflets (coif1–coif5), and BiorSplines (bior1.1, bior1.3, bior1.5, bior2.2, bior2.4, bior2.6, bior2.8, bior3.1, bior3.3, bior3.5, bior3.7, bior3.9, bior4.4, bior5.5, and bior6.8) using Volume ID 2.1 and the reference set R.2 of Table 3. Table 6 presents the results of the wavelet with the best performance in each family. As we can observe, the Haar wavelet shows the best performance and was the wavelet selected for the rest of the experiments.

Table 6. Quantitative evaluation of the wavelet families Haar, Daubechies, Symlets, Coiflets, and BiorSplines, considering Volume ID 2.1 and the reference set R.2 from Table 3. The results show the mean ± standard deviation of the improvement with respect to the initial metrics presented in Table S1 in the Supplementary Material (raw images).
3.2. Quantitative Evaluation

We now present the results of our quantitative evaluation of the application of the compared methods to the complete stack of frames for all four datasets in this study. In the evaluation, we used the same ROIs for all frames in the same volume to maintain a consistent reference (Fig. 7 shows one frame with the ROIs used for Volume 1.1). The detailed values of the metrics for all datasets are included in Tables S2–S7 in the Supplementary Material. In Tables 7–10 and Fig. 8, we present the aggregate results. Tables 7–10 show the mean ± standard deviation of the improvement with respect to the initial metrics presented in Table S1 in the Supplementary Material (raw images). The values corresponding to the best performance for the individual metrics and for the FOMs are shown in bold.

Table 7. Quantitative evaluation for the MUW-1 dataset, considering the results of the application of all filters to Volumes 1.1–1.5 (see Table 2) and the reference frames R.1 (see Table 3). The compounding was performed using three B-scans per position.
Table 8. Quantitative evaluation for the MUW-2 dataset, considering the results of the application of all filters to Volumes 2.1–2.5 (see Table 2) and the reference frames R.2 (see Table 3). The compounding was performed using three B-scans per position.
Table 9. Quantitative evaluation for the MUW-3 dataset, considering the results of the application of all filters to Volumes 3.1–3.8 (see Table 2) and the reference frames R.3 (see Table 3). The compounding was performed using two B-scans per position.
Table 10. Quantitative evaluation for the A2ASDOCT dataset, considering the results of the application of all filters to the 17 retinal sets (see Table 2) and the reference frame R.4 (see Table 3). The compounding was performed using four B-scans per position.
To facilitate comparison between metrics, we used the normalized value of each metric, calculated from the maximum values for all filtered images in each group of frames [see Fig. 4(b)]. We can then refer to metric values corresponding to the best filtered output for each group of frames to make fair comparisons across the volume. We should first note that the best results across all datasets were accomplished by WCAN, with the highest values for the summed FOM and standard deviations between the smallest (MUW-3) and the second smallest (MUW-1, MUW-2, and A2ASDOCT). The same holds for the robustness-oriented FOM, for which WCAN presents the highest values. The TNODE algorithm had the second-best performance across all datasets. WCAN was also the best option, in most cases, when considering each metric separately. It returned the best CNR and ENL results for all datasets (except for MUW-1, where it had the second-highest ENL). WCAN had the best SNR score for MUW-1, was second for MUW-2, and third for MUW-3 and A2ASDOCT. Another notable aspect of the WCAN results is its consistent performance with very different OCT image settings, image samples, and locations on the skin (see Table 2) and retina. As is clear from Figs. 8(a) and 8(b), other filters such as PNLM and KSVD perform very differently for the retina dataset (A2ASDOCT) than for the skin datasets (MUW-1, MUW-2, and MUW-3). Figure 9 shows the significant differences in the OCT images generated by applying the state-of-the-art methods to Volume 1.1 and the reference frames R.1. Videos S1–S4 (see Figs. 13–16 in the Appendix) show the results of the application of the method to frames of the datasets MUW-1, MUW-2, MUW-3, and A2ASDOCT. Simple averaging was also evaluated. We used three volumes from datasets MUW-1, MUW-2, and MUW-3, where we had enough frames to perform the experiment in each volume. In particular, we considered Volume IDs 1.1, 2.1, and 3.1 (Table 2) and the reference sets R.1, R.2, and R.3 of Table 3, respectively.
We applied the WCAN method to each volume using the parameters previously identified in Table 5. In particular, we applied WCAN to the first three frames of Volume IDs 1.1 and 2.1 with N = 3 and to the first two frames of Volume ID 3.1 with N = 2. Then, we averaged frames iteratively until the metric of the averaged frames exceeded that previously calculated for WCAN. The results showed that, to improve on WCAN, we needed to average 12, 27, and 13 frames for Volume IDs 1.1, 2.1, and 3.1, respectively. Figure 10 shows an analysis of some examples of the WCAN algorithm's performance with skin images corresponding to frames of the MUW-2 and MUW-3 datasets. The first sample (Volume 2.4) provides information about a superficial BCC on the head. In Figs. 10(a) and 10(b), speckle reduction using WCAN [Fig. 10(b)] enables a stronger identification of the skin layers (marked with digits 1–3) than does the raw image [Fig. 10(a)]. In addition, the stratum corneum layer and the epidermis–dermis interface are more clearly distinguished. The boundaries and contrast are also enhanced, as shown by comparing the area enclosed by the blue ellipse and the red arrows in the zoom-in boxes in the raw image [Fig. 10(a)] and after filtering [Fig. 10(b)]. The second example (Volume 3.8) shows a cheek with folliculitis. Figures 10(c) and 10(d) show a sharper difference between the regions indicated by the blue line and the red arrows and the structure indicated by the green arrow in the zoom-in boxes. Figure 11 enables qualitative comparisons for a retinal image from the A2ASDOCT database after applying each of the denoising methods in the study. It shows that the WCAN algorithm gives significant noise suppression, which enables clear identification of the boundaries between retinal layers. Finally, to assess the computation time for the various algorithms, we applied all methods of the study to 20 scaled versions of the first two frames of Volume ID 1.1.
For the WCAN method, we used the parameters from Table 5 and the reference frames R.1 from Table 3. The original volume comprised two frames. We used scales from 0.1 to 4 in steps of 0.2 (20 scales in total) to resize the frames. For each scale, we applied all of the methods and calculated the execution time using the timeit method of Matlab. Figure 12 presents the results. As shown in Fig. 12(a), the application of the WCAN method requires less execution time than the other pure compounding alternatives, such as TNODE and WVMF. The PNLM and NAWT methods presented the best performance among all of the algorithms tested [Fig. 12(b)]. Finally, to estimate the computational complexity of the algorithm, we applied a basic fitting to the performance measurements of WCAN, resulting in a linear polynomial fit with a norm of residuals equal to 0.699. We can therefore estimate that the time complexity of the method is linear. The tests were executed in Matlab 2018b (MathWorks, Inc.) on a personal computer (Intel 3.3 GHz CPU, 32 GB memory).

4. Conclusion

This paper has described a speckle-reduction method for OCT volumes. The method is based on a multiscale adaptive noise-wavelet-compounding strategy. The method requires the estimation of the noise variance in the wavelet domain of the OCT setting. In this study, we used previously acquired images of homogeneous scattering samples. The noise variance is used to compute the weights for the detail coefficients of each subband of each frame in the volume before compounding all of the filtered frames. One of the major highlights of our method, and a novel contribution in the field of wavelet denoising, is the introduction of a new compounding scheme that integrates the noise-variance influence of the input frames in relation to the variance of a set of reference frames.
With this new approach, visual inspection of the denoised frames shows the potential of the proposed algorithm to reduce speckle noise efficiently without generating relevant denoising artifacts or degrading the structures present in the image. The simplicity of the method, its ability to adapt to different environments (OCT settings and noise levels), its robustness across all metrics considered, and its reduced computational time are its most significant advantages. Another potential benefit is the capacity of the method to adjust the amount of noise reduction by modifying its tuning parameter, which may facilitate adaptation to different clinical needs. The main weakness of the method is the need for a prior acquisition of a homogeneous sample to determine the noise variance used in the adaptive compounding of the detail coefficients in the wavelet decomposition. Nevertheless, as described in the estimation of the noise in datasets MUW-2 and A2ASDOCT, this drawback can be avoided through the use of a homogeneous ROI of the input frames. Another line of improvement is the automatic assessment of the parameters of the method (the tuning parameter and the numbers of input and reference frames). However, this assessment is needed only once, when fine-tuning the algorithm for a particular OCT setting. A thorough evaluation was performed by applying the method to four different OCT settings, i.e., 18 skin volumes in three datasets and 17 retinal volumes in a fourth dataset. We compared our method’s results with those from six state-of-the-art algorithms. The results of a quantitative evaluation based on five different metrics demonstrated that the proposed method achieved the best performance among the tested techniques in suppressing noise.
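Estimating the noise variance from a homogeneous ROI of the input frames, as mentioned above, can be sketched as follows. This is a common wavelet-domain estimator (Donoho's robust median-absolute-deviation rule on the finest diagonal detail coefficients), offered as an illustrative assumption rather than the paper's exact estimator; `roi` is a hypothetical pair of slices selecting a structure-free region.

```python
import numpy as np

def noise_variance_from_roi(frame, roi):
    """Estimate wavelet-domain noise variance from a homogeneous ROI.

    `roi` is a (row_slice, col_slice) pair selecting a structure-free
    region with even dimensions. The finest-scale diagonal (HH) Haar
    detail of that region is, ideally, pure noise, so the robust MAD
    rule sigma = MAD / 0.6745 gives an estimate that is insensitive
    to stray outliers such as residual structure.
    """
    patch = frame[roi]
    # Finest-scale diagonal (HH) Haar detail coefficients of the patch.
    hh = (patch[0::2, 0::2] - patch[0::2, 1::2]
          - patch[1::2, 0::2] + patch[1::2, 1::2]) / 2.0
    sigma = np.median(np.abs(hh)) / 0.6745
    return sigma ** 2

# Demo: flat image with Gaussian noise of variance 9 (sigma = 3).
rng = np.random.default_rng(2)
img = 50.0 + rng.normal(0.0, 3.0, (128, 128))
var_hat = noise_variance_from_roi(img, (slice(0, 64), slice(0, 64)))
```

For white noise of variance sigma^2 the orthonormal HH coefficients also have variance sigma^2, so the estimate in the demo recovers a value close to 9.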
A qualitative visual comparison of images confirmed the relative performance of the proposed method and suggests the potential application of the proposed OCT denoising technique to improving image quality prior to clinical diagnosis based on visual assessment. Future work will involve an automatic process for assessing the parameters of the method (the tuning parameter and the numbers of input and reference frames) through the training of a machine learning model based on quantitative metrics. Furthermore, a formal qualitative evaluation will be performed by clinical experts addressing the specific needs of their clinical routine, such as the capacity of the method to preserve or enhance the identification of tissue layers and their borders and the ability to visualize structures or to identify vasculature.
Disclosures
The authors have no relevant financial interests in the paper and no other potential conflicts of interest to disclose.
Acknowledgments
This work was partly supported by the European Union FP7 project BiopsyPen (Grant No. 611132), the European Regional Development Fund, and the Spanish Ministry of Science, Innovation and Universities (Grant Nos. TEC2015-66978-R and RTI2018-098682-B-I00 MCIU/AEI/FEDER, EU).
Code, Data, and Materials Availability
We have made a Matlab implementation of the WCAN method available through Code Ocean, including the full code for the noise estimation and the denoising algorithm, together with raw volume data for testing purposes.
References
D. Huang et al.,
“Optical coherence tomography,” Science 254(5035), 1178–1181 (1991). https://doi.org/10.1126/science.1957169
G. J. Jaffe and J. Caprioli, “Optical coherence tomography to detect and manage retinal disease and glaucoma,” Am. J. Ophthalmol. 137(1), 156–169 (2004). https://doi.org/10.1016/S0002-9394(03)00792-X
H. G. Bezerra et al., “Intracoronary optical coherence tomography: a comprehensive review: clinical and research applications,” JACC: Cardiovasc. Interventions 2(11), 1035–1046 (2009). https://doi.org/10.1016/j.jcin.2009.06.019
T. Gambichler, A. Pljakic and L. Schmitz, “Recent advances in clinical application of optical coherence tomography of human skin,” Clin. Cosmet. Investig. Dermatol. 8, 345–354 (2015). https://doi.org/10.2147/CCID.S69119
J. Olsen, J. Holmes and G. B. E. Jemec, “Advances in optical coherence tomography in dermatology—a review,” J. Biomed. Opt. 23(4), 040901 (2018). https://doi.org/10.1117/1.JBO.23.4.040901
A. A. Hussain, L. Themstrup and G. B. E. Jemec, “Optical coherence tomography in the diagnosis of basal cell carcinoma,” Arch. Dermatol. Res. 307(1), 1–10 (2015). https://doi.org/10.1007/s00403-014-1498-y
S. A. Alawi et al., “Optical coherence tomography for presurgical margin assessment of non-melanoma skin cancer—a practical approach,” Exp. Dermatol. 22(8), 547–551 (2013). https://doi.org/10.1111/exd.12196
O. Markowitz et al., “Evaluation of optical coherence tomography as a means of identifying earlier stage basal cell carcinomas while reducing the use of diagnostic biopsy,” J. Clin. Aesthet. Dermatol. 8(10), 14–20 (2015).
J. M. Schmitt, S. H. Xiang and K. M. Yung, “Speckle in optical coherence tomography,” J. Biomed. Opt. 4(1), 95–105 (1999). https://doi.org/10.1117/1.429925
M. Bashkansky and J. Reintjes, “Statistics and reduction of speckle in optical coherence tomography,” Opt. Lett. 25(8), 545–547 (2000). https://doi.org/10.1364/OL.25.000545
P. Lee, W. Gao and X. Zhang, “Speckle properties of the logarithmically transformed signal in optical coherence tomography,” J. Opt. Soc. Am. A 28(4), 517–522 (2011). https://doi.org/10.1364/JOSAA.28.000517
M. Pircher et al., “Speckle reduction in optical coherence tomography by frequency compounding,” J. Biomed. Opt. 8(3), 565–570 (2003). https://doi.org/10.1117/1.1578087
A. E. Desjardins et al., “Speckle reduction in OCT using massively-parallel detection and frequency-domain ranging,” Opt. Express 14(11), 4736–4745 (2006). https://doi.org/10.1364/OE.14.004736
M. Szkulmowski et al., “Efficient reduction of speckle noise in optical coherence tomography,” Opt. Express 20(2), 1337–1359 (2012). https://doi.org/10.1364/OE.20.001337
L. Duan et al., “Single-shot speckle noise reduction by interleaved optical coherence tomography,” J. Biomed. Opt. 19(12), 120501 (2014). https://doi.org/10.1117/1.JBO.19.12.120501
B. Sander et al., “Enhanced optical coherence tomography imaging by multiple scan averaging,” Br. J. Ophthalmol. 89(2), 207–212 (2005). https://doi.org/10.1136/bjo.2004.045989
Y. Winetraub et al., “Upper limit for angular compounding speckle reduction,” Appl. Phys. Lett. 114(21), 211101 (2019). https://doi.org/10.1063/1.5088709
J. Zhao et al., “Angular compounding for speckle reduction in optical coherence tomography using geometric image registration algorithm and digital focusing,” Sci. Rep. 10(1), 1893 (2020). https://doi.org/10.1038/s41598-020-58454-0
M. Pircher et al., “Measurement and imaging of water concentration in human cornea with differential absorption optical coherence tomography,” Opt. Express 11(18), 2190–2197 (2003). https://doi.org/10.1364/OE.11.002190
J. Rogowska and M. E. Brezinski, “Evaluation of the adaptive speckle suppression filter for coronary optical coherence tomography imaging,” IEEE Trans. Med. Imaging 19(12), 1261–1266 (2000). https://doi.org/10.1109/42.897820
D. L. Marks, T. S. Ralston and S. A. Boppart, “Speckle reduction by I-divergence regularization in optical coherence tomography,” J. Opt. Soc. Am. A 22(11), 2366–2371 (2005). https://doi.org/10.1364/JOSAA.22.002366
D. C. Fernández, H. M. Salinas and C. A. Puliafito, “Automated detection of retinal layer structures on optical coherence tomography images,” Opt. Express 13(25), 10200–10216 (2005). https://doi.org/10.1364/OPEX.13.010200
Z. Jian et al., “Three-dimensional speckle suppression in optical coherence tomography based on the curvelet transform,” Opt. Express 18(2), 1024–1032 (2010). https://doi.org/10.1364/OE.18.001024
L. Fang et al., “Sparsity based denoising of spectral domain optical coherence tomography images,” Biomed. Opt. Express 3(5), 927–942 (2012). https://doi.org/10.1364/BOE.3.000927
H. Chen et al., “Speckle attenuation by adaptive singular value shrinking with generalized likelihood matching in optical coherence tomography,” J. Biomed. Opt. 23(3), 036014 (2018). https://doi.org/10.1117/1.JBO.23.3.036014
P. G. Daneshmand, A. Mehridehnavi and H. Rabbani, “Reconstruction of optical coherence tomography images using mixed low rank approximation and second order tensor based total variation method,” IEEE Trans. Med. Imaging 40(3), 865–878 (2021). https://doi.org/10.1109/TMI.2020.3040270
Z. Amini, H. Rabbani and I. Selesnick, “Sparse domain gaussianization for multi-variate statistical modeling of retinal OCT images,” IEEE Trans. Image Process. 29, 6873–6884 (2020). https://doi.org/10.1109/TIP.2020.2994454
S. Adabi et al., “A learnable despeckling framework for optical coherence tomography images,” J. Biomed. Opt. 23, 016013 (2018). https://doi.org/10.1117/1.JBO.23.1.016013
M. H. Eybposh et al., “Cluster-based filtering framework for speckle reduction in OCT images,” Biomed. Opt. Express 9(12), 6359–6373 (2018). https://doi.org/10.1364/BOE.9.006359
M. Gargesha et al., “Denoising and 4D visualization of OCT images,” Opt. Express 16(16), 12313–12333 (2008). https://doi.org/10.1364/OE.16.012313
B. Chong and Y.-K. Zhu, “Speckle reduction in optical coherence tomography images of human finger skin by wavelet modified BM3D filter,” Opt. Commun. 291(Suppl. C), 461–469 (2013). https://doi.org/10.1016/j.optcom.2012.10.053
H. Yu, J. Gao and A. Li, “Probability-based non-local means filter for speckle noise suppression in optical coherence tomography images,” Opt. Lett. 41(5), 994–997 (2016). https://doi.org/10.1364/OL.41.000994
J. Aum, J. Kim and J. Jeong, “Effective speckle noise suppression in optical coherence tomography images using nonlocal means denoising filter with double Gaussian anisotropic kernels,” Appl. Opt. 54(13), D43–D50 (2015). https://doi.org/10.1364/AO.54.000D43
K. Zhang et al., “Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising,” IEEE Trans. Image Process. 26(7), 3142–3155 (2017). https://doi.org/10.1109/TIP.2017.2662206
N. Cai et al., “A ResNet-based universal method for speckle reduction in optical coherence tomography images,” in Proc. IEEE Int. Symp. Biomed. Imaging (ISBI) (2018).
Y. Ma et al., “Speckle noise reduction in optical coherence tomography images based on edge-sensitive cGAN,” Biomed. Opt. Express 9(11), 5129–5146 (2018). https://doi.org/10.1364/BOE.9.005129
Y. Huang et al., “Simultaneous denoising and super-resolution of optical coherence tomography images based on generative adversarial network,” Opt. Express 27(9), 12289–12307 (2019). https://doi.org/10.1364/OE.27.012289
B. Qiu et al., “Noise reduction in optical coherence tomography images using a deep neural network with perceptually-sensitive loss function,” Biomed. Opt. Express 11(2), 817–830 (2020). https://doi.org/10.1364/BOE.379551
N. A. Kande et al., “SiameseGAN: a generative model for denoising of spectral domain optical coherence tomography images,” IEEE Trans. Med. Imaging 40(1), 180–192 (2021). https://doi.org/10.1109/TMI.2020.3024097
J.-L. Starck, F. D. Murtagh and A. Bijaoui, Image Processing and Data Analysis: The Multiscale Approach, Cambridge University Press (1998).
D. C. Adler, T. H. Ko and J. G. Fujimoto, “Speckle reduction in optical coherence tomography images by use of a spatially adaptive wavelet filter,” Opt. Lett. 29(24), 2878–2880 (2004). https://doi.org/10.1364/OL.29.002878
S. Chitchian, M. A. Fiddy and N. M. Fried, “Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform,” J. Biomed. Opt. 14(1), 014031 (2009). https://doi.org/10.1117/1.3081543
F. Zaki et al., “Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography,” Biomed. Opt. Express 8(5), 2720–2731 (2017). https://doi.org/10.1364/BOE.8.002720
A. Ozcan et al., “Speckle reduction in optical coherence tomography images using digital filtering,” J. Opt. Soc. Am. A 24(7), 1901–1910 (2007). https://doi.org/10.1364/JOSAA.24.001901
M. A. Mayer et al., “Wavelet denoising of multiframe optical coherence tomography data,” Biomed. Opt. Express 3(3), 572–589 (2012). https://doi.org/10.1364/BOE.3.000572
R. Kafieh, H. Rabbani and I. Selesnick, “Three dimensional data-driven multi scale atomic representation of optical coherence tomography,” IEEE Trans. Med. Imaging 34(5), 1042–1062 (2015). https://doi.org/10.1109/TMI.2014.2374354
C. Cuartas-Vélez et al., “Volumetric non-local-means based speckle reduction for optical coherence tomography,” Biomed. Opt. Express 9(7), 3354–3372 (2018). https://doi.org/10.1364/BOE.9.003354
J. M. Schmitt and A. Knüttel, “Model of optical coherence tomography of heterogeneous tissue,” J. Opt. Soc. Am. A 14(6), 1231–1242 (1997). https://doi.org/10.1364/JOSAA.14.001231
X. Wang et al., “A two-step iteration mechanism for speckle reduction in optical coherence tomography,” Biomed. Signal Process. Control 43, 86–95 (2018). https://doi.org/10.1016/j.bspc.2018.02.011
J. Duan et al., “Denoising optical coherence tomography using second order total generalized variation decomposition,” Biomed. Signal Process. Control 24, 120–127 (2016). https://doi.org/10.1016/j.bspc.2015.09.012
J. Xu et al., “Wavelet domain compounding for speckle reduction in optical coherence tomography,” J. Biomed. Opt. 18(9), 096002 (2013). https://doi.org/10.1117/1.JBO.18.9.096002
X. Li, S. Liang and J. Zhang, “Acceleration of OCT signal processing with lookup table method for logarithmic transformation,” Appl. Sci. 9(7), 1278 (2019). https://doi.org/10.3390/app9071278
P. S. Hiremath, P. T. Akkasaligar and S. Badiger, “Speckle noise reduction in medical ultrasound images,” in Advancements and Breakthroughs in Ultrasound Imaging, IntechOpen (2013).
S. Gai et al., “Speckle noise reduction in medical ultrasound image using monogenic wavelet and Laplace mixture distribution,” Digital Signal Process. 72, 192–207 (2018). https://doi.org/10.1016/j.dsp.2017.10.006
Z. Chen et al., “Phase-stable swept source OCT angiography in human skin using an akinetic source,” Biomed. Opt. Express 7(8), 3032–3048 (2016). https://doi.org/10.1364/BOE.7.003032
Z. Chen et al., “Non-invasive multimodal optical coherence and photoacoustic tomography for human skin imaging,” Sci. Rep. 7(1), 17975 (2017). https://doi.org/10.1038/s41598-017-18331-9
A. Aneesh et al., “Multispectral in vivo three-dimensional optical coherence tomography of human skin,” J. Biomed. Opt. 15(2), 026025 (2010). https://doi.org/10.1117/1.3400665
A. A. Khanifar and S. Farsiu, “Spectral-domain OCT in practice,” (2008). https://retinatoday.com/articles/2008-may/0508_10-php
“AREDS 2 Ancillary Spectral Domain Optical Coherence Tomography Study (A2ASDOCT),” https://clinicaltrials.gov/ct2/show/NCT00734487
Biography
Juan J. Gómez-Valverde is an assistant professor at the Universidad Politécnica de Madrid (UPM) and a researcher in the Biomedical Research Networking Center in Bioengineering, Biomaterials, and Nanomedicine. He graduated in telecommunication engineering and completed a master’s program in biomedical engineering at the UPM. He obtained his PhD with international mention from UPM. His main research interests are related to screening for diseases, biomedical image processing, artificial intelligence, and optical image analysis.

Zhe Chen received his PhD from the Medical University of Vienna in 2020. His research areas include dual-modality optical coherence tomography angiography (OCTA) and photoacoustic tomography (PAT) imaging techniques. He currently works on the optical and architectural design of laser beam scanners used in augmented reality.

Andrés Santos received his MEng degree and PhD from UPM, where he is currently a professor and the director of the Biomedical Image Technologies lab. His main research areas are biomedical image analysis and acquisition. He has coauthored over 90 papers in indexed journals on medical imaging and has coordinated more than 50 research projects and technology transfer contracts. He has been co-director of a master’s program on biomedical technology and instrumentation.

Wolfgang Drexler has been a full professor and head of the Center for Medical Physics and Biomedical Engineering since 2009, after having been a full professor of biomedical imaging at Cardiff University, UK, from 2006 to 2009. He also spent two years at MIT, Cambridge, from 1998 to 1999. He received the Austrian START Award in 2001 and the COGAN Award in 2007. His h-index is 72 (Scopus), and his research grant income since 2000 is €17 million.

María J. Ledesma-Carbayo received her MEng degree in telecommunication engineering and her PhD from UPM, specializing in medical image analysis.
She additionally completed two master’s programs in biomedical engineering and medical physics. She is currently a full professor at UPM. She has authored over 150 publications in indexed journals and conferences. Her research interests include cardiac and pulmonary imaging, image-guided therapy, microscopy image analysis, registration, motion estimation, and compensation.