Adaptive compounding speckle-noise-reduction filter for optical coherence tomography images

Open Access | 17 June 2021
Abstract

Significance: Speckle noise limits the diagnostic capabilities of optical coherence tomography (OCT) images, causing both a reduction in contrast and a less accurate assessment of the microstructural morphology of the tissue.

Aim: We present a speckle-noise reduction method for OCT volumes that exploits the advantages of adaptive-noise wavelet thresholding with a wavelet compounding method applied to several frames acquired from consecutive positions. The method takes advantage of the wavelet representation of the speckle statistics, calculated properly from a homogeneous sample or a region of the noisy volume.

Approach: The proposed method was first compared quantitatively with different state-of-the-art approaches by applying it to three clinical dermatological OCT datasets acquired with three different OCT settings. The method was also applied to a public retinal spectral-domain OCT dataset to demonstrate its applicability to different imaging modalities.

Results: The results based on four different metrics demonstrate that the proposed method achieved the best performance among the tested techniques in suppressing noise and preserving structural information.

Conclusions: The proposed OCT denoising technique has the potential to adapt to different image OCT settings and noise environments and to improve image quality prior to clinical diagnosis based on visual assessment.

1. Introduction

Optical coherence tomography (OCT)1 is an optical imaging technique that allows cross-sectional views of in vivo tissue in real time with micrometer resolution and at depths of up to two millimeters. OCT is widely used in a variety of biomedical and clinical fields, such as ophthalmology (as a routine noninvasive tool for the diagnosis and monitoring of disease progression)2 and cardiology (as a catheter-based imaging system during coronary intervention).3 In dermatology, OCT has been used to study a variety of dermatological disorders.4,5 The most significant dermatological applications of OCT are in assessing non-melanoma skin cancers such as basal cell carcinoma (BCC) and in reducing the need for diagnostic biopsies.6–8

Because OCT uses spatially coherent illumination, its images are affected by the speckle phenomenon, which has a dual role as both a source of noise and a carrier of information.9,10 Speckle carrying signal information is the result of the backscattering of the incident photons, whereas speckle noise is caused by the random interference between multiple reflected photons coming from multiple directions. Speckle noise gives a grainy appearance to OCT images, which degrades the signal-to-noise ratio (SNR) and limits the accuracy of their interpretation. The speckle properties are affected by the scale representation, the optical settings, and the scattering properties of the biological tissue.11

Speckle denoising has been an active and broad research field in recent years. There are two main approaches: those that modify the image acquisition (where the optical setting or the scanning protocol can be adjusted) and those that postprocess the images via digital algorithms. In the first group, there are three main compounding techniques: compounding across frequencies,12 across angles of the incident light source,13 and across multiple A-lines collected in a controlled way.14 Other studies have proposed methods for combining image processing and modifications to the optical configuration, with the aim of increasing the SNR even further.15 To compensate for the effect of possible movement in the case of in vivo imaging, two strategies for reducing motion artifacts have been proposed: using a weighted average of individual A-scans16 and using a combination of the average of B-scans (from a high-speed CMOS line-scan camera) and a cross correlation of different frames with respect to a fixed reference. One recent report suggests that SNR improvements via pure angular compounding techniques are limited by optical aberrations.17 To overcome these limitations, a recent study proposed the combination of angular compounding with geometric image registration and digital focusing.18

The main advantage of digital speckle-reduction techniques is that they can be applied to almost all two-dimensional (2D) and three-dimensional (3D) images acquired by an OCT device without changing the acquisition setup. However, they usually add to the computation requirements and can affect the resolution of the image. These techniques can involve the combined use of several methods, such as local averaging over neighboring A-scans of each tomogram,19 averaging multiple B-scans,16 applying rotation kernel transformations to each tomogram,20 image regularization,21 complex diffusion filtering,22 curvelet transforms,23 applying sparse representations or low-rank models using patches in the images,24–27 autoencoding based on a neural network that learns from a figure of merit,28 digitally filtering clusters of pixels with similar optical properties,29 and adaptive nonlinear diffusion filtering.30 State-of-the-art general-purpose denoising filters such as probability-based non-local-means (PNLM) and block-matching 3D filtering have also been adapted successfully for the removal of noise from OCT images.31–33 Recently, approaches based on deep learning, such as the denoising convolutional neural network (DNCNN),34 have been proposed for natural image denoising and for speckle noise in OCT,35–39 showing the potential of these techniques in a variety of OCT image types.

Algorithms based on filtering in the wavelet domain40 have shown excellent performance in speckle noise removal. One approach used in these methods is to filter the detail wavelet coefficients in multiple subbands to minimize the noise. The calculation of an appropriate threshold and its application can be performed using spatially adaptive soft-thresholding with estimation of the noise in one subband.41,42 However, a recent study shows that speckle noise can have different magnitudes for different wavelet subbands and that estimation of the noise variance at individual scales can improve the characterization of the threshold and can improve the speckle noise removal.43 Another successful strategy is the compounding of several frames in combination with digital filtering,44 more specifically with frames previously filtered using wavelet denoising.45 The use of volumetric data collected from consecutive B-scans has also been exploited by other methods23,46,47 as an additional source of information for use in speckle reduction.

Here, we present a method for speckle reduction of OCT images that effectively combines an adaptive noise strategy with volumetric wavelet compounding. The method first processes several frames acquired at consecutive locations, using a multiscale noise adaptive wavelet filter, followed by an appropriately weighted computation for the compounding. Following Zaki et al.,43 we estimate the noise-variance wavelet representation in a homogeneous scattering sample and use it to filter the noise in all frames. To take advantage of complementary information brought from several acquisitions, we used Mayer et al.’s45 compounding approach as a second step, applying a weighting method to select the detail coefficients of each subband before compounding all of the filtered frames. Our method also benefits from the fact that the wavelet representation of the speckle statistics, calculated properly from a homogeneous sample or a region of the noisy volume previously recorded, can be used to characterize the noise pattern of the system. These reference statistics are a valuable asset when differentiating information from speckle noise. In addition, the use of several frames and the wavelet weighted compounding improves speckle removal and enhances the structural details, resulting in a superior performance to that of other state-of-the-art methods.

The rest of the paper is structured in several sections. In Sec. 2, we describe in detail the proposed method, the OCT imaging systems used, the datasets, the performance metrics, and the quantitative evaluation process used in the assessment. In Sec. 3, we report the results and discussion of the parameter assessment and the quantitative evaluation of the algorithm. Finally, Sec. 4 contains the conclusions of the study.

2. Materials and Methods

2.1. Wavelet-Compounding Adaptive Noise Filter

Figure 1 shows a complete overview of the proposed wavelet-compounding adaptive noise filter (WCAN) method. The processing steps are wavelet transform, variance computation of the input and reference frames, adaptive variance compounding, weight computation, wavelet coefficient weighting, averaging, and inverse wavelet transform. The input to the algorithm is a set of N OCT frames (IF1, IF2, …, IFN) and a set of M reference OCT frames (RF1, RF2, …, RFM), with both sets acquired at consecutive positions. The reference frames could come from a homogeneous scattering sample independent of the input frames or from a homogeneous region of the set of input frames. This reference set should be collected as a prior step to the application of the method. The output of the algorithm is a single denoised OCT image. The terms "frame" and "image," on the one hand, and the terms "method," "filter," and "algorithm," on the other, are used interchangeably in this paper. In the following paragraphs, we explain each step of the method in detail.

Fig. 1

Flow diagram of the wavelet-compounding adaptive noise algorithm. The acquired OCT input frames IF1, IF2, …, IFN represent the N input noisy images for the algorithm. All images and the M reference OCT frames RF1, RF2, …, RFM are wavelet decomposed into L subbands. The variance of all subbands in the wavelet decomposition of the reference stack is computed. Using the subband variances, the weights of the detail coefficients for each subband are calculated based on the compounding of the previous variance computation and applied to each original detail coefficient. All denoised coefficients are then averaged in the wavelet domain before the inverse wavelet transform is calculated to obtain the final denoised OCT image, DOCT.


2.1.1. OCT image noise

Speckle noise is usually modeled as multiplicative because of the multiple backscattering effects.48 In addition to speckle noise, OCT systems are also affected by noise from different sources, such as inherent laser intensity noise, photon shot noise, and thermal noise from the electronics. These other types can be assumed to be additive white Gaussian noise.49 We define an OCT image $I^*$ as

Eq. (1)

$$I^* = N_s \cdot I + N_w,$$
where $I$ is the noise-free image and $N_s$ and $N_w$ are the speckle noise and the Gaussian white noise, respectively. We can neglect the additive white noise because it is significantly smaller than the multiplicative noise.50 Thus, we model the OCT image as

Eq. (2)

$$I^* \approx N_s \cdot I.$$

To convert the multiplicative noise into additive noise, we consider that all of the OCT images are logarithmically transformed, following the approach of similar studies.50–54 This operation allows us to assume a near-additive noise model after the logarithmic transformation. We also assume that speckle noise present in frames recorded at different consecutive positions is uncorrelated.45
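As a quick illustration of Eqs. (1) and (2), the following minimal NumPy sketch (our own, not the authors' Matlab implementation; the gamma-distributed speckle field is an arbitrary stand-in) shows how the logarithmic transformation turns the multiplicative model into an additive one:

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.uniform(0.1, 1.0, size=(64, 64))             # stand-in noise-free image
Ns = rng.gamma(shape=4.0, scale=0.25, size=I.shape)  # stand-in speckle field

I_star = Ns * I              # Eq. (2): multiplicative speckle model
log_I_star = np.log(I_star)  # log(Ns * I) = log(Ns) + log(I)

# In the log domain the speckle term is additive, which is the assumption
# used by the wavelet filtering steps described below.
assert np.allclose(log_I_star, np.log(Ns) + np.log(I))
```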

2.1.2. Wavelet transform

All of the input images are decomposed by a wavelet transformation with a maximum decomposition level of $L$. The result of the transformation is a set of approximation coefficients $A_i^l$ and detail coefficients $C_{i,O}^l$, where $i$ is the frame number, $l$ is the decomposition level, and $O$ is the orientation or direction of the detail coefficient (horizontal, vertical, or diagonal). The wavelet transformation is computed using the 2D discrete stationary wavelet transform.40 Figure 2 shows a representation of a 2D wavelet decomposition of an image with three levels, where $A$ contains the approximation coefficients and $C_{\mathrm{Horizontal}}$, $C_{\mathrm{Vertical}}$, and $C_{\mathrm{Diagonal}}$ are the detail coefficients of each subband for the horizontal, vertical, and diagonal orientations, respectively. A minimal code sketch of this decomposition is given after Fig. 2.

Fig. 2

Representation of a 2D wavelet decomposition of an image using three levels.

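As an illustration of the decomposition in Fig. 2, the sketch below uses the PyWavelets `swt2` routine (a minimal example, not the authors' Matlab implementation); each level keeps the full frame size because the stationary transform is undecimated:

```python
import numpy as np
import pywt

L = 3  # decomposition levels, as in Fig. 2
frame = np.random.default_rng(1).normal(size=(256, 256))  # dims must divide by 2**L

# Each element of `coeffs` holds (A, (C_H, C_V, C_D)) for one level;
# the level ordering follows the installed PyWavelets convention.
coeffs = pywt.swt2(frame, wavelet='haar', level=L)

for A, (cH, cV, cD) in coeffs:
    print(A.shape, cH.shape, cV.shape, cD.shape)  # all equal to frame.shape
```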

2.1.3. Variance computation

The next step is the calculation of the variance of the detail coefficients of all of the subbands of the input and reference frames. For the computation of the variance $\sigma_O^{l\,2}$ of the detail coefficients of each subband, we use the following expressions:

Eq. (3)

$$\sigma_O^{l\,2} = \frac{1}{X-1}\sum_{x=1}^{X}\left|C_O^l(x) - \mu_O^l\right|^2,$$

Eq. (4)

$$\mu_O^l = \frac{1}{X}\sum_{x=1}^{X} C_O^l(x),$$
where $l$ is the subband level (between 1 and $L$), $O$ is the orientation (horizontal, vertical, or diagonal) of the detail coefficient $C$, and $x$ is an index ranging from 1 to $X$ (the total number of coefficients with orientation $O$ for the subband $l$). Equation (3) uses $\mu_O^l$, the mean of all detail coefficients for the orientation $O$ and subband $l$, calculated using Eq. (4). In the case of the reference frames, we calculate the variance $\sigma_{O,R}^{l\,2}$ considering all $M$ frames together; $X$ in this case ranges over the coefficients of the given subband and orientation across all $M$ frames.
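In code, Eqs. (3) and (4) reduce to an ordinary unbiased variance over the coefficients of one subband; a minimal sketch (the helper name and the `ref_coeffs` container are our own, hypothetical) could be:

```python
import numpy as np

def subband_variance(C):
    """Eqs. (3) and (4): variance of the detail coefficients of one subband.
    C may stack the coefficients of several frames (e.g., all M reference
    frames) along an extra axis, as done for the reference variance."""
    mu = C.mean()                                      # Eq. (4)
    return np.sum(np.abs(C - mu) ** 2) / (C.size - 1)  # Eq. (3)

# Example for the reference variance of level l, orientation o (hypothetical
# container ref_coeffs[m][l][1][o] holding the swt2 output of frame m):
# sigma2_ref = subband_variance(np.stack([ref_coeffs[m][l][1][o] for m in range(M)]))
```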

2.1.4. Adaptive variance compounding

Then, we calculate the adaptive compounding terms $G_{O,i}^l$, which average the ratios of the variance terms of the input frames to those of the reference frames, also including the detail coefficients in the variance ratio of frame $i$, using the following expression:

Eq. (5)

$$G_{O,i}^{l}(x) = \frac{1}{N}\left( \left|C_{O,i}^{l}(x)\right| \frac{\sigma_{O,i}^{l\,2}}{\sigma_{O,R}^{l\,2}} + \sum_{\substack{j=1 \\ j \neq i}}^{N} \left( 1 - \frac{\sigma_{O,j}^{l\,2}}{\sigma_{O,R}^{l\,2}} \right) \right),$$
where $l$ is the subband, $i$ is the frame number ranging from 1 to $N$ (the total number of input frames; see Fig. 1), and $O$ is the orientation of the detail coefficient (horizontal, vertical, or diagonal). The term $C_{O,i}^l$ is the detail coefficient of the subband $l$, orientation $O$, and frame $i$. The terms $\sigma_{O,i}^{l\,2}$ and $\sigma_{O,R}^{l\,2}$ are the variances of the detail coefficients from the frame $i$ and from the reference frames, respectively, calculated using Eq. (3).
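A direct transcription of Eq. (5) as reconstructed above could read as follows (a hypothetical helper of our own; if the published form of the equation differs, the code should follow it):

```python
import numpy as np

def compounding_term(C_i, var_i, var_ref, var_others, N):
    """Sketch of Eq. (5). C_i: detail coefficients of subband l, orientation O,
    frame i; var_i, var_ref: subband variances of frame i and of the reference
    frames; var_others: variances of the remaining frames j != i."""
    ratio_i = var_i / var_ref
    rest = sum(1.0 - v / var_ref for v in var_others)  # sum over j != i
    return (np.abs(C_i) * ratio_i + rest) / N
```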

2.1.5. Weight computation

The weights provide estimates of the noise contribution of each subband to every frame in relation to the reference noise and to the remaining frames. The goal is to reduce the detail coefficients in each subband with higher variance compared with the reference values and the remaining frames, assuming that noise is the main cause of this higher contribution. The term in Eq. (6) is an adaptive variance threshold, defined in a similar way to standard wavelet thresholding but considering the variance of the reference frames in each subband.

Eq. (6)

$$T_{O,i}^{l} = \frac{\sigma_{O,R}^{l\,2}}{\sqrt{\left| \sigma_{O,i}^{l\,2} - \sigma_{O,R}^{l\,2} \right|}},$$

Eq. (7)

$$W_{O,i}^{l}(x) = \begin{cases} 1, & \left|C_{O,i}^{l}(x)\right| > k \cdot T_{O,i}^{l} \\ 1 - G_{O,i}^{l}(x), & \text{otherwise.} \end{cases}$$

Equation (7) produces the final weights $W_{O,i}^l$ for each subband $l$, frame $i$, and orientation $O$, comparing the value of the detail coefficients $C_{O,i}^l$ with the threshold value $T_{O,i}^l$. The parameter $k$ is used to balance the final amount of noise reduction (in Sec. 3.1, we present more detail on the influence and utility of this parameter). The ratio $G_{O,i}^l$ from Eq. (5) is used to reduce the weight of the detail coefficients lower than the threshold $T_{O,i}^l$.
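Continuing the sketch, Eqs. (6) and (7) translate into a per-coefficient threshold test (a hypothetical helper matching our reconstruction of the equations above; a small epsilon would guard the degenerate case where the frame and reference variances coincide):

```python
import numpy as np

def weights(C_i, var_i, var_ref, G_i, k):
    """Eqs. (6) and (7): coefficients above the adaptive threshold are kept
    unchanged (weight 1); the rest are shrunk by 1 - G."""
    T = var_ref / np.sqrt(np.abs(var_i - var_ref))         # Eq. (6)
    return np.where(np.abs(C_i) > k * T, 1.0, 1.0 - G_i)   # Eq. (7)
```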

2.1.6. Wavelet coefficient weighting, averaging, and inverse wavelet transform

The new detail coefficients $\tilde{C}_{O,i}^l$ of each subband $l$, orientation $O$, and initial frame $i$ are computed using the previous weights for all positions $x$, via the following expression:

Eq. (8)

$$\tilde{C}_{O,i}^{l}(x) = C_{O,i}^{l}(x) \cdot W_{O,i}^{l}(x).$$

The detail coefficients $C_O^l$ and the approximation coefficients $A^l$ of the denoised image (see Fig. 2) for each subband $l$ are calculated by averaging the coefficients of the $N$ initial frames considered during the process:

Eq. (9)

$$C_{O}^{l}(x) = \frac{1}{N}\sum_{i=1}^{N}\tilde{C}_{O,i}^{l}(x),$$

Eq. (10)

$$A^{l}(x) = \frac{1}{N}\sum_{i=1}^{N}A_{i}^{l}(x).$$

Finally, the denoised image is computed by applying the inverse wavelet transform over the averaged coefficients. The algorithm was implemented in Matlab 2018b (MathWorks, Inc., Natick, Massachusetts) on a personal computer (Intel 3.3 GHz CPU, 32 GB memory).
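The closing steps of the pipeline, Eqs. (8)–(10) plus the inverse transform, can be sketched as follows (again a minimal PyWavelets example with hypothetical container names: `coeffs[i]` is the `swt2` output of frame i and `W[i][l][o]` the matching weight array from the previous sketch):

```python
import pywt

def compound(coeffs, W, N, wavelet='haar'):
    out = []
    for l in range(len(coeffs[0])):
        # Eq. (10): average the approximation coefficients over the N frames
        A = sum(coeffs[i][l][0] for i in range(N)) / N
        # Eqs. (8) and (9): weight each detail subband, then average
        details = tuple(
            sum(coeffs[i][l][1][o] * W[i][l][o] for i in range(N)) / N
            for o in range(3)
        )
        out.append((A, details))
    # Inverse stationary wavelet transform yields the denoised image DOCT
    return pywt.iswt2(out, wavelet)
```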

2.1.7. Data and OCT imaging systems

For the quantitative evaluation of the method, we used datasets representing four different environments, as specified in Table 1. The Medical University of Vienna (MUW) datasets MUW-1, MUW-2, and MUW-3 together comprise 18 skin OCT volumes and were acquired via three different OCT imaging systems. MUW-1 and MUW-2 were custom designs from the Center for Medical Physics and Biomedical Engineering at the MUW. To assess the applicability of the method to a commercial device, the MUW-3 image dataset was acquired via a VivoSight OCT scanner (Michelson Diagnostics, Kent, UK). The A2ASDOCT dataset is a public dataset comprising 17 retinal volumes from the A2ASDOCT study.59 It was produced using a Bioptigen, Inc. (Research Triangle Park, North Carolina) spectral domain OCT imaging system24 and involved 17 eyes from 17 subjects with and without nonneovascular age-related macular degeneration (AMD). Each set of images in this dataset comprises five frames of 900×450 pixels from consecutive positions, each of which includes the fovea region. The main technical specifications of the imaging systems are summarized in Table 1. Table 2 gives additional details about the dermatological datasets (MUW-1, MUW-2, and MUW-3) and the retinal dataset (A2ASDOCT).

Table 1

Technical specifications of the OCT imaging systems used in the acquisition of the volumes in this study.

| Parameter | MUW-1 (Refs. 55, 56) | MUW-2 (Ref. 57) | MUW-3 (VivoSight) | A2ASDOCT (Bioptigen, Ref. 58) |
|---|---|---|---|---|
| System type | Swept-source frequency domain OCT | Spectral domain OCT | Multi-beam swept-source frequency domain OCT | Spectral domain OCT |
| Laser system | 1340 nm | 1320 nm | 1305 nm, Class 1 | 840 nm |
| A-line rate | 200 kHz | 47 kHz | 10 kHz | 17 kHz |
| Optical resolution | 19.5 μm lateral, 4.6 μm axial (air) | 15 μm lateral, 7 μm axial | <7.5 μm lateral, <5 μm axial (tissue) | 10 μm lateral, 4.5 μm axial (tissue) |
| Field of view | 10×10 mm | 7×3.5 mm | 6×6 mm | 6.7×6.7 mm |
| Imaging depth | 1.2 mm | 2 mm | Between 1.2 and 2 mm, tissue dependent | 2 mm |

Table 2

Description of the datasets. The volumes of the three dermatological datasets were acquired and evaluated by experts from the Department of Dermatology at the MUW. The identified sample area (if available) is given in the fourth column. The results of the clinical assessments are given in the fifth column.

| Dataset | Size (pixels)/number of B-scans | Volume ID | Sample area | Clinical evaluation |
|---|---|---|---|---|
| MUW-1 | 1100×512/256 | 1.1 | Hand | Nevus araneus |
| | | 1.2 | Back | Postsurgical scar |
| | | 1.3 | Thigh | Normal skin |
| | | 1.4 | Head | BCC |
| | | 1.5 | Arm | BCC |
| MUW-2 | 1000×680/256 | 2.1 | Palm | Normal skin |
| | | 2.2 | | Pityriasis |
| | | 2.3 | Forearm | Normal skin |
| | | 2.4 | Head | BCC |
| | | 2.5 | Arm | Nevus |
| MUW-3 | 1558×460/50 | 3.1 | Nose | BCC |
| | | 3.2 | Chest | BCC |
| | | 3.3 | Chest | BCC |
| | | 3.4 | Palm | Normal skin |
| | 1298×460/50 | 3.5 | Fingertip | Normal skin |
| | 1038×460/50 | 3.6 | Back | Nevus |
| | | 3.7 | Abdomen | Angioma |
| | | 3.8 | Cheek | Folliculitis |
| A2ASDOCT | 900×450/5 | 4.1–4.17 | Fovea | 10 from normal subjects and 7 from AMD subjects |

The method requires the use of a reference set (RF1, RF2, …, RFM in Fig. 1). The role of this reference is to characterize the speckle noise, assuming, as stated by Zaki et al.,43 that the variation in these uniform areas is determined by random noise. The calculation of the noise variance is based on Eq. (3) over a region of interest (ROI) selected from the reference frames. We used two strategies to calculate this variance. First, we used a homogeneous scattering phantom made of synthetic clay (Blu-Tak®; Bostik, Wauwatosa, Wisconsin). This approach was tested with the MUW-1 OCT image system and the MUW-3 VivoSight device. Using the MUW-1 system, we acquired 512 B-scans with dimensions of 512×219 pixels from consecutive positions. Using the VivoSight device, we acquired 120 B-scans with dimensions of 1038×460 pixels from consecutive positions.

In our second strategy, we selected an ROI in the same location (top or bottom) of the frames from both systems and computed the variances of the coefficients of four wavelet detail subbands. This involved selecting a homogeneous noisy ROI and computing the corresponding variance of the detail coefficients, considering all frames in the dataset. We selected the location of the ROI (top or bottom) so as to have enough data in all frames of the volume. We used this strategy as an alternative approach for the MUW-2 and A2ASDOCT datasets, partly because a phantom was unavailable and partly to test an alternative approach to noise estimation. Figure 3 shows examples of the acquisition of the phantom and the selection of the ROIs for both strategies, and Table 3 summarizes the reference sets used for each OCT imaging system. We applied the same reference set to all volumes of each dataset (MUW-1, MUW-2, MUW-3, and A2ASDOCT).

Fig. 3

Examples of frames and ROIs (white rectangles) used to compute the noise variance required by the WCAN method. (a) First frame with size 512×219  pixels for the phantom acquired via the MUW-1 image system. (b) First frame with size 1000×680  pixels for Volume 2.1 (palm normal skin, see Table 2) acquired via the MUW-2 image system. (c) First frame with size 1038×460  pixels for the phantom acquired via the VivoSight device. (d) First frame with size 900×450  pixels for the first set in the A2ASDOCT dataset.

JBO_26_6_065001_f003.png

Table 3

Summary of the reference sets used in the application of the method WCAN for all of the datasets. M indicates the maximum number of frames in the reference set (RF1,RF2…RFM). The ROI Dimensions column refers to the size of the white rectangles shown in Fig. 3.

| Dataset | Volume ID | Strategy | M | ROI dimensions | Figure |
|---|---|---|---|---|---|
| MUW-1 | R.1 | ROI phantom | 512 | 63×478 | Fig. 3(a) |
| MUW-2 | R.2 | ROI Vol 2.1 | 256 | 112×993 | Fig. 3(b) |
| MUW-3 | R.3 | ROI phantom | 120 | 120×1009 | Fig. 3(c) |
| A2ASDOCT | R.4 | ROI all frames | 85 | 153×885 | Fig. 3(d) |

2.2. Quantitative Evaluation

We evaluated the efficacy of the algorithm using common speckle-reduction performance metrics41,43,44 via a comparison with different state-of-the-art methods. These quantitative metrics were the SNR, the contrast-to-noise ratio (CNR), and the equivalent number of looks (ENL), as expressed below.

Eq. (11)

$$\mathrm{SNR} = 10\log_{10}\!\left(\frac{\mu_I^2}{\sigma_I^2}\right),$$

Eq. (12)

$$\mathrm{CNR} = \frac{1}{R}\sum_{r=1}^{R}\frac{\mu_r - \mu_b}{\sqrt{\sigma_r^2 + \sigma_b^2}},$$

Eq. (13)

$$\mathrm{ENL} = \frac{1}{H}\sum_{h=1}^{H}\frac{\mu_h^2}{\sigma_h^2}.$$

SNR is defined in Eq. (11), where $\mu_I$ indicates the mean of the OCT image and $\sigma_I^2$ refers to the noise variance. In the definition of CNR [Eq. (12)], $\mu_b$ and $\sigma_b^2$ are the mean and variance of a background noise region, respectively, with $\mu_r$ and $\sigma_r^2$ being the mean and variance of each of the $R$ ROIs, which include homogeneous and heterogeneous regions. ENL is a measure of the smoothness of a homogeneous ROI. In Eq. (13), $\mu_h$ and $\sigma_h^2$ are the mean and variance of each of the $H$ homogeneous ROIs, respectively. Except for the SNR calculations, the remaining parameters were computed from the logarithmic OCT images.
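The three metrics are straightforward to compute from the selected regions; a minimal sketch with our own helper names (ROI selection follows Fig. 7, and the CNR/ENL inputs are the log-transformed images as stated above) is:

```python
import numpy as np

def snr(img, noise_roi):
    """Eq. (11): image mean vs. variance of a background noise region."""
    return 10 * np.log10(img.mean() ** 2 / noise_roi.var())

def cnr(rois, background):
    """Eq. (12): averaged over all R homogeneous and heterogeneous ROIs."""
    mu_b, var_b = background.mean(), background.var()
    return np.mean([(r.mean() - mu_b) / np.sqrt(r.var() + var_b) for r in rois])

def enl(homog_rois):
    """Eq. (13): smoothness over the H homogeneous ROIs."""
    return np.mean([r.mean() ** 2 / r.var() for r in homog_rois])
```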

To assess the overall influence of the quantitative metrics, we include two additional figures of merit (FOMs):44

Eq. (14)

$$\mathrm{FOM}_{\mathrm{SUM}} = \mathrm{SNR}_{\mathrm{Norm}} + \mathrm{CNR}_{\mathrm{Norm}} + \mathrm{ENL}_{\mathrm{Norm}},$$

Eq. (15)

$$\mathrm{FOM}_{\mathrm{MIN}} = \min\left(\mathrm{SNR}_{\mathrm{Norm}},\, \mathrm{CNR}_{\mathrm{Norm}},\, \mathrm{ENL}_{\mathrm{Norm}}\right),$$
where Norm refers to the normalization of the metric; that is, the method that performed best in, for example, the SNR criterion would have an $\mathrm{SNR}_{\mathrm{Norm}}$ equal to one. Therefore, an $\mathrm{FOM}_{\mathrm{SUM}}$ of three would indicate that the method performed the best in all image-quality metrics (SNR, CNR, and ENL). The metric $\mathrm{FOM}_{\mathrm{MIN}}$ is used to assess the robustness of the methods and to detect unbalanced combinations of SNR, CNR, and ENL.
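The normalization and the two FOMs can then be computed per group of frames, for example as below (the `scores` container, mapping each method to its (SNR, CNR, ENL) triple, is a hypothetical stand-in):

```python
import numpy as np

def foms(scores):
    """Eqs. (14) and (15) over one group of frames. Each metric is normalized
    by its maximum over all methods, so the best method scores 1.0."""
    arr = np.asarray(list(scores.values()), dtype=float)
    norm = arr / arr.max(axis=0)  # SNR_Norm, CNR_Norm, ENL_Norm per method
    return {m: (row.sum(), row.min()) for m, row in zip(scores, norm)}
```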

We compared the performance of the WCAN algorithm with that of other state-of-the-art methods via two different approaches. First, we used 2D filters that have previously demonstrated excellent performance either in OCT speckle reduction, such as PNLM,32 complex-wavelet-based K-SVD (KSVD),46 and noise-adaptive wavelet thresholding (NAWT),43 or in general image denoising (DNCNN).34 Next, we used two 3D filters: the wavelet multiframe algorithm (WVMF)45 and TNODE.47 Table 4 gives sources for implementations of these methods. Figure 4 shows the process of evaluation for the various methods. All steps were performed over all frames in every dataset. The denoised OCT frames generated as outputs were compared using the SNR, CNR, and ENL performance metrics.

Table 4

State-of-the-art denoising software used in the assessment of the proposed method.

| Year | Denoising method | Official website |
|---|---|---|
| 2012 | WVMF45 | Image denoising algorithms archive |
| 2015 | KSVD46 | State-of-the-art method for OCT denoising |
| 2016 | PNLM32 | PNLM: a probability-based non-local means filter for speckle noise suppression in optical coherence tomography images |
| 2017 | DNCNN34 | Beyond Gaussian denoiser: residual learning of deep CNN for image denoising |
| 2017 | NAWT43 | Implemented following the authors' manuscript |
| 2018 | TNODE47 | Volumetric non-local-means based speckle reduction for optical coherence tomography |

Fig. 4

(a) Process used for the evaluation of all methods with all datasets. Each volume in the study was divided into blocks of N frames. For each block, all filters were applied and the evaluation metrics were computed. Finally, all blocks were combined and the improved version of each volume under each filter was created. (b) Details of the process for each block. The compounding of each block with N initial frames (IF1, …, IFN) was performed using WCAN, WVMF, TNODE, and MEAN. The output of MEAN was the input to the methods KSVD, NAWT, PNLM, and DNCNN. The output frames from each method (OF_WCAN, OF_WVMF, OF_TNODE, OF_KSVD, OF_NAWT, OF_PNLM, and OF_DNCNN), as applied to all frames, were used in the quantitative assessment. The normalized values of the metrics (SNR Norm, CNR Norm, ENL Norm, and FOM) were calculated with respect to the maximum value of each metric from the output frames. We used these normalized values for all volumes in comparing the quantitative assessments for all filters.


3. Results and Discussion

3.1. Parameter Assessment

Before applying the method to all datasets and comparing it with the state-of-the-art filters, we evaluated the impact of the two main parameters of the algorithm, i.e., k [the threshold adjustment parameter in Eq. (7)] and N (the number of frames to consider in the compounding process). We used Volume 2.1 (see Table 2) and the reference frames from Volume R.2 (Table 3) for this purpose in the two experiments outlined in Fig. 5.

Fig. 5

Process followed for the assessment of the parameters k and N using Volume 2.1 from the MUW-2 dataset and the reference frames from Volume R.2. (a) Evaluation of the parameter k with N=3 for values of k from 0.3 to 1.4 in steps of 0.1. We divided the volume into blocks of three frames and applied WCAN for each k value. After calculating the quantitative metrics, we compared the resulting FOMs of the enhanced frames for consecutive values of k. (b) Evaluation of the parameter N with k=1 for values of N from 2 to 6. For each N value, we divided the volume into blocks of N frames and applied WCAN with k set to 1. After calculating the quantitative metrics, we compared the resulting FOM of the enhanced frames for consecutive values of N.


First, we varied the parameter k from 0.3 to 1.5 in steps of 0.1, with N set to 3. In the second experiment, we varied the parameter N from 2 to 6, with k set to 1.0. After applying the WCAN method in both cases, we calculated the quantitative metrics (SNR, CNR, and ENL) and the resulting FOMSUM (referred to simply as FOM for the rest of this section). We then computed the difference in the FOM values for each enhanced frame for two consecutive values of k (such as k=0.4 and 0.3) and N (such as N=3 and N=2). This process generated a set of FOM-value differences ΔFOM_k for each pair of consecutive k values and a set of ΔFOM_N for each pair of consecutive N values.

Figure 6 gives the main results and some examples of the application of the WCAN method to Volume 2.1. We observe that the improvement in the quantitative metrics reaches a maximum between k=0.8 and 0.9, with a mean improvement in FOM of 0.213, and any further increase in k decreases the incremental improvement. Note that this does not imply that k=0.9 is the optimal value to use but that k=0.9 is the value for which we have the greatest improvement with respect to the previous value of k. Although increasing the value of k always produces a quantitative improvement in the image, there are diminishing returns from increasing k beyond k=0.9. We can observe the same trend for the parameter N: the maximum mean improvement in FOM, ΔFOM_N=0.229, occurs between N=3 and N=4. The image examples in Fig. 6 show how increases in N and k produce better noise reduction in the resultant image. One of the main advantages of the proposed method is that it is easily adaptable to very different OCT settings by simply adjusting these two parameters.
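The sweep itself is a simple loop; a sketch of the k experiment (where `wcan` and `fom_sum` are hypothetical callables standing in for the filter and the metric of the previous sections) is:

```python
import numpy as np

def sweep_k(blocks, wcan, fom_sum, N=3):
    """Mean FOM improvement between consecutive k values, as in Fig. 6(b)."""
    ks = np.round(np.arange(0.3, 1.51, 0.1), 1)
    fom = [np.mean([fom_sum(wcan(b, k=k, N=N)) for b in blocks]) for k in ks]
    # The maximum of these deltas marks the point of diminishing returns
    # (around k = 0.9 for Volume 2.1).
    return {(ks[i], ks[i + 1]): fom[i + 1] - fom[i] for i in range(len(ks) - 1)}
```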

Fig. 6

Results of the evaluation of the parameters k and N for the WCAN method, using Volume 2.1 from the MUW-2 dataset and the reference frames from Volume R.2. (a) FOM for consecutive k values from 0.3 to 1.5. (b) Improvement in the FOM metric for consecutive k values from 0.3 to 1.5. The horizontal axis represents consecutive pairs of values for k. The vertical axis represents the FOM differences over all frames in the volume for consecutive pairs of values for k. (c) FOM for consecutive N values from 2 to 6. (d) Improvement in the FOM metric for consecutive N values from N=2 to N=6. The horizontal axis represents consecutive pairs of values for N. The vertical axis represents the FOM differences over all frames in the volume for consecutive pairs of values for N. (e) Examples of the application of the WCAN method for different combinations of k and N in the ROI marked as a white box in the top-left raw noisy OCT image.


To find appropriate values of k and N for the quantitative assessment, we used the first volume in each dataset (Volume IDs 1.1, 2.1, 3.1, and 4.1 for the MUW-1, MUW-2, MUW-3, and A2ASDOCT datasets, respectively, in Table 2, and the reference frames with Volume IDs R.1, R.2, R.3, and R.4 in Table 3). Table 5 gives the final values chosen for each dataset, determined empirically as a suitable balance between performance-metric improvement, qualitative visual quality, and execution time.

Table 5

Parameters used in the assessment of the WCAN method for each dataset. N is the number of frames considered at consecutive positions in the compounding process. k is the adjustment parameter for the thresholding process. The number of decomposition levels, L, was set to four for all datasets.

| Parameter | MUW-1 dataset | MUW-2 dataset | MUW-3 dataset | A2ASDOCT dataset |
|---|---|---|---|---|
| N | 3 | 3 | 2 | 4 |
| k | 1 | 1 | 1.1 | 1.6 |

We also evaluated the influence of the wavelet family in the calculation of the 2D discrete stationary wavelet transform. We tested the wavelet families Haar, Daubechies (db1–db10), Symlets (sym2–sym8), Coiflets (coif1–coif5), and BiorSplines (bior1.1, bior1.3, bior1.5, bior2.2, bior2.4, bior2.6, bior2.8, bior3.1, bior3.3, bior3.5, bior3.7, bior3.9, bior4.4, bior5.5, and bior6.8), using Volume ID 2.1 and the reference set R.2 of Table 3. Table 6 presents the results of the wavelet with the best performance in each family. As we can observe, the Haar wavelet shows the best performance and was the wavelet selected for the rest of the experiments.

Table 6

Quantitative evaluation of the wavelet families Haar, Daubechies, Symlets, Coiflets, and BiorSplines, considering Volume ID 2.1 and the reference set R.2 from Table 3. The results show the mean ± standard deviation of the improvement with respect to the initial metrics presented in Table S1 in the Supplementary Material (raw images).

| Wavelet | SNR | CNR | ENL |
|---|---|---|---|
| haar | 12.30 ± 2.44 | 2.02 ± 0.36 | 622.80 ± 101.53 |
| db2 | 11.78 ± 2.25 | 1.86 ± 0.31 | 537.78 ± 82.06 |
| sym3 | 11.07 ± 2.03 | 1.85 ± 0.32 | 571.68 ± 79.25 |
| coif1 | 11.74 ± 2.23 | 1.85 ± 0.31 | 533.41 ± 81.91 |
| bior3.3 | 11.71 ± 2.21 | 1.82 ± 0.30 | 521.72 ± 81.37 |

3.2.

Quantitative Evaluation

We now present the results of our quantitative evaluation of the application of the compared methods to the complete stack of frames for all four datasets in this study. In the evaluation, we used the same ROIs for all frames in the same volume to maintain a consistent reference (Fig. 7 shows one frame with the ROIs used for Volume 1.1). The detailed values of the metrics for all datasets are included in Tables S2–S7 in the Supplementary Material. In Tables 7–10 and Fig. 8, we present the aggregate results. Tables 7–10 show the mean ± standard deviation of the improvement with respect to the initial metrics presented in Table S1 in the Supplementary Material (raw images). The values corresponding to the best performance for the individual metrics and for the FOMs are shown in bold.

Fig. 7

OCT raw image and ROIs used for the calculation of the quality metrics. The white rectangular region was used for the noise estimation; the red rectangles represent the homogeneous regions (H=3) used to calculate the ENL, and the green rectangles represent the nonhomogeneous regions. Both sets of regions together (R=6) are used to calculate the CNR.


Table 7

Quantitative evaluation for the MUW-1 dataset, considering the results of the application of all filters to Volumes 1.1–1.5 (see Table 2) and the reference frames R.1 (see Table 3). The compounding was performed using three B-scans per position.

| Filter | SNR Norm | CNR Norm | ENL Norm | FOMSUM | FOMMIN |
|---|---|---|---|---|---|
| WCAN | **0.99 ± 0.026** | **0.97 ± 0.065** | 0.91 ± 0.117 | **2.87 ± 0.192** | **0.91 ± 0.117** |
| TNODE | 0.88 ± 0.044 | 0.93 ± 0.057 | **0.95 ± 0.101** | 2.77 ± 0.148 | 0.86 ± 0.075 |
| PNLM | 0.75 ± 0.102 | 0.80 ± 0.067 | 0.61 ± 0.168 | 2.16 ± 0.271 | 0.61 ± 0.165 |
| NAWT | 0.87 ± 0.093 | 0.89 ± 0.083 | 0.82 ± 0.126 | 2.58 ± 0.262 | 0.80 ± 0.119 |
| WVMF | 0.82 ± 0.051 | 0.78 ± 0.064 | 0.68 ± 0.101 | 2.28 ± 0.166 | 0.68 ± 0.101 |
| KSVD | 0.80 ± 0.085 | 0.73 ± 0.064 | 0.64 ± 0.105 | 2.18 ± 0.169 | 0.64 ± 0.101 |
| DNCNN | 0.50 ± 0.051 | 0.54 ± 0.059 | 0.44 ± 0.077 | 1.48 ± 0.137 | 0.42 ± 0.066 |

Table 8

Quantitative evaluation for the MUW-2 dataset, considering the results of the application of all filters to Volumes 2.1–2.5 (see Table 2) and the reference frames R.2 (see Table 3). The compounding was performed using three B-scans per position.

| Filter | SNR Norm | CNR Norm | ENL Norm | FOMSUM | FOMMIN |
|---|---|---|---|---|---|
| WCAN | 0.97 ± 0.064 | **0.98 ± 0.055** | **0.96 ± 0.075** | **2.91 ± 0.187** | **0.95 ± 0.076** |
| TNODE | **0.98 ± 0.026** | 0.92 ± 0.055 | 0.85 ± 0.100 | 2.75 ± 0.165 | 0.85 ± 0.099 |
| PNLM | 0.78 ± 0.075 | 0.87 ± 0.055 | 0.31 ± 0.062 | 1.96 ± 0.142 | 0.31 ± 0.062 |
| NAWT | 0.68 ± 0.076 | 0.74 ± 0.116 | 0.69 ± 0.178 | 2.11 ± 0.335 | 0.62 ± 0.114 |
| WVMF | 0.72 ± 0.053 | 0.66 ± 0.057 | 0.52 ± 0.086 | 1.89 ± 0.153 | 0.52 ± 0.086 |
| KSVD | 0.78 ± 0.083 | 0.64 ± 0.080 | 0.45 ± 0.085 | 1.87 ± 0.213 | 0.45 ± 0.085 |
| DNCNN | 0.32 ± 0.048 | 0.29 ± 0.048 | 0.18 ± 0.052 | 0.79 ± 0.121 | 0.18 ± 0.052 |

Table 9

Quantitative evaluation for the MUW-3 dataset, considering the results of the application of all filters to Volumes 3.1–3.8 (see Table 2) and the reference frames R.3 (see Table 3). The compounding was performed using two B-scans per position.

| Filter | SNR Norm | CNR Norm | ENL Norm | FOMSUM | FOMMIN |
|---|---|---|---|---|---|
| WCAN | 0.85 ± 0.102 | **1.00 ± 0.027** | **0.99 ± 0.040** | **2.84 ± 0.112** | **0.84 ± 0.097** |
| TNODE | 0.80 ± 0.063 | 0.90 ± 0.070 | 0.87 ± 0.103 | 2.56 ± 0.144 | 0.77 ± 0.045 |
| PNLM | 0.87 ± 0.055 | 0.84 ± 0.071 | 0.74 ± 0.104 | 2.46 ± 0.150 | 0.31 ± 0.052 |
| NAWT | 0.58 ± 0.116 | 0.72 ± 0.095 | 0.64 ± 0.128 | 1.95 ± 0.267 | 0.55 ± 0.092 |
| WVMF | 0.61 ± 0.066 | 0.75 ± 0.139 | 0.68 ± 0.194 | 2.05 ± 0.313 | 0.51 ± 0.068 |
| KSVD | **0.98 ± 0.049** | 0.81 ± 0.112 | 0.69 ± 0.156 | 2.48 ± 0.302 | 0.45 ± 0.077 |
| DNCNN | 0.25 ± 0.040 | 0.44 ± 0.115 | 0.37 ± 0.138 | 1.07 ± 0.254 | 0.18 ± 0.041 |

Table 10

Quantitative evaluation for the A2ASDOCT dataset, considering the results of the application of all filters to the 17 retinal sets (see Table 2) and the reference frame R.4 (see Table 3). The compounding was performed using four B-scans per position.

| Filter | SNR Norm | CNR Norm | ENL Norm | FOMSUM | FOMMIN |
|---|---|---|---|---|---|
| WCAN | 0.93 ± 0.040 | **1.00 ± 0.009** | **0.96 ± 0.065** | **2.89 ± 0.073** | **0.91 ± 0.057** |
| TNODE | **0.99 ± 0.028** | 0.93 ± 0.024 | 0.90 ± 0.067 | 2.82 ± 0.098 | 0.89 ± 0.061 |
| PNLM | **0.99 ± 0.009** | 0.97 ± 0.026 | 0.90 ± 0.091 | 2.86 ± 0.112 | 0.89 ± 0.083 |
| NAWT | 0.60 ± 0.074 | 0.55 ± 0.060 | 0.35 ± 0.107 | 1.50 ± 0.171 | 0.35 ± 0.099 |
| WVMF | 0.81 ± 0.055 | 0.88 ± 0.027 | 0.70 ± 0.139 | 2.39 ± 0.155 | 0.67 ± 0.122 |
| KSVD | 0.67 ± 0.046 | 0.60 ± 0.037 | 0.38 ± 0.096 | 1.65 ± 0.135 | 0.38 ± 0.096 |
| DNCNN | 0.43 ± 0.044 | 0.38 ± 0.026 | 0.18 ± 0.058 | 1.00 ± 0.082 | 0.18 ± 0.059 |

Fig. 8

Value distribution for the different metrics of the quantitative evaluation of all datasets used in the study (see Table 2). The proposed WCAN method is shown in the rightmost column within the red box. (a) FOMSUM calculated using Eq. (14). (b) FOMMIN calculated using Eq. (15). (c) SNR Norm calculated using Eq. (11) and normalized with respect to the maximum value of the metric for each denoised frame. (d) CNR Norm calculated using Eq. (12) and normalized with respect to the maximum value of the metric for each denoised frame. (e) ENL Norm calculated using Eq. (13) and normalized with respect to the maximum value of the metric for each denoised frame. (f) Order of the datasets for each method.


To facilitate comparison between metrics, we used the normalized value of each metric, calculated from the maximum values for all filtered images in each group of frames [see Fig. 4(b)]. We can then refer to metric values corresponding to the best filtered output for each group of frames to make fair comparisons across the volume. We should first note that the best results across all datasets were achieved by WCAN, which had the highest FOMSUM values, with standard deviations that were the smallest (MUW-3) or second smallest (MUW-1, MUW-2, and A2ASDOCT). The same holds for the metric FOMMIN, for which WCAN presents the highest values. The TNODE algorithm had the second-best performance across all datasets. WCAN was also the best option, in most cases, when considering each metric separately. It returned the best CNR and ENL results for all datasets (except for MUW-1, where it had the second-highest ENL). WCAN had the best SNR score for MUW-1, was second for MUW-2, and third for MUW-3 and A2ASDOCT.

Another notable aspect of the WCAN results is its consistent performance with very different OCT image settings, image samples, and locations on the skin (see Table 2) and retina. As is clear from Figs. 8(a) and 8(b), other filters such as PNLM and KSVD perform very differently for the retina dataset (A2ASDOCT) than for the skin datasets (MUW-1, MUW-2, and MUW-3).

Figure 9 shows the significant differences in the OCT images generated by applying the state-of-the-art methods to Volume 1.1 and the reference frames R.1. Videos S1–S4 (see Figs. 13–16 in the Appendix) show the results of the application of the method over frames of the datasets MUW-1, MUW-2, MUW-3, and A2ASDOCT.

Fig. 9

Comparison of the application of speckle reduction methods in a frame with 1100×512  pixels from Volume 1.1 using three B-scans at consecutive positions and the reference frames R.1. (a) Raw OCT image with a detailed ROI. The remainder of the image is the filtered OCT image after applying the method. (b) Proposed WCAN method. (c) TNODE method. (d) PNLM method. (e) NAWT method. (f) WVMF method. (g) KSVD method. (h) DNCNN method.


Simple averaging was also evaluated. We used three volumes from datasets MUW-1, MUW-2, and MUW-3, for which we have enough frames to perform the experiment in each volume. In particular, we considered Volume IDs 1.1, 2.1, and 3.1 (Table 2) and the reference sets R.1, R.2, and R.3 of Table 3, respectively. We applied the WCAN method to each volume using the parameters previously identified in Table 5: to the first three frames of Volume IDs 1.1 and 2.1 with k=1 and to the first two frames of Volume ID 3.1 with k=1.1. Then, we iteratively averaged increasing numbers of frames until the FOMSUM metric of the averaged frames exceeded the previously calculated metric of WCAN. The results showed that, to improve on WCAN, we needed to average 12, 27, and 13 frames for Volume IDs 1.1, 2.1, and 3.1, respectively.

Figure 10 shows an analysis of some examples of the WCAN algorithm’s performance with skin images corresponding to frames of the MUW-2 and MUW-3 datasets. The first sample (Volume 2.4) provides information about a superficial BCC on the head. In Figs. 10(a) and 10(b), speckle reduction using WCAN [Fig. 10(b)] enables a stronger identification of the skin layers (marked with digits 1–3) than does the raw image [Fig. 10(a)]. In addition, the stratum corneum layer and the epidermis–dermis interface are more clearly distinguished. The boundaries and contrast are also enhanced, shown by comparing the area enclosed by the blue ellipse and the red arrows in the zoom-in boxes in the raw image [Fig. 10(a)] and after filtering [Fig. 10(b)]. The second example (Volume 3.8) shows a cheek with folliculitis. Figures 10(c) and 10(d) show a sharper difference between the regions indicated by the blue line and the red arrows and the structure indicated by the green arrow in the zoom-in boxes.

Fig. 10

Comparison of the application of the WCAN method to frames from the MUW-2 and MUW-3 datasets. (a) Raw OCT skin image from a 1000×680 pixel sample of a head affected by BCC (Volume 2.4). Two detail ROIs are identified by white rectangular boxes. The digits 1, 2, and 3 in the image identify the stratum corneum, epidermis, and dermis layer, respectively. The blue ellipse and red arrows indicate regions that show the contrast enhancement obtained by filtering the raw image. (b) WCAN image processed using three consecutive frames from Volume 2.4, starting from the frame in (a). (c) Raw OCT skin image from a 1038×460 pixel sample of a cheek with folliculitis (Volume 3.8). Two zoom-in ROIs are identified by white rectangular boxes. The blue line and the red and green arrows identify different areas affected by speckle reduction after application of the proposed method. (d) WCAN image processed using two consecutive frames from Volume 3.8, starting from the frame in (c).


Figure 11 enables qualitative comparisons for a retinal image from the A2ASDOCT database after applying each of the denoising methods in the study. Figure 11 shows that the WCAN algorithm gives significant noise suppression, which enables clear identification of the boundaries between retinal layers.

Fig. 11

Comparison of the application of speckle reduction methods in a 900×450-pixel frame from Volume 4.1 using four B-scans at consecutive positions. (a) Raw OCT image and a detail ROI used for the qualitative comparison of all methods. (b) Proposed WCAN method. (c) TNODE method. (d) PNLM method. (e) KSVD method. (f) WVMF method.


Finally, to assess the computation time for the various algorithms, we applied all methods of the study to 20 scaled versions of the first two frames of Volume ID 1.1. For the WCAN method, we used k=1 and the reference frames R.1 from Table 3. The original size of the volume was two frames with 1100×512 pixels. We used scales from 0.1 to 4 in steps of 0.2 (20 scales in total) to resize the frames. For each scale, we applied all of the methods and calculated the execution time using Matlab's timeit function. Figure 12 presents the results. As shown in Fig. 12(a), the application of the WCAN method requires less execution time than the other pure compounding alternatives, such as TNODE and WVMF. The PNLM and NAWT methods had the shortest execution times among all of the algorithms tested [Fig. 12(b)]. Finally, to estimate the computational complexity of the algorithm, we applied a basic fit to the timing measurements of WCAN, resulting in a linear polynomial of the form y = p1·x + p2, with p1 = 0.0087, p2 = 0.0086, and a norm of residuals equal to 0.699. We can therefore estimate that the time complexity of the method is linear in the number of pixels. The tests were executed in Matlab 2018b (MathWorks, Inc.) on a personal computer (Intel 3.3 GHz CPU, 32 GB memory).
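An equivalent timing harness in Python (a sketch of the procedure, not the original Matlab script; `filt` is a hypothetical stand-in for any of the compared denoising callables) could look like:

```python
import timeit
import numpy as np

def time_filter(filt, frames, repeats=3):
    """Best-of-`repeats` wall-clock time, analogous to Matlab's timeit."""
    return min(timeit.repeat(lambda: filt(frames), number=1, repeat=repeats))

def fit_linear(npix, times):
    """First-degree polynomial fit of time vs. pixel count, y = p1*x + p2."""
    p1, p2 = np.polyfit(npix, times, deg=1)
    return p1, p2
```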

Fig. 12

Comparison of the computation time in seconds of all methods of the study (WCAN, TNODE, WVMF, PNLM, NAWT, KSVD, and DNCNN). The parameter N* represents the total number of pixels considered in each test (1100×512×2×scale) normalized with respect to the smallest one (1100×512×2×0.1). (a) Computation time of WCAN compared with the compounding methods TNODE and WVMF. (b) Computation time of WCAN compared with the PNLM, NAWT, KSVD, and DNCNN methods.


4. Conclusion

This paper has described a speckle reduction method for OCT volumes. The method is based on a multiscale adaptive noise-wavelet-compounding strategy and requires the estimation of the noise variance in the wavelet domain for the OCT setting. In this study, we used previously acquired images of homogeneous scattering samples. The noise variance is used to compute the weights for the detail coefficients of each subband of each frame in the volume before compounding all of the filtered frames.

One of the major highlights of our method, and a novel contribution in the field of wavelet denoising, is the introduction of a new compounding scheme that integrates the noise-variance influence of the input frames in relation to the variance of a set of reference frames. With this new approach, a visual inspection of the denoised frames presented shows the potential of the proposed algorithm to reduce the speckle noise efficiently without generating relevant denoising artifacts or degrading the structures present in the image. The simplicity of the method, its ability to adapt to different environments (OCT settings and noise levels), its robustness across all metrics considered, and its reduced computational time are its most significant advantages. Another potential benefit is the capacity of the method to adjust the amount of noise reduction through the parameter k, which may facilitate adaptation to different clinical needs. The main weakness of the method is the need for a prior acquisition of a homogeneous sample to determine the noise variance used in the adaptive compounding of the detail coefficients in the wavelet decomposition. Nevertheless, as described for the estimation of the noise in datasets MUW-2 and A2ASDOCT, this drawback can be avoided through the use of a homogeneous ROI of the input frames. The other line of improvement is the automatic assessment of the parameters of the method (k and the numbers of input and reference frames). However, this assessment is needed only once, when fine tuning the algorithm for a particular OCT setting.

A thorough evaluation was performed by applying the method to four different OCT settings, i.e., 18 skin volumes across three datasets and 17 retinal volumes in a fourth dataset. We compared our method's results with those from six state-of-the-art algorithms. The results of a quantitative evaluation based on five different metrics demonstrated that the proposed method achieved the best performance among the tested techniques in suppressing noise. A qualitative visual comparison of images confirmed the relative performance of the proposed method and suggests the potential application of the proposed OCT denoising technique to improving image quality prior to clinical diagnosis based on visual assessment. Future work will involve an automatic process for the assessment of the parameters of the method (k and the numbers of input and reference frames) through the training of a machine learning model based on quantitative metrics. Furthermore, a formal qualitative evaluation will be performed by clinical experts tackling the specific needs of their clinical routine, such as the capacity of the method to preserve or enhance the identification of tissue layers and their borders and the ability to visualize structures or to identify vasculature.

5. Appendix

Videos S1–S4 (see Figs. 13–16) show the results of the application of the method over frames of the datasets MUW-1, MUW-2, MUW-3, and A2ASDOCT.

Fig. 13

Results of the application of the WCAN method over frames from dataset MUW-1, Volumes 1.1–1.5 (see Table 2), using three B-scans at consecutive positions. (a) Raw OCT image from Volume 1.2. (b) WCAN image. (Video S1, MP4, 8.3 MB [URL: https://doi.org/10.1117/1.JBO.26.6.065001.1]).


Fig. 14

Results of the application of the WCAN method over frames from dataset MUW-2, Volumes 2.1–2.5 (see Table 2), using three B-scans at consecutive positions. (a) Raw OCT image from Volume 2.5. (b) WCAN image. (Video S2, MP4, 10.6 MB [URL: https://doi.org/10.1117/1.JBO.26.6.065001.2]).


Fig. 15

Results of the application of the WCAN method over frames from dataset MUW-3, Volumes 3.1–3.5 (see Table 2), using two B-scans at consecutive positions. (a) Raw OCT image from Volume 3.3. (b) WCAN image. (Video S3, MP4, 1.8 MB [URL: https://doi.org/10.1117/1.JBO.26.6.065001.3]).


Fig. 16

Results of the application of the WCAN method over frames from dataset A2ASDOCT, Volumes 4.1–4.17 (see Table 2), using four B-scans at consecutive positions. (a) Raw OCT image from Volume 4.12. (b) WCAN image. (Video S4, MP4, 9.9 MB [URL: https://doi.org/10.1117/1.JBO.26.6.065001.4]).


Disclosures

The authors have no relevant financial interests in the paper and no other potential conflicts of interest to disclose.

Acknowledgments

This work was partly supported by the European Union FP7 project BiopsyPen (Grant No. 611132), the European Regional Development Fund, and the Spanish Ministry of Science, Innovation and Universities (Grant Nos. TEC2015-66978-R and RTI2018-098682-B-I00 MCIU/AEI/FEDER, EU).

Code, Data, and Materials Availability

We have made a Matlab implementation of the WCAN method available through Code Ocean, including the full code for the noise estimation and the denoising algorithm, together with raw volume data for testing purposes.

References

1. D. Huang et al., "Optical coherence tomography," Science 254(5035), 1178–1181 (1991). https://doi.org/10.1126/science.1957169

2. G. J. Jaffe and J. Caprioli, "Optical coherence tomography to detect and manage retinal disease and glaucoma," Am. J. Ophthalmol. 137(1), 156–169 (2004). https://doi.org/10.1016/S0002-9394(03)00792-X

3. H. G. Bezerra et al., "Intracoronary optical coherence tomography: a comprehensive review: clinical and research applications," JACC Cardiovasc. Interventions 2(11), 1035–1046 (2009). https://doi.org/10.1016/j.jcin.2009.06.019

4. T. Gambichler, A. Pljakic, and L. Schmitz, "Recent advances in clinical application of optical coherence tomography of human skin," Clin. Cosmet. Investig. Dermatol. 8, 345–354 (2015). https://doi.org/10.2147/CCID.S69119

5. J. Olsen, J. Holmes, and G. B. E. Jemec, "Advances in optical coherence tomography in dermatology—a review," J. Biomed. Opt. 23(4), 040901 (2018). https://doi.org/10.1117/1.JBO.23.4.040901

6. A. A. Hussain, L. Themstrup, and G. B. E. Jemec, "Optical coherence tomography in the diagnosis of basal cell carcinoma," Arch. Dermatol. Res. 307(1), 1–10 (2015). https://doi.org/10.1007/s00403-014-1498-y

7. S. A. Alawi et al., "Optical coherence tomography for presurgical margin assessment of non-melanoma skin cancer—a practical approach," Exp. Dermatol. 22(8), 547–551 (2013). https://doi.org/10.1111/exd.12196

8. O. Markowitz et al., "Evaluation of optical coherence tomography as a means of identifying earlier stage basal cell carcinomas while reducing the use of diagnostic biopsy," J. Clin. Aesthet. Dermatol. 8(10), 14–20 (2015).

9. J. M. Schmitt, S. H. Xiang, and K. M. Yung, "Speckle in optical coherence tomography," J. Biomed. Opt. 4(1), 95–105 (1999). https://doi.org/10.1117/1.429925

10. M. Bashkansky and J. Reintjes, "Statistics and reduction of speckle in optical coherence tomography," Opt. Lett. 25(8), 545–547 (2000). https://doi.org/10.1364/OL.25.000545

11. P. Lee, W. Gao, and X. Zhang, "Speckle properties of the logarithmically transformed signal in optical coherence tomography," J. Opt. Soc. Am. A 28(4), 517–522 (2011). https://doi.org/10.1364/JOSAA.28.000517

12. M. Pircher et al., "Speckle reduction in optical coherence tomography by frequency compounding," J. Biomed. Opt. 8(3), 565–570 (2003). https://doi.org/10.1117/1.1578087

13. A. E. Desjardins et al., "Speckle reduction in OCT using massively-parallel detection and frequency-domain ranging," Opt. Express 14(11), 4736–4745 (2006). https://doi.org/10.1364/OE.14.004736

14. M. Szkulmowski et al., "Efficient reduction of speckle noise in optical coherence tomography," Opt. Express 20(2), 1337–1359 (2012). https://doi.org/10.1364/OE.20.001337

15. L. Duan et al., "Single-shot speckle noise reduction by interleaved optical coherence tomography," J. Biomed. Opt. 19(12), 120501 (2014). https://doi.org/10.1117/1.JBO.19.12.120501

16. B. Sander et al., "Enhanced optical coherence tomography imaging by multiple scan averaging," Br. J. Ophthalmol. 89(2), 207–212 (2005). https://doi.org/10.1136/bjo.2004.045989

17. Y. Winetraub et al., "Upper limit for angular compounding speckle reduction," Appl. Phys. Lett. 114(21), 211101 (2019). https://doi.org/10.1063/1.5088709

18. J. Zhao et al., "Angular compounding for speckle reduction in optical coherence tomography using geometric image registration algorithm and digital focusing," Sci. Rep. 10(1), 1893 (2020). https://doi.org/10.1038/s41598-020-58454-0

19. M. Pircher et al., "Measurement and imaging of water concentration in human cornea with differential absorption optical coherence tomography," Opt. Express 11(18), 2190–2197 (2003). https://doi.org/10.1364/OE.11.002190

20. J. Rogowska and M. E. Brezinski, "Evaluation of the adaptive speckle suppression filter for coronary optical coherence tomography imaging," IEEE Trans. Med. Imaging 19(12), 1261–1266 (2000). https://doi.org/10.1109/42.897820

21. D. L. Marks, T. S. Ralston, and S. A. Boppart, "Speckle reduction by I-divergence regularization in optical coherence tomography," J. Opt. Soc. Am. A 22(11), 2366–2371 (2005). https://doi.org/10.1364/JOSAA.22.002366

22. D. C. Fernández, H. M. Salinas, and C. A. Puliafito, "Automated detection of retinal layer structures on optical coherence tomography images," Opt. Express 13(25), 10200–10216 (2005). https://doi.org/10.1364/OPEX.13.010200

23. Z. Jian et al., "Three-dimensional speckle suppression in optical coherence tomography based on the curvelet transform," Opt. Express 18(2), 1024–1032 (2010). https://doi.org/10.1364/OE.18.001024

24. L. Fang et al., "Sparsity based denoising of spectral domain optical coherence tomography images," Biomed. Opt. Express 3(5), 927–942 (2012). https://doi.org/10.1364/BOE.3.000927

25. H. Chen et al., "Speckle attenuation by adaptive singular value shrinking with generalized likelihood matching in optical coherence tomography," J. Biomed. Opt. 23(3), 036014 (2018). https://doi.org/10.1117/1.JBO.23.3.036014

26. P. G. Daneshmand, A. Mehridehnavi, and H. Rabbani, "Reconstruction of optical coherence tomography images using mixed low rank approximation and second order tensor based total variation method," IEEE Trans. Med. Imaging 40(3), 865–878 (2021). https://doi.org/10.1109/TMI.2020.3040270

27. Z. Amini, H. Rabbani, and I. Selesnick, "Sparse domain gaussianization for multi-variate statistical modeling of retinal OCT images," IEEE Trans. Image Process. 29, 6873–6884 (2020). https://doi.org/10.1109/TIP.2020.2994454

28. S. Adabi et al., "A learnable despeckling framework for optical coherence tomography images," J. Biomed. Opt. 23(1), 016013 (2018). https://doi.org/10.1117/1.JBO.23.1.016013

29. M. H. Eybposh et al., "Cluster-based filtering framework for speckle reduction in OCT images," Biomed. Opt. Express 9(12), 6359–6373 (2018). https://doi.org/10.1364/BOE.9.006359

30. M. Gargesha et al., "Denoising and 4D visualization of OCT images," Opt. Express 16(16), 12313–12333 (2008). https://doi.org/10.1364/OE.16.012313

31. B. Chong and Y.-K. Zhu, "Speckle reduction in optical coherence tomography images of human finger skin by wavelet modified BM3D filter," Opt. Commun. 291(Suppl. C), 461–469 (2013). https://doi.org/10.1016/j.optcom.2012.10.053

32. H. Yu, J. Gao, and A. Li, "Probability-based non-local means filter for speckle noise suppression in optical coherence tomography images," Opt. Lett. 41(5), 994–997 (2016). https://doi.org/10.1364/OL.41.000994

33. J. Aum, J. Kim, and J. Jeong, "Effective speckle noise suppression in optical coherence tomography images using nonlocal means denoising filter with double Gaussian anisotropic kernels," Appl. Opt. 54(13), D43–D50 (2015). https://doi.org/10.1364/AO.54.000D43

34. K. Zhang et al., "Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising," IEEE Trans. Image Process. 26(7), 3142–3155 (2017). https://doi.org/10.1109/TIP.2017.2662206

35. N. Cai et al., "A ResNet-based universal method for speckle reduction in optical coherence tomography images," in Proc. IEEE Int. Symp. Biomed. Imaging (ISBI) (2018).

36. Y. Ma et al., "Speckle noise reduction in optical coherence tomography images based on edge-sensitive cGAN," Biomed. Opt. Express 9(11), 5129–5146 (2018). https://doi.org/10.1364/BOE.9.005129

37. Y. Huang et al., "Simultaneous denoising and super-resolution of optical coherence tomography images based on generative adversarial network," Opt. Express 27(9), 12289–12307 (2019). https://doi.org/10.1364/OE.27.012289

38. B. Qiu et al., "Noise reduction in optical coherence tomography images using a deep neural network with perceptually-sensitive loss function," Biomed. Opt. Express 11(2), 817–830 (2020). https://doi.org/10.1364/BOE.379551

39. N. A. Kande et al., "SiameseGAN: a generative model for denoising of spectral domain optical coherence tomography images," IEEE Trans. Med. Imaging 40(1), 180–192 (2021). https://doi.org/10.1109/TMI.2020.3024097

40. J.-L. Starck, F. D. Murtagh, and A. Bijaoui, Image Processing and Data Analysis: The Multiscale Approach, Cambridge University Press (1998).

41. D. C. Adler, T. H. Ko, and J. G. Fujimoto, "Speckle reduction in optical coherence tomography images by use of a spatially adaptive wavelet filter," Opt. Lett. 29(24), 2878–2880 (2004). https://doi.org/10.1364/OL.29.002878

42. S. Chitchian, M. A. Fiddy, and N. M. Fried, "Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform," J. Biomed. Opt. 14(1), 014031 (2009). https://doi.org/10.1117/1.3081543

43. F. Zaki et al., "Noise adaptive wavelet thresholding for speckle noise removal in optical coherence tomography," Biomed. Opt. Express 8(5), 2720–2731 (2017). https://doi.org/10.1364/BOE.8.002720

44. A. Ozcan et al., "Speckle reduction in optical coherence tomography images using digital filtering," J. Opt. Soc. Am. A 24(7), 1901–1910 (2007). https://doi.org/10.1364/JOSAA.24.001901

45. M. A. Mayer et al., "Wavelet denoising of multiframe optical coherence tomography data," Biomed. Opt. Express 3(3), 572–589 (2012). https://doi.org/10.1364/BOE.3.000572

46. 

R. Kafieh, H. Rabbani and I. Selesnick, “Three dimensional data-driven multi scale atomic representation of optical coherence tomography,” IEEE Trans. Med. Imaging, 34 (5), 1042 –1062 (2015). https://doi.org/10.1109/TMI.2014.2374354 ITMID4 0278-0062 Google Scholar

47. 

C. Cuartas-Vélez et al., “Volumetric non-local-means based speckle reduction for optical coherence tomography,” Biomed. Opt. Express, 9 (7), 3354 –3372 (2018). https://doi.org/10.1364/BOE.9.003354 BOEICL 2156-7085 Google Scholar

48. 

J. M. Schmitt and A. Knüttel, “Model of optical coherence tomography of heterogeneous tissue,” J. Opt. Soc. Am. A, 14 (6), 1231 –1242 (1997). https://doi.org/10.1364/JOSAA.14.001231 JOAOD6 0740-3232 Google Scholar

49. 

X. Wang et al., “A two-step iteration mechanism for speckle reduction in optical coherence tomography,” Biomed. Signal Process. Control, 43 86 –95 (2018). https://doi.org/10.1016/j.bspc.2018.02.011 Google Scholar

50. 

J. Duan et al., “Denoising optical coherence tomography using second order total generalized variation decomposition,” Biomed. Signal Process. Control, 24 120 –127 (2016). https://doi.org/10.1016/j.bspc.2015.09.012 Google Scholar

51. 

J. Xu et al., “Wavelet domain compounding for speckle reduction in optical coherence tomography,” J. Biomed. Opt., 18 (9), 096002 (2013). https://doi.org/10.1117/1.JBO.18.9.096002 JBOPFO 1083-3668 Google Scholar

52. 

X. Li, S. Liang and J. Zhang, “Acceleration of OCT signal processing with lookup table method for logarithmic transformation,” Appl. Sci., 9 (7), 1278 (2019). https://doi.org/10.3390/app9071278 Google Scholar

53. 

P. S. Hiremath, P. T. Akkasaligar and S. Badiger, “Speckle noise reduction in medical ultrasound images,” Advancements and Breakthroughs in Ultrasound Imaging, IntechOpen(2013). Google Scholar

54. 

S. Gai et al., “Speckle noise reduction in medical ultrasound image using monogenic wavelet and Laplace mixture distribution,” Digital Signal Process., 72 192 –207 (2018). https://doi.org/10.1016/j.dsp.2017.10.006 DSPREJ 1051-2004 Google Scholar

55. 

Z. Chen et al., “Phase-stable swept source OCT angiography in human skin using an akinetic source,” Biomed. Opt. Express, 7 (8), 3032 –3048 (2016). https://doi.org/10.1364/BOE.7.003032 BOEICL 2156-7085 Google Scholar

56. 

Z. Chen et al., “Non-invasive multimodal optical coherence and photoacoustic tomography for human skin imaging,” Sci. Rep., 7 (1), 17975 (2017). https://doi.org/10.1038/s41598-017-18331-9 SRCEC3 2045-2322 Google Scholar

57. 

A. Aneesh et al., “Multispectral in vivo three-dimensional optical coherence tomography of human skin,” J. Biomed. Opt., 15 (2), 026025 (2010). https://doi.org/10.1117/1.3400665 JBOPFO 1083-3668 Google Scholar

58. 

A. A. Khanifar and S. Farsiu, “Spectral-domain OCT in practice,” (2008) https://retinatoday.com/articles/2008-may/0508_10-php Google Scholar

59. 

“AREDS 2 Ancillary Spectral Domain Optical Coherence Tomography Study (A2ASDOCT),” https://clinicaltrials.gov/ct2/show/NCT00734487 Google Scholar

Biography

Juan J. Gómez-Valverde is an assistant professor at the Universidad Politécnica de Madrid (UPM) and a researcher in the Biomedical Research Networking Center in Bioengineering, Biomaterials, and Nanomedicine. He graduated in telecommunication engineering and completed a master's program in biomedical engineering at UPM, where he also obtained his PhD with international mention. His main research interests are disease screening, biomedical image processing, artificial intelligence, and optical image analysis.

Zhe Chen received his PhD from the Medical University of Vienna in 2020. His research areas include dual-modality optical coherence tomography angiography (OCTA) and photoacoustic tomography (PAT) imaging techniques. He currently works on the optical and architectural design of laser beam scanners for augmented reality applications.

Andrés Santos received his MEng degree and PhD from UPM, where he is currently a professor and the director of the Biomedical Image Technologies lab. His main research areas are biomedical image analysis and acquisition. He has coauthored over 90 papers in indexed journals on medical imaging and has coordinated more than 50 research projects and technology transfer contracts. He has been co-director of a master's program on biomedical technology and instrumentation.

Wolfgang Drexler is a full professor and, since 2009, head of the Center for Medical Physics and Biomedical Engineering, after having been a full professor of biomedical imaging at Cardiff University, UK, from 2006 to 2009. He also spent two years at MIT, Cambridge, from 1998 to 1999. He received the Austrian START Award in 2001 and the COGAN Award in 2007. His h-index is 72 (Scopus), and his research grant income since 2000 is €17 million.

María J. Ledesma-Carbayo received her MEng degree in telecommunication engineering and her PhD from UPM, specializing in medical image analysis. She additionally completed two master's programs, in biomedical engineering and in medical physics. She is currently a full professor at UPM and has authored over 150 publications in indexed journals and conferences. Her research interests include cardiac and pulmonary imaging, image-guided therapy, microscopy image analysis, registration, motion estimation, and compensation.

Biographies of the other authors are not available.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Juan Jose Gomez-Valverde, Christoph Sinz, Elisabet A. Rank, Zhe Chen, Andrés Santos, Wolfgang Drexler, and Maria Jesus Ledesma-Carbayo "Adaptive compounding speckle-noise-reduction filter for optical coherence tomography images," Journal of Biomedical Optics 26(6), 065001 (17 June 2021). https://doi.org/10.1117/1.JBO.26.6.065001
Received: 17 February 2021; Accepted: 24 May 2021; Published: 17 June 2021
KEYWORDS: Optical coherence tomography, Wavelets, Digital filtering, Speckle, Signal-to-noise ratio, Denoising, Image filtering