Article

Surgical Guidance for Removal of Cholesteatoma Using a Multispectral 3D-Endoscope †

1 Department of Computer Vision and Graphics, Fraunhofer Heinrich-Hertz-Institute, 10587 Berlin, Germany
2 Department of Visual Computing, Humboldt Universität zu Berlin, 10117 Berlin, Germany
3 Department of Otorhinolaryngology, Charité-Universitätsmedizin Berlin, 10117 Berlin, Germany
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in: Wisotzky, E.L.; Rosenthal, J.C.; Hilsmann, A.; Eisert, P.; Uecker, F.C. A multispectral 3D-Endoscope for Cholesteatoma Removal. In Proceedings of the 54th DGBMT Annual Conference on Biomedical Engineering-BMT 2020, 29 September–1 October 2020.
These authors contributed equally to this work.
Sensors 2020, 20(18), 5334; https://doi.org/10.3390/s20185334
Submission received: 10 August 2020 / Revised: 13 September 2020 / Accepted: 14 September 2020 / Published: 17 September 2020
(This article belongs to the Special Issue Color & Spectral Sensors)

Abstract:
We develop a stereo-multispectral endoscopic prototype in which a filter-wheel is used for surgical guidance to remove cholesteatoma tissue in the middle ear. Cholesteatoma is a destructive, proliferating tissue, and surgery is the only treatment for this disease. Removal is a very demanding task, even for experienced surgeons, because it is very difficult to distinguish between bone and cholesteatoma. In addition, cholesteatoma can recur if not all tissue particles are removed, which leads to undesirable follow-up operations. Therefore, we propose an image-based method that combines multispectral tissue classification and 3D reconstruction to identify all parts of the removed tissue and determine their metric dimensions intraoperatively. The designed multispectral filter-wheel 3D-endoscope prototype can switch between narrow-band spectral and broad-band white illumination and is technically evaluated in terms of its optical system properties. Further, it is tested and evaluated on three patients. The wavelengths 400 nm and 420 nm are identified as most suitable for the differentiation task. The stereoscopic image acquisition allows accurate 3D surface reconstruction of the enhanced image information. The first results are promising, as the cholesteatoma can be easily highlighted, correctly identified, and visualized as a true-to-scale 3D model showing the patient-specific anatomy.

1. Introduction

Cholesteatoma is a disease of the middle ear consisting of sprawling squamous epithelium. Although it is not cancerous, its destructive growth can lead to life-threatening complications. Therefore, the growth needs to be treated by surgery, which is currently the only possible treatment, and a complete resection of the affected tissue is required to avoid recurrence. In particular, the proliferation of this epithelium in the middle ear cavity and further growth into the mastoid and lateral skull base can lead to life-threatening complications. Damage to adjacent structures, such as the ossicular chain and the cochlea, can cause hearing loss and lead to deafness. Furthermore, facial paralysis might be caused by erosion of the facial canal, and disturbances of the equilibrium organ can result from erosion of the semicircular canals (fistula formation). Progressive inflammation causes mastoiditis, meningitis, intracranial abscesses, and sinus vein thrombosis [1,2,3,4,5,6,7,8].
Surgery, as the only treatment, has the objective of completely removing the cholesteatoma while recovering or preserving hearing ability. To avoid revision surgery due to non-removed or non-identified cholesteatoma tissue, complete removal takes precedence over the other objectives, such as the preservation of hearing ability. In cases of residual or recurrent cholesteatoma, revision surgery is strictly required [9,10,11,12]. All existing techniques of extensive cholesteatoma surgery include a form of mastoidectomy. The removal of the external auditory canal wall is considered to be the most effective procedure to allow complete cholesteatoma removal. However, preserving or restoring hearing ability at the same time requires a high level of surgical expertise [5,8,13,14,15].
Digitization creates new possibilities to support the surgeon in such complex surgical procedures. On the one hand, digital stereoscopic image acquisition allows three-dimensional (3D) reconstruction of the situs. Based on 3D reconstruction results, contactless and radiation-free metric measurements of the anatomical structures can be performed [16]. In addition, spectral imaging can be used to detect optical tissue properties that are normally indistinguishable to the human eye [17]. All this additional knowledge has the potential to support the surgeon in his/her decision making and facilitate the surgical procedure when appropriate intraoperative real-time visualizations are used. Thus, these new image-based methods have the potential to accelerate complex surgical procedures and avoid re-surgery for an improved patient outcome [18].
In contrast to computed tomography (CT) and magnetic resonance imaging (MRI) methods, radiation-free image-based 3D reconstruction enables the continuous creation of 3D models of the patient-specific anatomy. In general, these 3D data model the exterior surface of the analyzed volume and can be visualized as a point cloud or as a mesh representation. 3D measurements in combination with the spectral documentation of the cholesteatoma dimensions allow a true-scale comparison to preoperative CT data [16,19,20].
Multispectral analysis is established in biomedicine for cell and skin analysis [21,22]. Important contributions for clinical and surgical applications using multispectral techniques were presented in [23,24]. However, multispectral analysis has not yet been used widely in intraoperative medical therapy for accurate image-guided tissue differentiation. Different techniques exist to acquire the spectral information of the present tissues. Hyperspectral line scanning or multispectral snapshot cameras can be used, where the spectral separation is achieved by filtering the light that reaches the imaging sensor [25,26]. To perform the spectral separation on the illumination side, e.g., filter-wheels can be used [27]. Since only a few specific wavelength bands are chosen for this study, the filter-wheel technique was selected.
The aim of this project is to build a multispectral 3D-endoscopic setup that allows the intraoperative analysis of cholesteatoma. The analysis results will be augmented to endoscopic image data with further information, easing the process of tissue differentiation and supporting the surgical decision-making process during the treatment. The possibility of sequentially switching between broad-band and narrow-band illumination opens up new possibilities with regard to intraoperative tissue-specific visualizations. In this study, the cholesteatoma in the middle ear of three patients are analyzed to sketch the feasibility of the outlined stereo-spectral imaging system.

2. Materials and Methods

2.1. Imaging Setup

We used a 3D laparoscope as our endoscopic imaging system (Schölly Fiberoptic GmbH, Denzlingen, Germany). The 3D-endoscope is a chip-in-scope system with an RGB Bayer pattern CMOS sensor. The sensor and camera specifications are listed in Table 1. The CMOS sensor used has a sensitivity range from 380 nm to 1100 nm, and the focusing lens has an estimated focal length of 4.63 mm. The focus point is at 48 mm, which defines the optimal working distance (WD) to the surgical area. The external Xenon (Xe) light source holds a UV filter to cut off radiation below 350 nm, as well as an IR filter cutting radiation above 700 nm; see Figure 1a. A filter-wheel is placed between the surgical scene and the Xe light source [28]. Thus, it becomes possible to select specific wavelength bands in the visible spectral range, from 400 nm to 500 nm in steps of 20 nm, by turning the filter-wheel; see Figure 2. The principle of the specific filter-wheel setup used in this study was described in Wisotzky et al. [17,29]. Each filter has a bandwidth in terms of full-width half-maximum of about 20 ± 3 nm. The light is guided through a fluid light-guide with optimal light transmission in the range of 340 nm to 800 nm, as the luminous efficiency is low due to the narrow-band filters, especially for 400 nm to 440 nm; cf. Figure 1b. The usage of a filter-wheel restricts the whole system, as real-time multispectral capturing is not possible and small camera movements can be present between the narrow-band images. We consider the current system setup a proof-of-concept, as the aim of this work is to show the ability to differentiate cholesteatoma from surrounding tissue and to combine this multispectral differentiation with volumetric situs data. A practical system would use a multispectral snapshot sensor with appropriate bands or synchronized lighting with different wavelengths.
During stereo-spectral image acquisition, it is essential to avoid indirect scattered broad-band illumination. Thus, other light sources need to be switched off during the acquisitions. However, due to the endoscopic design, the influence of scattered light during data acquisition can be minimized very effectively as the mastoid cavity is a very enclosed anatomical structure. The situs is alternately illuminated using the filter-wheel with the broad-band mode (light spectrum of Figure 1a) and narrow-band mode (light spectra of the six filters shown in Figure 1b). The surgical scene is captured with visible light (cf. Figure 1a) using the stereoscopic RGB sensor of the endoscope. This results in an image sequence of seven images of the same surgical view, one standard RGB image and six images holding only the reflectance of specific spectral illuminations in the range of 400–500 nm.
Furthermore, we extend the multispectral processing unit with a 3D reconstruction module to perform image-based measurements of the surgical scene. The combination of both methods enables us to compute a true-scale spectral 3D model of the patients’ anatomy, so that image-based volume and size measurements can be performed and registered to preoperative CT data. Nonetheless, a precondition to perform such image-based measurements and 3D reconstruction is a calibrated stereoscopic system. This includes the determination of lens distortion parameters, intrinsic parameters such as focal length and the principal point, as well as the estimation of the extrinsic parameters, i.e., the orientation of the two stereo cameras to each other and the inter-axial distance between both cameras. It is important to note that the working distance affects the accuracy of both techniques, 3D reconstruction and spectral tissue classification. Therefore, it is crucial to have an accurately calibrated endoscopic imaging system that compensates for distance dependent effects. Especially, stereo calibration is of high importance for photogrammetric applications and stereo image processing algorithms as they are highly coupled. In general, our camera calibration technique makes use of the so-called checkerboard pattern [30], which has been intensively discussed in the computer vision community for many applications including medical scenarios [31,32]. However, our calibration method differs from the well-known checkerboard methods in that we apply a model-based approach using image registration techniques to correlate captured patterns with the reference target plane [33].
Our 3D reconstruction pipeline consists of three steps: First, we apply a scene-dependent rectification of image pairs via the detection of robust feature points [34]. Second, we perform a dense disparity estimation [35] using the rectified image pairs as input. Disparity estimation runs fully in parallel on a graphics card with CUDA support. The real-time procedure uses a statistical approach at the sub-pixel level to estimate new correspondences. These correspondences are distributed into the local neighborhood, where new correspondences are determined within the next iteration. This independent propagation of newly estimated correspondences guarantees that the whole image is constantly updated. The obtained dense disparity maps have sub-pixel accurate disparity values. Lastly, the corresponding points are reconstructed in 3D from the sub-pixel positions using triangulation. Thus, we can measure dimensions directly within the image to further support the decision-making process.
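The final triangulation step can be sketched for a rectified stereo pair, where depth follows from the standard pinhole relation Z = f·B/d. This is a minimal numpy illustration with hypothetical intrinsics (`focal_px`, `baseline_mm`, `cx`, `cy`), not the calibrated parameters of the actual endoscope:

```python
import numpy as np

def triangulate(disparity, focal_px, baseline_mm, cx, cy):
    """Convert a dense disparity map (rectified stereo) into metric 3D points.

    Uses the standard relation for rectified pairs: Z = f * B / d.
    All intrinsic values passed below are illustrative, not measured.
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                      # ignore unmatched pixels
    Z = np.zeros_like(disparity, dtype=np.float64)
    Z[valid] = focal_px * baseline_mm / disparity[valid]
    X = (u - cx) * Z / focal_px                # back-project to camera frame
    Y = (v - cy) * Z / focal_px
    return np.stack([X, Y, Z], axis=-1), valid

# toy 2x2 disparity map with hypothetical intrinsics
disp = np.array([[10.0, 20.0], [0.0, 40.0]])
pts, valid = triangulate(disp, focal_px=800.0, baseline_mm=4.0, cx=1.0, cy=1.0)
# e.g., a disparity of 40 px yields a depth of 800 * 4 / 40 = 80 mm
```

Point-to-point distances for metric measurements then follow directly from the Euclidean distance between reconstructed vertices.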
In Wisotzky et al. [29], a spectral calibration process was developed that uses monochromatic sensor data in combination with a filter-wheel setup. In this work, the approach is adapted to RGB sensor data, where each color channel is calibrated and used independently. The analysis of the RGB data can be simplified, as we only consider wavelengths within the spectral range of λ = 400 nm to λ = 500 nm. Therefore, we can omit the red and green channels during spectral illumination because no notable sensitivity is given there: the red channel only becomes sensitive above approximately λ = 550 nm, and the green channel above approximately λ = 480 nm.
Thus, the main analysis is done only on the blue channel of the sensor, and all spectral bands are considered using the illumination λ within the defined spectral range. A similar approach was proposed in Wisotzky et al. [18] using a fully digital surgical microscope. In this study, the system is used to acquire all stereo-spectral information by rotating the filter-wheel. Then, the analysis of the data is done postoperatively. To avoid imaging errors induced by the filter-wheel, caused by movements or additional aberrational effects, the filter-wheel is stopped at each filter position, and the specific wavelength image is acquired [17,29]. This process does not allow real-time analysis, which is not the aim of this study, as we want to identify specific wavelengths usable for the application of cholesteatoma removal.
Despite the importance of the described calibration for robust and efficient surgical guidance, the optical lens properties of the system used are highly relevant to understand the captured image information and to allow a detailed and correct analysis of the multispectral stereoscopic image data. This work investigates the introduced system in terms of its modulation transfer function (MTF) and different limitations of the optical system. The MTF is an industry standard to measure the imaging performance of optical systems. Therefore, we use more than the six spectral filters in the filter-wheel and extend the wavelength range up to λ = 660 nm (eight additional filters) to get an improved understanding of the underlying optical properties of the whole system. Important limitations of an optical system discussed in this study are chromatic aberration and lens distortion. Both effects have a strong influence on the measuring accuracy if the optical system is not calibrated in terms of spectral effects.

2.2. Surgical Cases and Tissue Specification

The patients analyzed in this study were a 58 year-old male, a 19 year-old female, and an 11 year-old female. The first two patients had a long history of recurrent cholesteatoma and consecutive surgical interventions (among others, the canal wall down technique). The third patient presented with cholesteatoma for the first time. All patients were introduced to us with a fetid runny ear, conductive hearing loss, and otalgia. Locally, both recurrent cholesteatoma patients had a large, moist radical cavity with insufficient drainage, and the high facial spur was reduced in the two patients treated with the canal wall down technique.
Intraoperatively, the cholesteatoma was confirmed microscopically with an expansion into the attic. Due to the migration theory and the biologically destructive growth [36,37,38], as well as the history of multiple ear surgeries, the anatomy showed broad osseous destruction with a missing ossicular chain and a partially removed ear canal wall, including an infected mastoid cavity; see Figure 3. The recurrent cholesteatoma probably expanded from a residuum of the reconstructed tympanic membrane in a broad fashion over the round window niche, the intact stapes footplate in the attic, and lateral to the facial nerve into the mastoid cavity; see Figure 4. Interestingly, the entire labyrinth block remained intact. The cholesteatoma could be resected in all three patients and the high facial spur reduced. The radical cavity was reconstructed by means of a concha cartilage graft, temporalis fascia, and a local flap plasty. Sound conduction could be rehabilitated by inserting a total ossicular replacement prosthesis (TORP) in the middle ear.
Written informed consent for the acquisition of stereo-spectral endoscopic data during the standard procedure was provided by all patients. The whole study was in agreement with the ethical approval obtained from the Ethics Committee of the Charité-Universitätsmedizin Berlin under Approval Code EA4/036/19.

2.3. Visualization

Multispectral imaging enables a measurement of spectral information for a complete scene if the camera has been accurately characterized and calibrated. The measured wavelengths λ are used to calculate spectral information for every captured surface area identifying interesting tissue types. In this specific case, the tissues-of-interest (TOI) are bone and cholesteatoma. Both tissue types appear white under normal broad-band illumination and RGB visualization, making a distinct tissue differentiation difficult.
Based on the knowledge from the spectrophotometer analysis of fresh tissue samples of the major present tissues (cholesteatoma and bone) [39], we can identify the best wavelength bands to amplify and highlight cholesteatoma tissue during the surgical treatment. It has been reported that the main difference between these two tissue types in terms of optical properties lies within the spectral range of λ = 400 nm to λ = 575 nm. In that spectral range, cholesteatoma shows a much higher reflectance than bone. Theoretically, cholesteatoma would therefore appear much brighter than bone under illumination with λ = 400 nm to λ = 575 nm. According to that analysis, and to avoid the spectral influences of blood, which has absorption peaks around λ = 555 nm [40], the described six appropriate spectral bands between λ = 400 nm and λ = 500 nm were selected. As further tissue types, e.g., connective tissues, can be present in the surgical area, the spectral behavior of these tissues has to be considered as well. According to different studies [41,42,43], other possibly present tissue types show lower reflectance in the interval of λ = 400 nm to λ = 575 nm compared to cholesteatoma. In addition, these other tissue types do not appear white under normal light conditions and are easier to differentiate. Consequently, we want to visualize the additional relevant tissue information of the spectral range of λ = 400 nm to λ = 575 nm directly in the endoscopic image as an augmented overlay. Therefore, we propose the following basic concept for spatial color enhancement to highlight the identified cholesteatoma regions while maintaining the original color balance and dynamic range. First, we remove endoscopic noise, which influences the spectral image quality for the chosen wavelengths λ. We achieve this by transforming the spectral image into the frequency domain using a Fourier transformation and applying a simple low-pass filter there to remove high-frequency noise patterns.
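This first noise-removal step can be sketched as a circular low-pass mask in the Fourier domain. The cutoff radius below is a hypothetical parameter, as the text does not state a specific value:

```python
import numpy as np

def fft_lowpass(img, cutoff_frac=0.1):
    """Suppress high-frequency noise in a spectral image via a circular
    low-pass mask in the Fourier domain. cutoff_frac is a hypothetical
    fraction of the half-diagonal, not a value from the paper."""
    F = np.fft.fftshift(np.fft.fft2(img))   # move DC to image center
    h, w = img.shape
    cy, cx = h // 2, w // 2
    v, u = np.ogrid[:h, :w]
    radius = cutoff_frac * np.hypot(cy, cx)
    mask = (u - cx) ** 2 + (v - cy) ** 2 <= radius ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

# a constant image has only a DC component and passes through unchanged
flat = np.full((32, 32), 0.5)
out = fft_lowpass(flat, cutoff_frac=0.05)
```

In practice, the cutoff would be tuned per wavelength, since the noise level grows strongly toward the 400 nm band.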
Second, we preserve the dynamic range, as well as the color balance, in the reconstructed spectral RGB image by adding the identified spectral tissue information to one channel only. As surgical images hold most of their information in the R (red) and G (green) channels, the additional spectral information is added to the B (blue) channel. This suits the circumstance that only the calibrated B channel information of the sensor response is used for the spectral analysis, as described in Section 2.1. After calibration, the intended straightforward augmented overlay is achieved using:
$$\begin{pmatrix} R_{res} \\ G_{res} \\ B_{res} \end{pmatrix} = \begin{pmatrix} R_{bb}/\max(R_{bb}) \\ G_{bb}/\max(G_{bb}) \\ (B_{bb}/\max(B_{bb})) + B_{cal}^{nb} \end{pmatrix}, \tag{1}$$
where $R_{bb}$, $G_{bb}$, and $B_{bb}$ are the RGB sensor responses under broad-band ($bb$) illumination and $B_{cal}^{nb}$ represents the calibrated and modified B channel holding the relevant tissue information under narrow-band ($nb$) illumination:
$$B_{cal}^{nb}(x) = \begin{cases} y = B_{\lambda}^{nb}(x) - B_{\lambda}^{cor}(x) & \text{if } y \geq t \\ 0 & \text{if } y < t \end{cases} \tag{2}$$
with $B_{\lambda}^{cor}$ the correction image for the specific illumination $\lambda$ and $t$ a scene- and image-specific threshold suppressing small reflectance responses based on the intensity distribution.
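Assuming images are given as normalized numpy arrays, the per-channel normalization and thresholded blue-channel overlay described above can be sketched directly. The threshold `t` is a hypothetical value, since the paper states it is scene- and image-specific:

```python
import numpy as np

def augmented_overlay(rgb_bb, b_nb, b_cor, t=0.05):
    """Blend thresholded narrow-band reflectance into the blue channel.

    rgb_bb : broad-band RGB image, shape (H, W, 3)
    b_nb   : narrow-band B-channel image for one wavelength
    b_cor  : correction image for that wavelength
    t      : hypothetical threshold suppressing weak responses
    """
    # per-channel normalization of the broad-band image
    res = rgb_bb / rgb_bb.reshape(-1, 3).max(axis=0)
    # corrected narrow-band response, zeroed below the threshold
    y = b_nb - b_cor
    b_cal = np.where(y >= t, y, 0.0)
    res = res.copy()
    res[..., 2] = res[..., 2] + b_cal   # enhance blue channel only
    return res

# toy example: uniform gray broad-band image, one strong narrow-band response
rgb = np.full((2, 2, 3), 0.5)
b_nb = np.array([[0.3, 0.01], [0.2, 0.0]])
b_cor = np.array([[0.1, 0.0], [0.1, 0.0]])
out = augmented_overlay(rgb, b_nb, b_cor, t=0.05)
```

Pixels whose corrected narrow-band response exceeds `t` appear bluish in the overlay, while all other pixels keep their broad-band color balance.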
For intraoperative visualization, it is important to select only specific information that will be useful for the surgeon. In this work, we analyze the six spectral images to find the best wavelength band to differentiate between cholesteatoma and bone using the above augmented overlay strategy. Further, specular reflections can occur through surgical instruments and fluid accumulation (e.g., blood) on tissue areas and lead to unwanted sensor saturation. As specular reflections occur in every spectral image, these areas may be enhanced as well and appear bluish. For the surgeon, this may not be a problem, as instruments and fluid accumulations are easy to identify, but specular reflections can be detected and neglected during the visualization step. Specular image surfaces show a very low color saturation S and a very high color intensity I, while they are hue independent. Therefore, pixels can be easily classified as specular reflectance pixels using an effective threshold method:
$$I = \frac{R+G+B}{3}, \qquad S = \begin{cases} \frac{3}{2}(R - I) & \text{if } (B+R) \geq 2G \\ \frac{3}{2}(I - B) & \text{if } (B+R) < 2G \end{cases} \tag{3}$$
where a pixel has to exceed the intensity threshold $I = 0.8$ and fall below the saturation threshold $S = 0.1$. Identified specular reflectance pixels are ignored during the visualization process.
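This classification can be sketched with the thresholds stated above (I > 0.8, S < 0.1); the example pixels are illustrative, not taken from the recorded data:

```python
import numpy as np

def specular_mask(rgb, i_thresh=0.8, s_thresh=0.1):
    """Flag hue-independent specular pixels: high intensity I and low
    saturation S, following the threshold scheme described in the text."""
    R, G, B = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    I = (R + G + B) / 3.0
    # case distinction on (B + R) vs 2G selects the saturation formula
    S = np.where(B + R >= 2 * G, 1.5 * (R - I), 1.5 * (I - B))
    return (I > i_thresh) & (S < s_thresh)

rgb = np.array([[[0.95, 0.95, 0.95],    # near-white highlight -> specular
                 [0.90, 0.20, 0.10]]])  # saturated red tissue -> kept
mask = specular_mask(rgb)
```

The resulting boolean mask marks the pixels to be excluded from the blue-channel enhancement.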

3. Results

For each patient, two stereo-multispectral endoscopic videos are recorded showing the lateral skull base during the cholesteatoma removal procedure. The scanned situs of Patient 1 is shown in Figure 4. All captured data contain several filter-wheel sequences, resulting in a set of twelve sequences in total. Figure 5 shows a complete sequence of six spectral images between 400 and 500 nm. As the sensor sensitivity is relatively low in the range of λ = 380 nm to λ = 420 nm, the images acquired with the filters at λ = 400 nm and λ = 420 nm hold less intensity than all other images, resulting in a fairly dark image, especially for λ = 400 nm, and a very small signal-to-noise ratio (SNR).

3.1. Optical Properties of the Endoscopic System

The working distance of our stereo-multispectral system is adjusted on the normal broad-band illuminated RGB image to focus on the anatomical structures of highest interest; cf. Figure 4. In the spectral sequence, distortions and aberrations occur, especially for deep blue wavelength illumination in the range of λ = 400 nm to 440 nm. The most prominent effect is strongly blurred bluish spectral images. The blurring in these spectral images has two causes: (1) undesired endoscopic camera movements and/or (2) various optical imaging errors. Undesired small movements may be present in the image because the surgeon holds the endoscopic camera head, which can cause irregular small camera shakes and lead to misaligned spectral images in the sequence. Further, spectral images acquired with a filter-wheel setup are affected by different wavelength-dependent imaging errors such as chromatic aberrations, lens distortion, and light dispersion. In this work, all optical properties of the system, as well as all optical imaging errors, are analyzed and discussed independently of each other. Imaging errors are not induced by the movement of the filter-wheel in this study, because the filter-wheel was stopped at each spectral measurement and remained in a static position. However, all effects influence each other and should be corrected in one λ-dependent calibration step to achieve optimal image quality; this process, however, is beyond the scope of this paper.

3.1.1. Modulation Transfer Function

The degrees of sharpness (DSs) were analyzed using the type 7A9 sharpness indicator by Putora [44]. As shown in Figure 6, all circles are clearly visible for normal white light illumination (a), as well as for illumination with λ = 600 nm (b), indicating a DS of 108.9 lp/mm, while illuminating the pattern with the blue spectrum, e.g., λ = 460 nm (c), only gives 62.3 lp/mm. Medical images show mostly reddish content; thus, the endoscopic optical system is optimized for wavelengths around λ = 600 nm. For shorter, bluish wavelengths, the lens design becomes more complicated regardless of how narrow the spectral band is, because glass materials tend to perform poorly at shorter wavelengths [45].
The DS analysis is a first valuable indicator of the λ-dependence of the MTF. The MTF was analyzed using the Koren 2003 line test chart [46] and shows different behavior of the imaging system at different wavelengths λ. The average 50% and 10% MTF frequencies for the blue spectrum of 400 nm to 460 nm are 38.75 lp/mm and 61.82 lp/mm, respectively, while for the range of 480 nm to 660 nm they are 49.0 lp/mm and 98.7 lp/mm. The 50% and 10% MTF frequencies of each specific filter can be found in Table 2. These results indicate that the images under blue illumination are not suitable for visualization on their own, but can serve as a source of specific spectral information. Further, the reduced image quality makes image correction for the blue channels a major challenge.
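The 50% and 10% MTF frequencies are read off a sampled MTF curve. A sketch of that readout by linear interpolation, on a purely hypothetical sample curve (not measured endoscope data), looks like:

```python
import numpy as np

def mtf_frequency(freqs, mtf, level):
    """Interpolate the spatial frequency (lp/mm) at which a monotonically
    decreasing MTF curve crosses the given contrast level (e.g., 0.5)."""
    # np.interp needs increasing x, so interpolate over the reversed curve
    return float(np.interp(level, mtf[::-1], freqs[::-1]))

# hypothetical sampled MTF curve: contrast vs spatial frequency in lp/mm
freqs = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
mtf = np.array([1.0, 0.9, 0.7, 0.4, 0.2, 0.05])
mtf50 = mtf_frequency(freqs, mtf, 0.5)   # crossing between 40 and 60 lp/mm
mtf10 = mtf_frequency(freqs, mtf, 0.1)   # crossing between 80 and 100 lp/mm
```

Applying the same readout per spectral filter yields the kind of per-wavelength comparison reported in Table 2.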

3.1.2. Chromatic Aberration

The 3D-endoscope has a fixed focus. The λ-dependent refractive index of a lens causes chromatic aberrations, with transversal components [47,48], i.e., along the image plane, as well as longitudinal components [49,50], i.e., along the optical axis. This causes a chromatic focus shift for different wavelengths λ in relation to the working distance, i.e., the distance to the lens. Both aberrations, transversal and longitudinal, result in blurred spectral channels, comparable to a convolution with a low-pass filter. The displacement of the focus plane is larger for small λ (e.g., λ = 400 nm) than for large λ (e.g., λ = 500 nm), which corresponds to the image impressions of Figure 5, where the first two images (a) and (b) appear sharp, while images (d) to (f), λ = 440 nm to λ = 400 nm, are blurry. This is caused by the lenses, which are optimized for sharp images between λ = 480 nm and λ = 650 nm. Accordingly, the focal shift increases in the adjacent spectral regions.

3.1.3. Lens Distortion

Endoscopic imaging systems have a complex lens design due to the limited space and safety considerations in highly constrained environments. Therefore, endoscopes often exhibit wave distortion, a mixture of the two simple distortion types, barrel and pincushion distortion, which arises from system designs that trade off minimizing distortion against maximizing the field of view. This type of distortion shows a smaller deviation from the rectilinear projection, but is λ-dependent [51]: the distortion increases with smaller λ. A calibration that removes this distortion therefore requires special consideration of the different wavelengths. In our setup, the lens and stereo calibration is performed in the broad-band mode, and the λ-dependency is neglected for distortion correction, as depicted in Figure 7. Such distortion correction could be performed for each individual spectral channel, but the low image quality of the blue λ channels (cf. Section 3.1.1) does not allow this to be done properly. Nonetheless, this shows the need for a combined multispectral stereo/lens calibration that increases accuracy by incorporating the λ-dependency.
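To illustrate why a λ-dependent calibration matters, a simple radial (Brown-model) distortion with per-wavelength coefficients can be sketched. The coefficients below are illustrative assumptions, not values fitted to this endoscope:

```python
import numpy as np

def distort_radial(pts, k1, k2=0.0):
    """Apply a radial distortion model to normalized image points.

    A wavelength-dependent calibration would fit separate (k1, k2)
    coefficients per spectral band; the values used below are purely
    illustrative, chosen so shorter wavelengths distort more strongly.
    """
    r2 = np.sum(pts ** 2, axis=-1, keepdims=True)   # squared radius
    return pts * (1.0 + k1 * r2 + k2 * r2 ** 2)

pt = np.array([[0.5, 0.0]])                 # normalized point off-center
blue = distort_radial(pt, k1=-0.30)         # hypothetical short-lambda k1
red = distort_radial(pt, k1=-0.20)          # hypothetical long-lambda k1
# the same scene point lands at different pixels per wavelength band
```

The residual between `blue` and `red` is exactly the kind of per-band misalignment that a single broad-band calibration cannot remove.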

3.1.4. Penetration Depth of Light

The penetration depth of light is also λ-dependent [52]: the larger λ is, the deeper the light penetrates into the tissue, which leads to stronger scattering and depth aberration. Depth aberration itself is likewise λ-dependent [53]. It arises from the refractive-index mismatch between different tissue layers and types and increases with decreasing λ. This can reduce the spatial image resolution and cannot be described and compensated analytically, as it depends on the geometry of the tissues and surfaces.

3.2. Cholesteatoma Visualization

All six images in Figure 5 show a similar behavior for cholesteatoma and bone. As expected, cholesteatoma is much brighter than bone. The images with λ = 420 nm and λ = 400 nm show the lowest intensity levels, and the areas with captured intensity above the noise level correspond to identified cholesteatoma tissue. Therefore, we select λ = 400 nm and λ = 420 nm for further augmented visualization. However, it is also possible to use the narrow-band illumination options with the other wavelengths λ. Nonetheless, it is then necessary to separate the regions of cholesteatoma (with higher intensity) from the regions of bone (with lower intensity) with suitable computer vision methods, e.g., thresholding as one of the simplest methods. Further, specular reflections are present for higher λ, which makes a robust correction of this effect important.
The B channel of the captured blue filter image is corrected using dark and white image information and then normalized. A resulting reflectance image is shown in Figure 8a. This image only holds information corresponding to cholesteatoma. This information is added to the B channel of the next recorded broad-band RGB image (Figure 8b) using Equation (1). The resulting image is shown in Figure 8c. In this enhanced view, it is easy to differentiate between different structures that have been white under normal broad-band illumination as some structures become bluish and others remain white. The bluish structures correspond to cholesteatoma, while the white structures are bone. This behavior is consistent over all analyzed data; see Figure 9. Thus, this enhanced view allows good surgical guidance to differentiate easily between these two tissue types while maintaining the decision process of the surgeon.
All marked regions identified by the surgeon as cholesteatoma in the broad-band images (cf. Figure 4) before and after spectral acquisition correspond to bluish structures in the improved RGB visualization. To confirm the diagnosis of the surgeon, all identified and removed tissue samples are analyzed using normal histological examination.
Specular reflectance can occur on parts of the retractor, as well as on fluid accumulations, reaching sensor saturation and leading to clipping. Therefore, local saturation correction can be necessary to omit over-saturated sensor areas. Figure 10a depicts such over-saturated regions. Accordingly, Figure 10b shows the clear benefit of adjusting saturation locally by omitting misleading information.

3.3. Multispectral Stereo Acquisition

Besides the individual advantages and benefits of stereoscopic and multispectral imaging, their combination results in a new, valuable visualization of intraoperative data. The system design allows generating a 3D model of the patient's anatomy, highlighting and identifying pathological tissue regions, and comparing them to preoperative CT data. For such intraoperative volume analysis, the 3D reconstruction of the exterior surface requires high accuracy. Evaluations using specimens with known dimensions yield empirical accuracies of approximately 1/10 mm over several hundred point-to-point measurements; see Figure 11. Further, it has been shown in the context of visceral surgery that the image-based 3D reconstruction method is real-time capable and has one of the lowest error rates compared to similar approaches [54]. Figure 12 shows the stereoscopic input highlighting identified cholesteatoma for the left and right view, respectively. In both views, the relevant tissue could be consistently identified.
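The metric measurements for a rectified stereo pair rest on standard triangulation. A minimal sketch, where the focal length in pixels is an assumed calibration value and the baseline corresponds to the 4.16 mm interaxial distance listed in Table 1:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_mm):
    """Standard stereo triangulation for a rectified pair: Z = f * B / d.

    focal_px is the focal length expressed in pixels (an assumed
    calibration value); baseline_mm is the interaxial distance,
    4.16 mm for our endoscope (Table 1).
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px
```

Because depth resolution is governed by the smallest resolvable disparity step, sub-pixel (1/5) correspondence estimation directly refines the achievable measurement accuracy (cf. Figure 11).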
We then use the stereo-spectral image pair in Figure 12 for spectral 3D reconstruction to obtain a dense spectral 3D point cloud. The generated point clouds are very dense, with a high level of anatomical detail, and consist of approximately 1.8 million vertices. Figure 13 shows dense point clouds from different viewing angles for each patient of our study group. Since the point cloud combines the spectral and spatial information of the patient, it supports two surgical diagnosis and treatment applications. First, we can highlight cholesteatoma areas in 3D and mask out non-relevant tissue areas, allowing a better understanding of the size and shape of the patient's cholesteatoma. Second, tissue can be tracked and monitored in terms of how much volume has been removed from the middle ear anatomy, e.g., the ear canal wall, during a procedure. For further surgical assessment, the corresponding preoperative CT segmentation results (Figure 14) can then be compared to the intraoperative 3D measurements.
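Highlighting cholesteatoma in 3D then amounts to carrying a per-point spectral label through the reconstruction. A sketch, under the assumption that the mask is aligned with the reconstruction's point order:

```python
import numpy as np

def filter_point_cloud(points, colors, cholesteatoma_mask):
    """Keep only the 3D points whose source pixels were spectrally
    classified as cholesteatoma.

    points and colors are (N, 3) arrays; cholesteatoma_mask is a boolean
    array of length N aligned with the point order (an assumption of
    this sketch).
    """
    idx = np.asarray(cholesteatoma_mask, dtype=bool)
    return points[idx], colors[idx]
```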

4. Discussion

Besides the investigation of the spectral and physical sensor characteristics, we presented first options for surgical visualization that add multispectral information to RGB images without concealing relevant anatomical structures and features. In addition, the overall image and color impression is preserved, which allows the surgeon to intuitively understand the visually annotated surgical image showing the multispectral information. Moreover, strong local specular reflections can be corrected to discard misleading information. However, this work presents a proof-of-concept for monitoring cholesteatoma removal, with a focus on system analysis and selection of optimal wavelengths rather than on a perfect correction of specular reflections.
As shown in the analyses of the optical properties, the image quality is strongly λ-dependent. A focus adaptation in combination with a λ-dependent stereo calibration would allow a joint treatment of all λ-dependent and λ-independent effects, such as distortion or aberration. This would then result in sharp images for all spectral illumination options and channels. Furthermore, we used the filter-wheel in this study only for analysis purposes, as it has some disadvantages, such as the lack of real-time capability [17]. Therefore, we suggest an adapted setup for clinical usage employing a synchronized light source with white light and different blue flashes. Equally, a continuously running filter-wheel containing only the identified λ = 400 nm and λ = 420 nm filters, as well as empty slots for broad-band illumination, seems feasible. In both cases, i.e., blue flashes and the filter-wheel, the blue images will not appear in the video stream directly but have to be processed into an augmented overlay, as proposed in this work.
Multispectral tissue differentiation has been intensively studied and analyzed using machine learning methods, as these can capture high-dimensional tissue behavior [55,56,57]. In this study, knowledge about the spectral behavior of cholesteatoma and bone, together with only a few spectral bands, is used. Therefore, differentiation using machine learning seems unnecessary, as it would require a large amount of training data and would be computationally intensive compared to the discussed blue flashes.
Commercially, Karl Storz and Diaspective Vision have introduced an integrated multispectral endoscope [58], although the system is still under development. Beside this interesting new approach, a common way to visualize specific anatomical structures is fluorescence imaging, which depends on the injection of indocyanine green (ICG) into the blood as a biomarker. Another commercially available method is narrow-band imaging, in which differences in blood oxygenation can be visualized [59]. Both methods can only highlight areas with sufficient blood flow; an enriched visualization of customizable structures (e.g., bone, fat tissue) is not possible with these approaches, and therefore, a direct comparison seems difficult. Our presented method has the potential to visualize important tissue structures without introducing chemical agents into the patient. However, it seems feasible that commercial systems can achieve similar effects if they allow selecting similar wavelength bands and visualization options.
The combination of spectral visualization and 3D reconstruction opens up new possibilities for the integration of intraoperative and preoperative data, e.g., the comparison of size, shape, or removed tissue volume. In conclusion, this work shows very promising results for stereo-spectral endoscopic tissue analysis using a sophisticated filter-wheel setup. It seems feasible that such a multimodal system can be integrated into a constrained environment like the operating room and open up new opportunities for intraoperative surgical assistance and image-guided interventions.

Author Contributions

Conceptualization: E.L.W., J.-C.R., and F.C.U.; software: J.-C.R. and E.L.W.; validation: E.L.W., J.-C.R., and F.C.U.; investigation: F.C.U.; data curation: E.L.W., J.-C.R., and U.W.; writing, original draft: E.L.W., J.-C.R., and F.C.U.; writing, review and editing: A.H. and P.E.; visualization: E.L.W., J.-C.R., and U.W.; supervision: A.H., P.E., and F.C.U.; project administration: E.L.W., J.-C.R., A.H., and P.E.; funding acquisition: E.L.W. and J.-C.R. All authors read and agreed to the published version of the manuscript.

Funding

This research was funded by the German Federal Ministry of Education and Research (BMBF) under Grant Number 16SV8061.

Acknowledgments

The authors would like to thank Schölly Fiberoptic GmbH, Germany, for providing the 3D laparoscope, as well as ImFusion GmbH, Munich, Germany, for providing the software ImFusion Suite for medical CT image analysis.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
RGB: Red-green-blue color model
CT: Computed tomography
MTF: Modulation transfer function
SNR: Signal-to-noise ratio
TOI: Tissue-of-interest
UV: Ultraviolet
IR: Infrared
WD: Working distance
Xe: Xenon lamp
CMOS: Complementary metal-oxide-semiconductor
TORP: Total ossicular replacement prosthesis

References

  1. Uecker, F.C. Hals-Nasen-Ohren-Heilkunde in Frage und Antwort: Fragen und Fallgeschichten zur Vorbereitung auf Mündliche Prüfungen Während des Semesters und Examen; Elsevier, Urban & Fischer Verlag: München, Germany, 2006. [Google Scholar]
  2. Dornelles, C.; Costa, S.S.d.; Meurer, L.; Schweiger, C. Some considerations about acquired adult and pediatric cholesteatomas. Rev. Bras. Otorrinolaringologia 2005, 71, 536–546. [Google Scholar] [CrossRef] [Green Version]
  3. Fassett, D.R.; Kan, P.; Chin, S.S.; Couldwell, W.T. Cholesteatoma of the clivus. Skull Base 2006, 16, 45–47. [Google Scholar] [CrossRef] [PubMed]
  4. Sudhoff, H.; Hildmann, H. Gegenwärtige Theorien zur Cholesteatomentstehung. HNO 2003, 51, 71–83. [Google Scholar] [CrossRef] [PubMed]
  5. Gilberto, N.; Custódio, S.; Colaço, T.; Santos, R.; Sousa, P.; Escada, P. Middle ear congenital cholesteatoma: Systematic review, meta-analysis and insights on its pathogenesis. Eur. Arch. Oto-Rhino-Laryngol. 2020, 277, 987–998. [Google Scholar] [CrossRef] [PubMed]
  6. Keeler, J.A.; Kaylie, D.M. Cholesteatoma: Is a second stage necessary? Laryngoscope 2016, 126, 1499–1500. [Google Scholar] [CrossRef]
  7. Gioacchini, F.M.; Cassandro, E.; Alicandri-Ciufelli, M.; Kaleci, S.; Cassandro, C.; Scarpa, A.; Re, M. Surgical outcomes in the treatment of temporal bone cerebrospinal fluid leak: A systematic review. Auris Nasus Larynx 2018, 45, 903–910. [Google Scholar] [CrossRef]
  8. Prasad, S.; La Melia, C.; Medina, M.; Vincenti, V.; Bacciu, A.; Bacciu, S.; Pasanisi, E. Long-term surgical and functional outcomes of the intact canal wall technique for middle ear cholesteatoma in the paediatric population. Acta Otorhinolaryngol. Ital. 2014, 34, 354. [Google Scholar]
  9. Olszewska, E.; Wagner, M.; Bernal-Sprekelsen, M.; Ebmeyer, J.; Dazert, S.; Hildmann, H.; Sudhoff, H. Etiopathogenesis of cholesteatoma. Eur. Arch. Oto-Rhino-Laryngol. Head Neck 2004, 261, 6–24. [Google Scholar] [CrossRef]
  10. Russo, C.; Elefante, A.; Cavaliere, M.; Di Lullo, A.M.; Motta, G.; Iengo, M.; Brunetti, A. Apparent diffusion coefficients for predicting primary cholesteatoma risk of recurrence after surgical clearance. Eur. J. Radiol. 2020, 125, 108915. [Google Scholar] [CrossRef]
  11. Black, B.; Gutteridge, I. The prevention of recurrent cholesteatoma in CWU surgery: The use of titanium sheeting. Otol. Neurotol. 2017, 38, 1290–1295. [Google Scholar] [CrossRef]
  12. Kazahaya, K.; Potsic, W.P. Congenital cholesteatoma. Curr. Opin. Otolaryngol. Head Neck Surg. 2004, 12, 398–403. [Google Scholar] [CrossRef] [PubMed]
  13. Blanco, P.; González, F.; Holguín, J.; Guerra, C. Surgical management of middle ear cholesteatoma and reconstruction at the same time. Colomb. Med. 2014, 45, 127–131. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Smith, J.A.; Danner, C.J. Complications of chronic otitis media and cholesteatoma. Otolaryngol. Clin. N. Am. 2006, 39, 1237–1255. [Google Scholar] [CrossRef] [PubMed]
  15. Reddy, R.; Echanique, K.; Jyung, R. Iatrogenic skull base cholesteatoma. Ear Nose Throat J. 2018, 97, E41–E42. [Google Scholar] [PubMed]
  16. Gard, N.; Rosenthal, J.C.; Jurk, S.; Schneider, A.; Eisert, P. Image-based measurement by instrument tip tracking for tympanoplasty using digital surgical microscopy. In Proceedings of the Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling, San Diego, CA, USA, 17–19 February 2019; pp. 318–328. [Google Scholar] [CrossRef]
  17. Wisotzky, E.L.; Uecker, F.C.; Arens, P.; Dommerich, S.; Hilsmann, A.; Eisert, P. Intraoperative hyperspectral determination of human tissue properties. J. Biomed. Opt. 2018, 23, 91409. [Google Scholar] [CrossRef] [Green Version]
  18. Wisotzky, E.L.; Rosenthal, J.C.; Eisert, P.; Hilsmann, A.; Schmid, F.; Bauer, M.; Schneider, A.; Uecker, F.C. Interactive and Multimodal-based Augmented Reality for Remote Assistance using a Digital Surgical Microscope. In Proceedings of the IEEE VR 2019, The 26th IEEE Conference on Virtual Reality and 3D User Interfaces, Osaka, Japan, 23–27 March 2019; pp. 1477–1484. [Google Scholar] [CrossRef]
  19. Kumar, A.; Wang, Y.-Y.; Wu, C.-J.; Liu, K.-C.; Wu, H.-S. Stereoscopic laparoscopy using depth information from 3D model. In Proceedings of the 2014 IEEE International Symposium on Bioelectronics and Bioinformatics (IEEE ISBB 2014), Chung Li, Taiwan, 11–14 April 2014; pp. 1–4. [Google Scholar]
  20. Augmented reality navigation for liver resection with a stereoscopic laparoscope. Comput. Methods Programs Biomed. 2020, 187, 105099. [CrossRef]
  21. Lu, G.; Fei, B. Medical hyperspectral imaging: A review. J. Biomed. Opt. 2014, 19, 10901. [Google Scholar] [CrossRef]
  22. Salomatina, E.V.; Jiang, B.; Novak, J.; Yaroslavsky, A.N. Optical properties of normal and cancerous human skin in the visible and near-infrared spectral range. J. Biomed. Opt. 2006, 11, 64026. [Google Scholar] [CrossRef]
  23. Zuzak, K.J.; Francis, R.P.; Wehner, E.F.; Smith, J.; Litorja, M.; Allen, D.W.; Tracy, C.; Cadeddu, J.; Livingston, E. Hyperspectral imaging utilizing LCTF and DLP technology for surgical and clinical applications. In Proceedings of the Design and Quality for Biomedical Technologies II, San Jose, CA, USA, 26 January 2009; International Society for Optics and Photonics: Bellingham, WA, USA; p. 71700C. [Google Scholar]
  24. Holmer, A.; Marotz, J.; Wahl, P.; Dau, M.; Kämmerer, P.W. Hyperspectral imaging in perfusion and wound diagnostics–methods and algorithms for the determination of tissue parameters. Biomed. Eng./Biomed. Tech. 2018, 63, 547–556. [Google Scholar] [CrossRef]
  25. Luthman, A.S.; Dumitru, S.; Quiros-Gonzalez, I.; Joseph, J.; Bohndiek, S.E. Fluorescence hyperspectral imaging (fHSI) using a spectrally resolved detector array. J. Biophotonics 2017, 10, 840–853. [Google Scholar] [CrossRef]
  26. Geelen, B.; Blanch, C.; Gonzalez, P.; Tack, N.; Lambrechts, A. A tiny VIS-NIR snapshot multispectral camera. In Proceedings of the Advanced Fabrication Technologies for Micro/Nano Optics Photonics VIII, San Francisco, CA, USA, 7–12 February 2015; p. 937414. [Google Scholar]
  27. Garini, Y.; Young, I.T.; McNamara, G. Spectral Imaging: Principles and Applications. Cytometry Part A 2006, 69, 735–747. [Google Scholar] [CrossRef] [PubMed]
  28. Wisotzky, E.L.; Rosenthal, J.C.; Hilsmann, A.; Eisert, P.; Uecker, F.C. A multispectral 3D-Endoscope for Cholesteatoma Removal. Curr. Directions Biomed. Eng. 2020, 6. in press. [Google Scholar] [CrossRef]
  29. Wisotzky, E.L.; Kossack, B.; Uecker, F.C.; Arens, P.; Dommerich, S.; Hilsmann, A.; Eisert, P. Validation of two techniques for intraoperative hyperspectral human tissue determination. In Proceedings of the Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling, San Diego, CA, USA, 17–19 February 2019; p. 109511Z. [Google Scholar] [CrossRef]
  30. Rosenthal, J.C.; Gard, N.; Schneider, A.; Eisert, P. Kalibrierung stereoskopischer Systeme für medizinische Messaufgaben. In Proceedings of the 16th Annual CURAC Conference, Hanover, Germany, 5–7 October 2017. [Google Scholar]
  31. Zhang, Z. Flexible camera calibration by viewing a plane from unknown orientations. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 20–27 September 1999. [Google Scholar]
  32. Garrido-Jurado, S.; Muñoz-Salinas, R.; Madrid-Cuevas, F.J.; Marín-Jiménez, M.J. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognit. 2014, 47, 2280–2292. [Google Scholar] [CrossRef]
  33. Eisert, P. Model-based camera calibration using analysis by synthesis techniques. In Proceedings of the International Workshop on Vision, Modeling, and Visualization, Erlangen, Germany, 20–22 November 2002; pp. 307–314. [Google Scholar]
  34. Zilly, F.; Müller, M.; Eisert, P.; Kauff, P. Joint Estimation of Epipolar Geometry and Rectification Parameters using Point Correspondences for Stereoscopic TV Sequences. In Proceedings of the 3DPVT, Espace Saint Martin, Paris, France, 17–20 May 2010. [Google Scholar]
  35. Waizenegger, W.; Feldmann, I.; Schreer, O.; Kauff, P.; Eisert, P. Real-time 3D body reconstruction for immersive TV. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 360–364. [Google Scholar]
  36. Semaan, M.T.; Megerian, C.A. The pathophysiology of cholesteatoma. Otolaryngol. Clin. N. Am. 2006, 39, 1143–1159. [Google Scholar] [CrossRef] [PubMed]
  37. Jackler, R.K.; Santa Maria, P.L.; Varsak, Y.K.; Nguyen, A.; Blevins, N.H. A new theory on the pathogenesis of acquired cholesteatoma: mucosal traction. Laryngoscope 2015, 125, S1–S14. [Google Scholar] [CrossRef]
  38. Hüttenbrink, K.B. A new theory interprets the development of a retraction pocket as a natural self-healing process. Eur. Arch. Oto-Rhino-Laryngol. 2019, 276, 367–373. [Google Scholar] [CrossRef]
  39. Wisotzky, E.L.; Arens, P.; Dommerich, S.; Hilsmann, A.; Eisert, P.; Uecker, F.C. Determination of optical properties of cholesteatoma in the spectral range of 250 to 800 nm. Biomed. Opt. Express 2020, 11, 1489–1500. [Google Scholar] [CrossRef]
  40. Friebel, M.; Roggan, A.; Müller, G.J.; Meinke, M.C. Determination of optical properties of human blood in the spectral range 250 to 1100 nm using Monte Carlo simulations with hematocrit-dependent effective scattering phase functions. J. Biomed. Opt. 2006, 11, 34021. [Google Scholar] [CrossRef]
  41. Wisotzky, E.L.; Uecker, F.C.; Dommerich, S.; Hilsmann, A.; Eisert, P.; Arens, P. Determination of optical properties of human tissues obtained from parotidectomy in the spectral range of 250 to 800 nm. J. Biomed. Opt. 2019, 24, 125001. [Google Scholar] [CrossRef]
  42. Lister, T.; Wright, P.A.; Chappell, P.H. Optical properties of human skin. J. Biomed. Opt. 2012, 17, 090901. [Google Scholar] [CrossRef]
  43. Jacques, S.L. Optical properties of biological tissues: A review. Phys. Med. Biol. 2013, 58, R37–R61. [Google Scholar] [CrossRef] [PubMed]
  44. Putora, I. The sharpness indicator. SMPTE J. 1998, 107, 106–112. [Google Scholar] [CrossRef]
  45. Edmund Optics. Wavelength Effects on Performance. Available online: https://www.edmundoptics.com/knowledge-center/application-notes/imaging/wavelength-effects-on-performance/ (accessed on 16 September 2020).
  46. Koren, N. ISO 12233 Test Chart. Available online: http://www.normankoren.com/Tutorials/MTF.html (accessed on 16 September 2020).
  47. Klein, J.; Brauers, J.; Aach, T. Spatial and spectral analysis and modeling of transversal chromatic aberrations and their compensation. In Proceedings of the Conference on Colour in Graphics, Imaging, and Vision, Joensuu, Finland, 14–17 June 2010; Society for Imaging Science and Technology: Springfield, VA, USA; pp. 516–522. [Google Scholar]
  48. Brauers, J.; Aach, T. Geometric calibration of lens and filter distortions for multispectral filter-wheel cameras. IEEE Trans. Image Process. 2011, 20, 496–506. [Google Scholar] [CrossRef] [PubMed]
  49. Klein, J.; Aach, T. Multispectral filter wheel cameras: Modeling aberrations for filters in front of lens. In Proceedings of the Digital Photography VIII. International Society for Optics and Photonics, Burlingame, CA, USA, 23–24 January 2012; p. 82990R. [Google Scholar]
  50. Klein, J. Multispectral imaging and image processing. In Proceedings of the Image Processing: Algorithms and Systems XII, San Francisco, CA, USA, 3–5 February 2014; International Society for Optics and Photonics: Bellingham, WA, USA; p. 90190Q. [Google Scholar]
  51. Wang, Q.; Cheng, W.C.; Suresh, N.; Hua, H. Development of the local magnification method for quantitative evaluation of endoscope geometric distortion. J. Biomed. Opt. 2016, 21, 56003. [Google Scholar] [CrossRef] [Green Version]
  52. Wilson, B.; Patterson, M. The physics of photodynamic therapy. Phys. Med. Biol. 1986, 31, 327. [Google Scholar] [CrossRef]
  53. Kubby, J.A. Adaptive Optics for Biological Imaging; Taylor & Francis: Boca Raton, FL, USA, 2013. [Google Scholar]
  54. Rosenthal, J.C.; Hu, Z.; Gard, N.; Eisert, P. Endoscopic Vision Challenge—Stereo Correspondence and Reconstruction of Endoscopic Data (SCARED), organized by Intuitive Surgical. In Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Shenzhen, China, 13–17 October 2019. [Google Scholar]
  55. Fei, B. Chapter 3.6—Hyperspectral imaging in medical applications. In Hyperspectral Imaging, Series: Data Handling in Science and Technology, Volume 32; Amigo, J.M., Ed.; Elsevier: Amsterdam, The Netherlands, 2020; pp. 523–565. [Google Scholar] [CrossRef]
  56. Ortega, S.; Halicek, M.; Fabelo, H.; Camacho, R.; Plaza, M.d.L.L.; Godtliebsen, F.; Callicó, G.M.; Fei, B. Hyperspectral Imaging for the Detection of Glioblastoma Tumor Cells in H&E Slides Using Convolutional Neural Networks. Sensors 2020, 20. [Google Scholar] [CrossRef] [Green Version]
  57. Halicek, M.; Dormer, J.D.; Little, J.V.; Chen, A.Y.; Fei, B. Tumor detection of the thyroid and salivary glands using hyperspectral imaging and deep learning. Biomed. Opt. Express 2020, 11, 1383–1400. [Google Scholar] [CrossRef]
  58. Hyperspectral Imaging—Technology Partnership between KARL STORZ and Diaspective Vision, 2020. Available online: https://www.karlstorz.com/gb/en/hyperspektrale-bildgebung-technologiepartnerschaft-zwischen-karl-storz-und-diaspective-vision.htm (accessed on 2 September 2020).
  59. Abdullah, B.; Rasid, N.S.A.; Lazim, N.M.; Volgger, V.; Betz, C.S.; Mohammad, Z.W.; Hassan, N.F.H.N. Ni endoscopic classification for Storz Professional Image Enhancement System (SPIES) endoscopy in the detection of upper aerodigestive tract (UADT) tumours. Sci. Rep. 2020, 10, 1–7. [Google Scholar] [CrossRef]
Figure 1. These plots show (a) the spectrum of the Xe illumination source, which is used for broad-band illumination of the situs, as well as (b) the six filters used from 400 nm to 500 nm for the narrow-band illumination mode. The Xe light source (a) contains cut-off filters at λ = 350 nm, as well as λ = 700 nm, and the illumination intensity I is normalized. In (b), the specific spectral radiance for each filter is presented to show the differences between the filter intensities.
Figure 2. Stereo-spectral system setup: (A) the Xe light source with the filter-wheel, (B) the capturing unit, and (C) the endoscopic head.
Figure 3. CT scan of Patient #1. (Left) Showing the complete CT slice. (Top right) Magnified CT-slice showing the patient’s left middle ear anatomy. In this view, (A) bone, (B) cholesteatoma, and (C) the handle of malleus covered with cholesteatoma. (Bottom right) Magnification of a 2nd CT-slice with (A) bone, (B) cholesteatoma, (D) connective tissue in the pre-surgical removed canal wall, and (E) the nervus facialis embedded in bone.
Figure 4. This endoscopic image shows the situs of the first patient captured at the beginning of the spectral data acquisition. In front, a large area of bone structure (A) is visible. Fragments of cholesteatoma (B) are present at different parts of the image. The round window niche (C) is visible in the background.
Figure 5. These images (a–f) show one complete captured sequence of the spectral images of Patient #2.
Figure 6. These images show multispectral filter-wheel acquisitions scanning the central circle pattern of the sharpness indicator by Putora [44]. (a) White light illumination and (b) 600 nm illumination: all circles are clearly visible, indicating a high degree of sharpness (DS) of 108.9 lp/mm; (c) 460 nm illumination: only thickest lines are visible with a computed DS of 62.3 lp/mm.
Figure 7. Stereo image pairs in anaglyph mode near the convergence plane showing error values before and after applying stereo lens distortion correction. (a) Distorted and non-rectified stereo image pairs. (b) Rectified stereo image pairs with eliminated lens distortion and correct visualization of straight lines.
Figure 8. Captured data of Patient #2: (a) spectrally segmented cholesteatoma information with λ = 420 nm illumination; (b) normal RGB view with broad-band illumination; (c) enhanced and augmented spectral RGB view.
Figure 9. Augmented cholesteatoma visualization for the study group of three patients: (a,b) Visualizations using the spectral information at λ = 420 nm. (c) Visualizations using the spectral information at λ = 400 nm.
Figure 10. Importance of specular reflectance correction: (a) Enhanced RGB cholesteatoma visualization without specular correction showing wrongly detected regions. (b) Enhanced RGB cholesteatoma visualization with specular reflectance correction.
Figure 11. The comparison of the measured distances with ground truth data shows a high measurement accuracy: (a) specimen evaluation showing ground truth distances and (b) image-based measurement results with color-encoded depth information (red indicates far range and blue close range). The accuracy consideration in (c) shows an increased accuracy for larger working distances through sub-pixel (1/5) accurate stereo correspondences, which is of high importance for surgical interventions, as typical working distances are between 20 and 60 mm.
Figure 12. Stereo-spectral image pair with highlighted cholesteatoma tissue.
Figure 13. 3D reconstructed spectral point cloud indicating the spatial and depth-related spread of cholesteatoma tissue. The point clouds consist of approximately 1.8 million vertices. Gaps in the 3D model occur from occlusion and missing stereo correspondences. (Top row) Patient #1; (Middle row) Patient #2; (Bottom row) Patient #3.
Figure 14. The preoperative CT segmentation from Patient #1 (Top row) and Patient #2 (Middle and Bottom rows) give a first good indicator about the size, shape, and volume of the cholesteatoma.
Table 1. Specifications of the 3D-endoscope camera: sensor and calibrated lens values.
Output Resolution: Interlaced, 1920 × 1080 px
Frame Rate: 25 fps
Endoscope Diameter: approximately 9.4 mm
Light Source: Xenon (Xe), 300 W
Focal Length: 4.63 mm
Interaxial Distance: 4.16 mm
Table 2. The 50 % and 10 % MTF frequencies of all filters used.
Illumination   MTF 50%       MTF 10%
white          84.1 lp/mm    99.5 lp/mm
400 nm         40.2 lp/mm    65.8 lp/mm
420 nm         34.7 lp/mm    60.7 lp/mm
440 nm         35.9 lp/mm    57.8 lp/mm
460 nm         44.3 lp/mm    63.0 lp/mm
480 nm         51.0 lp/mm    98.4 lp/mm
500 nm         53.8 lp/mm    106.9 lp/mm
520 nm         47.9 lp/mm    98.4 lp/mm
540 nm         46.0 lp/mm    98.4 lp/mm
560 nm         47.4 lp/mm    98.5 lp/mm
580 nm         47.8 lp/mm    99.1 lp/mm
600 nm         52.0 lp/mm    98.7 lp/mm
620 nm         51.6 lp/mm    99.3 lp/mm
640 nm         47.0 lp/mm    94.9 lp/mm
660 nm         45.5 lp/mm    94.4 lp/mm

MDPI and ACS Style

Wisotzky, E.L.; Rosenthal, J.-C.; Wege, U.; Hilsmann, A.; Eisert, P.; Uecker, F.C. Surgical Guidance for Removal of Cholesteatoma Using a Multispectral 3D-Endoscope. Sensors 2020, 20, 5334. https://doi.org/10.3390/s20185334

