Open Access | CC BY 4.0 license | Published by De Gruyter, February 12, 2021

Spectral imaging and spectral LIDAR systems: moving toward compact nanophotonics-based sensing

Nanxi Li, Chong Pei Ho, I-Ting Wang, Prakash Pitchappa, Yuan Hsing Fu, Yao Zhu and Lennon Yao Ting Lee
From the journal Nanophotonics

Abstract

With the emerging trends of big data and the internet-of-things, sensors with compact size, low cost and robust performance are highly desirable. Spectral imaging and spectral LIDAR systems measure the spectral and 3D information of the ambient environment, and have been widely applied in areas including environmental monitoring, autonomous driving, biomedical imaging, biometric identification, archaeology and art conservation. In this review, modern applications of state-of-the-art spectral imaging and spectral LIDAR systems over the past decade are summarized and presented. Furthermore, the progress in the development of compact, nanophotonics-based spectral imaging and LIDAR sensing systems is reviewed, covering the most recent work on subwavelength-scale nanostructure-based functional devices for spectral imaging and on optical frequency comb-based LIDAR sensing. These compact systems will drive the translation of spectral imaging and LIDAR sensing from table-top toward portable solutions for consumer electronics applications. In addition, future perspectives on nanophotonics-based spectral imaging and LIDAR sensing are presented.

1 Introduction

Optical imaging and sensing systems are key components in industrial automation and consumer electronics. The wide deployment of these sensing systems enables the data generation needed to meet the emerging global trends of big data and the internet-of-things [1]. To sense the spectral information of an object, spectral imaging technology has been developed and widely applied. A spectral imager collects 2-dimensional (2D) images of the object at different wavelengths and forms an imaging stack, or data cube; at each pixel, the cube contains the spectral information of the object at that location. From this spectral information, the material or chemical composition of the object can be determined. Depending on the number of spectral bands within the data cube, spectral imaging can be subcategorized into multispectral and hyperspectral imaging, which typically contain 3–10 bands and dozens to hundreds of bands, respectively [2]. Spectral imaging technology, which obtains both spatial and spectral information, was originally applied in Earth remote sensing [3]. Currently, it is widely utilized in both remote and indoor sensing, covering Earth observation and geo-information studies [4], [5], [6], [7] as well as optical sorting and pharmaceutical analysis [8], [9].
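
To make the data-cube picture concrete, the short sketch below builds a hypothetical (rows × columns × bands) stack in Python and reads out the spectrum at a single pixel; the image size, band centers and reference signature are illustrative placeholders rather than values from any cited system.

```python
import numpy as np

# Hypothetical spectral image stack: one 2D frame per wavelength, stored as a
# (rows, cols, bands) data cube. Eight bands would be multispectral;
# dozens to hundreds of bands would be hyperspectral.
wavelengths_nm = np.linspace(450, 950, 8)             # illustrative band centers
cube = np.random.rand(480, 640, wavelengths_nm.size)  # placeholder sensor data

# At any pixel (row, col), the cube yields the full spectrum of that scene point.
row, col = 240, 320
spectrum = cube[row, col, :]                          # shape: (8,)

# Toy material test: correlate the pixel spectrum against a known reference
# signature (e.g., from a spectral library) to judge material composition.
reference = np.exp(-((wavelengths_nm - 700.0) / 120.0) ** 2)  # made-up signature
similarity = np.corrcoef(spectrum, reference)[0, 1]
print(f"{wavelengths_nm.size} bands, similarity to reference = {similarity:.2f}")
```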

In addition to spectral information, light detection and ranging (LIDAR) technology provides an effective solution for sensing the 3-dimensional (3D) information of an object. A LIDAR system primarily consists of a light source and a detector. By tracking the signal reflected from an object in the ambient environment, the location and velocity of the object can be obtained, and the location information can then be used to reconstruct a 3D image of the object. LIDAR technology has been widely used in advanced driver-assistance systems (ADAS), autonomous driving and 3D sensing, becoming the eyes with which robots and cars sense the ambient environment. LIDAR has also been combined with the aforementioned spectral imaging technology to realize spectral LIDAR sensing systems [10], [11], [12], [13], which can determine the shape as well as the material composition of objects, since different materials have unique reflectance spectra. For example, the spectral reflectances of various plant species [14], gravel grain sizes [15] and asphalt surfaces [16] differ and hence can be distinguished by a multispectral imaging system.
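
As a minimal numerical illustration of the ranging principle (a generic pulsed time-of-flight calculation, not any specific commercial system), the round-trip time of the reflected signal gives the distance, and the change of distance between two measurements gives the radial velocity:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range_m(round_trip_time_s: float) -> float:
    """Target range from pulse round-trip time (out and back, hence the /2)."""
    return C * round_trip_time_s / 2.0

def radial_velocity_mps(r0_m: float, r1_m: float, dt_s: float) -> float:
    """Radial velocity from the change in range between two measurements."""
    return (r1_m - r0_m) / dt_s

# Illustrative numbers: a return detected 400 ns after emission is ~60 m away;
# a slightly shorter round trip 10 ms later means the target is approaching.
r0 = tof_range_m(400e-9)   # ~59.96 m
r1 = tof_range_m(398e-9)   # ~59.66 m
print(r0, radial_velocity_mps(r0, r1, 10e-3))  # ~ -30 m/s (closing)
```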

Modern applications of spectral imaging and spectral LIDAR systems include environmental monitoring [3], [10], [11], [17], autonomous driving [18], [19], [20], biomedical imaging [2], [21], [22], biometric identification [23], [24], archaeology and art conservation [25], [26], as illustrated in the left panel of Figure 1. These applications are enabled by the current state-of-the-art spectral imaging and spectral LIDAR systems. There is also a growing trend to make these systems more compact, lighter and less power-hungry. Nanophotonics technology, with its capability to provide chip-scale high-performance functional devices, has been exploited to meet this emerging trend [27], [28], [29]. Comprehensive reviews on spectral imaging technologies and their applications have been reported before [2], [22], [25], [30], [31], [32]; however, a progress report on nanophotonics-based spectral imaging and LIDAR sensing systems is lacking. In this review, we summarize the recent research on spectral imaging and spectral LIDAR systems, including nanophotonics-based sensing systems. The modern applications of the current state-of-the-art spectral imaging and spectral LIDAR systems are presented in Section 2, together with a summary table categorizing the research works of the past decade by application, sensing mechanism, sensor type and working wavelength. Following that, Section 3 reviews the recent development of nanophotonics-based spectral imaging and LIDAR sensing systems, with a summary table organized by nanostructured material, sensing mechanism, application and wavelength. Finally, Section 4 presents a summary of the review and an outlook on future research directions in spectral imaging and LIDAR sensing systems. The overall content is illustrated in Figure 1.

Figure 1: Overview of spectral imaging and spectral LIDAR systems, applications and future outlook.

Left panel: Modern applications chart of the state-of-the-art spectral imaging and spectral LIDAR sensing systems. Inset images: (top left) 2-dimensional (2D) multispectral images of an urban area, adapted with permission from the study by Morsy et al. [11]; licensed under a Creative Commons Attribution license. (top and bottom right) A point cloud captured by a line-scanning LIDAR system and a schematic of the LIDAR measurement setup, both adapted from the study by Taher [18] with permission; copyright Josef Taher, Finnish Geospatial Research Institute FGI. (bottom left and middle) Schematic of a multispectral facial recognition system setup and its light source, both adapted with permission from the study by Steiner et al. [23]; licensed under a Creative Commons Attribution license. Middle panel: Nanophotonics-based sensing systems. Inset images: (top and middle) Scanning electron microscopy (SEM) images of the fabricated color filters and an optical image of color filters integrated with a detector array, both adapted with permission from the study by Shah et al. [33]; licensed under a Creative Commons Attribution license. (bottom) Schematic of a dual-comb-based LIDAR system, adapted from the study by Trocha et al. [34]; reprinted with permission from AAAS. Right panel: Outlook of future development work for compact spectral imaging and LIDAR sensing systems.

2 Modern applications of the state-of-the-art spectral imaging and spectral LIDAR sensing systems

In this section, modern applications of the state-of-the-art spectral imaging and spectral LIDAR systems are reviewed. The subsections are categorized by main application area: Subsections 2.1 and 2.2 focus on remote or outdoor sensing, while Subsections 2.3–2.5 cover close-range or indoor sensing. All the research works reviewed in this section were published in the past 10 years and are summarized in Table 1.

Table 1:

Summary of the current state-of-the-art spectral imaging and spectral LIDAR sensing systems for modern applications.

| Application | Sensing mechanism | Sensor | Wavelength | Reference/year |
|---|---|---|---|---|
| Environment monitoring (forest) | Multispectral LIDAR | Aquarius (532 nm), Gemini (1064 nm), Orion C (1550 nm), Titan (532, 1064 and 1550 nm) | 532, 1064 and 1550 nm | [10]/2016 |
| Environment monitoring (urban area classification) | Multispectral LIDAR | Optech Titan | Channel 1 = 1550 nm; Channel 2 = 1064 nm; Channel 3 = 532 nm | [11]/2017, [12]/2018 |
| Environment monitoring (precision agriculture) | Reconstruction of 3D model from multispectral and RGB images | Multispectral: Parrot Sequoia (1280 × 960); RGB: Sony Alpha 7RIII (48 megapixels) | Multispectral: green (530–570 nm), red (640–680 nm), red-edge (730–740 nm), near infrared (NIR) (770–810 nm) | [35]/2020 |
| Environment monitoring (earthquake vulnerability estimation) | Multispectral imaging | Landsat Operational Land Imager (OLI) and Landsat Thermal Infra-Red Scanner (TIRS) | Visible, NIR, SWIR and MIR | [36]/2019 |
| Environment monitoring (aquatic ecosystem) | Hyperspectral LIDAR | Scheimpflug LIDAR with 2D array charge-coupled device (CCD) detector | 430–700 nm | [13]/2016 |
| Autonomous driving (asphalt road, gravel road, highway, parking lot prediction) | Multispectral LIDAR | RIEGL VUX-1HA (1550 nm LIDAR); RIEGL miniVUX-1UAV (905 nm LIDAR) | 905 and 1550 nm | [18]/2019 |
| Autonomous driving (object detection in traffic scenes, e.g., bike, car) | Multispectral imaging | RGB, NIR, MIR and far infrared (FIR) cameras | Visible, NIR, MIR, FIR | [19]/2017 |
| Autonomous driving (object detection, drivable region detection, depth estimation) | Multispectral imaging and single-wavelength LIDAR | RGB/thermal camera; RGB stereo; LIDAR | Visible, long-wavelength infrared (LWIR) | [20]/2018 |
| Biomedical imaging (brain tumor delineation) | Multispectral optoacoustic tomography imaging | Multispectral optoacoustic tomography scanner inVision 256-TF, iThera Medical GmbH | NIR: 700–900 nm | [37]/2018 |
| Biomedical imaging (Alzheimer's disease visualization) | Multispectral optoacoustic tomography imaging | Multispectral optoacoustic tomography scanner inVision 512-echo system, iThera Medical GmbH | NIR: 660–1300 nm | [38]/2019 |
| Biomedical imaging (liver-tumor inspection) | Multispectral fluorescence imaging | Visible and NIR-I: Camware 4, PCO AG; NIR-II: LightField 6, Teledyne Princeton Instruments | Visible; NIR-I: 700–900 nm; NIR-II: 1000–1700 nm | [21]/2020 |
| Biometric identification (skin detection and facial recognition) | Multispectral imaging | Light-emitting diode (LED) light source; SWIR camera with InGaAs sensor | SWIR (935, 1060, 1300 and 1550 nm) | [23]/2016 |
| Biometric identification (facial recognition) | Multispectral imaging | Two quartz tungsten halogen lamps as light source; complementary metal-oxide-semiconductor (CMOS) camera | Visible to NIR (530, 590, 650, 710, 770, 830, 890, 950 and 1000 nm) | [24]/2016 |
| Biometric identification (iris recognition) | Multispectral imaging | LED array, Sony CCD camera | NIR (700, 780 and 850 nm) | [39]/2016 |
| Biometric identification (fingerprint recognition) | Multispectral imaging | LED light source, InGaAs sensor | SWIR (1200, 1300, 1450 and 1550 nm) | [40]/2018 |
| Biometric identification (palm recognition) | Multispectral imaging | LED light source, CCD camera | Visible (470, 525 and 660 nm) and NIR (880 nm) | [41]/2016 |
| Archaeology and art conservation (painting analysis) | Hyperspectral imaging | SPECIM hyperspectral (HS-XX-V10E) CCD camera | 400–1000 nm | [26]/2016 |
| Archaeology and art conservation (Islamic paper characterization) | Hyperspectral imaging | GILDEN Photonics hyperspectral imaging scanner | 1000–2500 nm | [42]/2017 |
| Archaeology and art conservation | Hyperspectral imaging | SPECIM IQ hyperspectral camera | 400–1000 nm | [43]/2018 |

2.1 Environment monitoring

Environment monitoring is the first application area that adopted spectral imaging solutions [3]. Over the past decade, with the advancement and wide deployment of LIDAR systems, multispectral LIDAR technology has been implemented for environment monitoring as well. For example, in the study by Hopkinson et al. [10], an airborne LIDAR system (Teledyne Optech) is implemented for the characterization and classification of a forest environment. In addition to the conventional 1064 nm single-wavelength LIDAR, the 1550 and 532 nm wavelengths are also used for multispectral LIDAR sensing. Such a sensing system improves land surface classification and vertical foliage partitioning. Furthermore, multispectral LIDAR has also been used for urban area classification, as reported in the studies by Morsy et al. and Huo et al. [11], [12]. In these reports, commercially available multispectral LIDAR sensors from Teledyne Optech and RIEGL Laser Measurement Systems, covering visible (532 nm) to short wavelength infrared (SWIR) (1550 nm) wavelengths, are employed to generate multispectral LIDAR data, and different approaches are applied to classify areas (e.g., grass, roads, trees and buildings) within the urban area. In the study by Morsy et al. [11], normalized difference vegetation index (NDVI) computation is conducted for point-based classification of the multispectral LIDAR data. In Figure 2(a), the left and right panels show the 2D and 3D views of the classified LIDAR points, respectively. The figures are based on NDVI computation using the recorded intensities at the 532 and 1064 nm wavelengths, which gives an overall accuracy of 92.7%.
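
The point-based classification step can be sketched as follows; the NDVI formulation contrasting the 1064 nm (NIR) and 532 nm (green) return intensities is the common definition, while the intensity values and the decision threshold below are illustrative placeholders rather than numbers from the cited study.

```python
import numpy as np

def ndvi(i_1064: np.ndarray, i_532: np.ndarray) -> np.ndarray:
    """NDVI from per-point LIDAR return intensities.

    The common formulation contrasts the NIR (1064 nm) and green (532 nm)
    channels; vegetation reflects strongly in the NIR, giving a high NDVI.
    """
    return (i_1064 - i_532) / (i_1064 + i_532 + 1e-12)  # guard against 0/0

# Hypothetical recorded intensities for four LIDAR points (not measured data).
i_1064 = np.array([0.80, 0.35, 0.60, 0.10])
i_532 = np.array([0.20, 0.30, 0.55, 0.09])

# Simple threshold-based point labeling; the 0.3 threshold is illustrative.
labels = np.where(ndvi(i_1064, i_532) > 0.3, "vegetation", "non-vegetation")
print(labels)  # ['vegetation' 'non-vegetation' 'non-vegetation' 'non-vegetation']
```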

Figure 2: Airborne multispectral environment sensing and monitoring.

(a) 2D and 3D multispectral images for urban area classification. These images are based on normalized difference vegetation index (NDVI) computation using the recorded intensities at the 532 and 1064 nm wavelengths. (b) RGB and multispectral images of an olive orchard and the imaging process flow for olive tree analysis. (a) and (b) are adapted with permission from the studies by Morsy et al. [11] and Jurado et al. [35], respectively. Both are licensed under a Creative Commons Attribution license.

As an alternative to the multispectral LIDAR approach, the study by Jurado et al. [35] uses a more cost-effective method, photogrammetry, to construct 3D images of olive trees. A high-resolution camera is mounted on an unmanned aerial vehicle (UAV) to take multispectral images, which are then reconstructed into 3D images. The multispectral images and RGB point clouds are fused to study an olive orchard. The methodology is illustrated in the scheme shown in Figure 2(b). It starts with the 3D reconstruction of both RGB and multispectral images (first step). Following that, reflectance maps are generated from the multispectral images (second step). These reflectance maps are used to enrich the 3D reconstructed images after an alignment process (third and fourth steps). After that, each olive tree is segmented for morphological information extraction and temporal analysis. In addition to the airborne sensors mentioned above, spaceborne sensors have also recently been implemented for multispectral sensing. In the study by Torres et al. [36], sensors mounted on a satellite capture multispectral images covering the visible to mid-infrared (MIR) wavelength range for earthquake vulnerability estimation.

It is also worth mentioning that multispectral images of the environment can be used for military and mineral mapping purposes. In military applications, the spectral imaging system provides information on the 3D land cover of the battlefield [44], and the spectral information facilitates the detection of targets under various degrees of camouflage [45]. In mineral mapping, the spectral information enables the identification of various mineral materials from airborne hyperspectral images [5], [6], [7].

2.2 Autonomous driving

Currently, LIDAR systems are widely used for autonomous driving. Most commercial LIDAR systems for autonomous driving operate at a single wavelength, which may be less reliable than multispectral systems, since the environment can sometimes absorb strongly at that single working wavelength. Moreover, many machine learning methods give more accurate predictions when the input data are consistent, without large variations [18]. Multispectral LIDAR, which measures the reflection from the object surface at several wavelengths including the IR, does not vary strongly with ambient conditions such as illumination. In addition, multispectral sensing systems can also provide material information through the spectral fingerprints of different materials. A typical multispectral LIDAR road image is shown in Figure 3(a), captured at the 1550 and 905 nm wavelengths using the RIEGL VUX-1HA and RIEGL miniVUX-1UAV line-scanning LIDARs, respectively [18]. Within the image, the two-lane road can be clearly seen, with details including road markers, road shoulders and trees around the road.

Figure 3: Multispectral LIDAR system applied for autonomous driving.

(a) A point cloud captured by the 1550 and 905 nm line-scanning LIDAR systems RIEGL VUX-1HA and RIEGL miniVUX-1UAV, respectively. (b) Schematic of the LIDAR measurement setup, showing a multispectral LIDAR system, visible camera and GPS mounted on top of the vehicle for data collection. (c) Road area prediction examples in comparison with the ground truth. (a)–(c) are adapted from the study by Taher [18] with permission. Copyright Josef Taher, Finnish Geospatial Research Institute FGI.

The schematic of the multispectral imaging setup is shown in Figure 3(b). The multispectral LIDAR system, together with an imaging system, has been mounted on top of a vehicle, and multispectral road data have been collected for road area semantic segmentation. Different road areas, including asphalt road, gravel road, highway and parking lot, can be correctly predicted. The predicted areas are overlaid on top of the multispectral LIDAR images and compared with the ground truth, as shown in Figure 3(c).

Additionally, a thermal camera is used as a secondary vision sensor in the study by Choi et al. [20], bringing the advantage of capturing road images regardless of the daylight illumination conditions. The integration of a 3D LIDAR (Velodyne HDL-32E) and a GPS/IMU (global positioning system/inertial measurement unit) onto the same sensor system enables the capture of depth and location information, respectively. Furthermore, in the study by Takumi et al. [19], multispectral images covering the visible, near infrared (NIR), MIR and far infrared (FIR) wavelength ranges are collected. These images are used for object detection in traffic scenes, including bikes, cars and pedestrians. It is found that images in different spectral ranges are suitable for detecting different classes of objects; hence, the advantage of multispectral imaging for diversified object detection is demonstrated in this work.

2.3 Biomedical spectral imaging and sensing

As mentioned in Section 1, a spectral imaging system captures an image stack at different wavelengths and obtains the spectral information, including reflectance and transmittance, for each pixel of the image from the data cube. Such information can be used to monitor changes in biosamples that cannot be detected using traditional gray-scale or RGB imaging techniques [2]. The principle is based on the spectral signatures of different biosamples, which originate from the interaction between the multiwavelength electromagnetic waves and the biomolecules. To obtain the spectral information from biosamples, four main scanning approaches are used: whiskbroom (spatial scan on both axes), pushbroom (spatial scan on one axis), staring (spectral scan) and snapshot (no scan). The study by Li et al. [2] presents a good summary and comparison of the advantages and disadvantages of these scanning approaches.
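
The pushbroom approach can be sketched as follows; the scene array and its dimensions are hypothetical, and a real instrument reads each line from a dispersive 2D detector rather than from a stored array.

```python
import numpy as np

def pushbroom_scan(scene: np.ndarray) -> np.ndarray:
    """Assemble a hyperspectral cube one spatial line at a time.

    A pushbroom imager disperses a single spatial line onto a 2D detector
    (one axis spatial, one axis spectral) and scans along the remaining
    spatial axis. A whiskbroom system would instead scan point by point,
    a staring system would scan through wavelengths, and a snapshot system
    would capture the whole cube at once.
    """
    lines = []
    for i in range(scene.shape[0]):      # one detector readout per scan step
        lines.append(scene[i, :, :])     # (samples, bands) line image
    return np.stack(lines, axis=0)       # (lines, samples, bands) data cube

scene = np.random.rand(100, 640, 120)    # hypothetical scene, 120 bands
cube = pushbroom_scan(scene)
assert cube.shape == (100, 640, 120)
```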

In medical applications, spectral imaging is mainly used for disease diagnosis and surgical guidance [22]. For disease diagnosis, multispectral optoacoustic tomography (MSOT) is an emerging technology enabled by the development of NIR high-speed tunable lasers [46]. It provides in-depth high-resolution imaging and spectral information of tissue molecules. A recent example, reported in the study by Neuschmelting et al. [37], uses MSOT for brain tumor delineation; the stability of the nanostar contrast agent is also established from the MSOT spectra in the NIR wavelength regime. Besides brain tumors, MSOT has also recently been applied to visualize Alzheimer's disease in the mouse brain [38] and pathophysiological progression [47]. Furthermore, a hyperspectral endoscopy system has been developed [48] that enables image distortion compensation for flexible endoscopy in clinical use; high spatial and spectral resolutions have been achieved under freehand motion.

For surgical guidance, a recent example is reported in the study by Hu et al. [21]. In this work, multispectral fluorescence imaging in the visible, NIR-I (700–900 nm) and NIR-II (1000–1700 nm) wavelength ranges is used to monitor in-human liver-tumor surgery. The work shows that the NIR-II wavelength regime provides tumor detection with higher sensitivity, signal distinction ratio and detection rate compared with the traditional NIR-I wavelength regime. Also, with the development of artificial intelligence, machine learning has recently been applied in hyperspectral imaging systems for precise tumor detection during surgical operations [49], [50]. For example, in the study by Fabelo et al. [49], a classification method for a hyperspectral imaging system has been developed to accurately determine the boundaries of a brain tumor during surgery, helping the surgeon avoid excising normal brain tissue or unintentionally leaving residual tumor behind.

2.4 Biometric sensor systems

Biometric sensors have drawn increased attention due to their wide applications, ranging from homeland security to consumer electronics. Multispectral biometric systems enable the capture of biometric data under different illumination levels, with antispoofing functionality and resistance to weather and environmental changes. These biometric data are typically taken from the face, finger, palm or iris, followed by pattern recognition on 2D images captured in different spectral bands.

Within a multispectral facial recognition system, the wavelength range plays a significant role. Figure 4(a) shows the remission spectra of different skin types and spoofing mask materials from the visible to the NIR wavelength regime. Human skin has relatively low remission at wavelengths beyond the visible range [23]; hence, this wavelength range can be used to distinguish skin from other materials for presentation attack detection. Also, different skin colors have very similar remission across the 900–1600 nm wavelength range, so facial recognition operating in this range is not affected by skin tone. Based on the remission (or reflection) data mentioned earlier, the study by Steiner et al. [23] applies multispectral imaging in the SWIR for face recognition and skin authentication. Images at the wavelengths of 935, 1060, 1300 and 1550 nm are used simultaneously for antispoofing. The schematic of the imaging system is illustrated in Figure 4(b). Light-emitting diodes (LEDs) of different wavelengths are distributed on the illumination panel around the sensor camera. These LEDs are programmed by a microcontroller to switch on/off in chronological order, with only one wavelength on at any given time. An SWIR camera (with an InGaAs sensor) placed in the center of the LED array collects the light reflected from the face. The images captured by the camera are transmitted to a personal computer and processed to compose a multispectral image stack; the processing steps include nonlinear correction, motion compensation, distortion correction and normalization.
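
A schematic capture loop for such a time-multiplexed LED system might look like the sketch below; the `led_panel` and `camera` objects and their methods are hypothetical stand-ins for the microcontroller-driven hardware, not an API from the cited work.

```python
WAVELENGTHS_NM = (935, 1060, 1300, 1550)  # SWIR LED bands used in the system

def capture_multispectral_stack(led_panel, camera):
    """Grab one frame per wavelength, with exactly one LED band on at a time."""
    stack = {}
    for wl in WAVELENGTHS_NM:
        led_panel.switch_on(wl)          # hypothetical LED driver call
        stack[wl] = camera.grab_frame()  # hypothetical InGaAs camera call
        led_panel.switch_off(wl)
    # The captured frames would then go through the processing steps named
    # above (nonlinear correction, motion compensation, distortion correction
    # and normalization) before being composed into the image stack.
    return stack
```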

Figure 4: Multispectral imaging applied for facial recognition, showing the advantage of antispoofing.

(a) Remission spectra of different skin types and spoofing mask materials from the visible to the NIR wavelength range. (b) Schematic of the multispectral facial recognition system setup, including LED arrays at four different wavelengths (935, 1060, 1300 and 1550 nm) as the light source (right panel) and a short wavelength infrared (SWIR) camera for image capture. (c) Facial images of different skin types captured at visible (first row) and SWIR (second row) wavelengths. SWIR images are insensitive to different skin tones. (d) Facial images with a printed mask at visible (top) and SWIR (bottom) wavelengths. The mask material can be clearly distinguished in the SWIR image. (a)–(d) are adapted with permission from the study by Steiner et al. [23]. Licensed under a Creative Commons Attribution license.

A subset of the images taken for facial recognition is illustrated in Figure 4(c) and (d). In Figure 4(c), the first and second rows show facial images taken at visible and SWIR wavelengths, respectively. Comparing the two rows, the SWIR images are insensitive to different skin tones, owing to the similar remission spectra in the SWIR wavelength region shown in Figure 4(a). Furthermore, Figure 4(d) shows a human face wearing a 3D-printed mask, acting as a presentation attack. The SWIR image clearly distinguishes the human skin from the printed mask, illustrating the antispoofing capability of the SWIR wavelength regime.

Furthermore, in the study by Vetrekar et al. [24], a low-cost multispectral facial recognition system has been demonstrated. The system consists of a filter wheel with nine different bands covering 530–1000 nm, mounted in front of a complementary metal-oxide-semiconductor (CMOS) camera. The multispectral images taken by the CMOS camera are then fused using image fusion techniques: wavelet decomposition, averaging and inverse wavelet transform. The multispectral nature of the system reduces illumination effects compared with a single-band system. It is also worth noting that the COVID-19 outbreak will further boost the need for facial recognition technology, as its contactless detection scheme effectively addresses hygiene and infection-related issues.
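
A minimal sketch of this decompose-average-reconstruct fusion scheme, written with the PyWavelets package, is shown below; the wavelet family, decomposition level and plain coefficient averaging are simplifying assumptions and not necessarily the exact settings of the cited work.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fuse(bands, wavelet="db2", level=2):
    """Fuse co-registered band images: decompose, average coefficients, invert.

    Practical pipelines often apply different fusion rules per subband
    (e.g., max-abs for details) rather than a plain mean everywhere.
    """
    decomps = [pywt.wavedec2(np.asarray(b, float), wavelet, level=level)
               for b in bands]
    fused = [np.mean([d[0] for d in decomps], axis=0)]        # approximation
    for k in range(1, level + 1):                             # detail subbands
        fused.append(tuple(np.mean([d[k][j] for d in decomps], axis=0)
                           for j in range(3)))
    return pywt.waverec2(fused, wavelet)

bands = [np.random.rand(128, 128) for _ in range(9)]          # nine placeholder bands
fused_image = wavelet_fuse(bands)                             # (128, 128) result
```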

Besides facial recognition, the fingerprint sensor is one of the most widely deployed sensors for biometric identification. Different physical mechanisms have been implemented to capture fingerprint information, including optical imaging, capacitive imaging and ultrasonic sensing [51], [52], [53], [54]. However, most systems have detection issues under various circumstances, such as wet/dry fingers, poor contact and susceptibility to spoofing. A multispectral fingerprint system addresses these issues effectively: it captures images at different optical wavelengths and collects data on both the surface and the subsurface, since different wavelengths have different penetration depths in the finger skin. The subsurface information can also reveal whether the fingerprint comes from a real finger or a fake one carrying only 2D information. The working principle of multispectral fingerprint imaging is presented in the study by Rowe et al. [55], together with a commercial multispectral fingerprint sensing product (J110 MSI) based on this principle. The commercial product has four illumination sources (LEDs at 430, 530 and 630 nm, plus white light) and an embedded processor for data processing.

Furthermore, due to COVID-19, as mentioned earlier in the facial recognition part, touchless fingerprint imaging systems are highly attractive, since they help prevent the spread of disease through surfaces shared by multiple users, such as lift buttons. In the study by Hussein et al. [40], a novel touchless fingerprint capture device has been introduced, using multispectral SWIR imaging and laser speckle contrast imaging for sensing and presentation attack detection.

For iris recognition, a multispectral imaging system has been introduced in the study by Zhang et al. [39]. It extends the traditional 850 nm wavelength for iris recognition to shorter wavelengths, at which pigments can be used as another source of iris texture. The schematic of the system is illustrated in the left panel of Figure 5(a); it contains a capture unit, an illumination unit, an interaction unit and a control unit. The data collection process is illustrated in the photograph in the right panel of Figure 5(a). The system captures the multispectral images in 2–3 s. The captured multispectral images at 700, 780 and 850 nm, shown in Figure 5(b), are fused to form the final measurement image containing all the pigment information obtained at the three wavelengths.

Figure 5: Multispectral iris recognition system and data fusion process.

(a) Schematic drawing of the multispectral iris capture device (left) and an optical image showing the data collection process (right). (b) Fusing process of multispectral iris images at 700, 780 and 850 nm, including all the pigment information within the iris. (a) and (b) are adapted with permission from Springer Nature: Multispectral Biometrics, by Zhang et al. [39]. Copyright 2016.

The palmprint is also a unique biometric characteristic that can be applied to authentication systems. Zhang et al. [41] proposed an online multispectral palmprint system for real-time authentication. Since different bands contain different texture information, combining the bands enables detection with a reduced error rate and antispoofing functionality.

2.5 Archaeology and art conservation

Spectral imaging has been used as a novel and noninvasive method for archaeology and art conservation since the 1990s [25]. Besides 2D spatial information, it obtains the spectral information of an object such as an antique, art painting or manuscript, and hence reveals historical and hidden information about the object. A comprehensive review of multispectral and hyperspectral imaging systems applied to archaeology and art conservation was reported in 2012 [25]. Here we review the most recent work of the past decade.

Recently, studies using a compact hyperspectral camera from SPECIM for the imaging of artwork have been conducted [43], [56]. In the study by Picollo et al. [43], the new commercial hyperspectral camera working from 400 to 1000 nm has been used to inspect artworks. The camera, with a size of 207 × 91 × 126 mm3, has been used to analyze an indoor painting (a 19th century canvas painting), an outdoor painting (the Sant' Antonino cloister at the Museum of San Marco, Florence) and a manuscript (a 15th century Florentine illuminated book in Florence), proving the capability of the hyperspectral camera to operate effectively under different environmental conditions. Pigment identification has been achieved through the spectral angle mapper procedure embedded in the camera software. Furthermore, in the study by Daniel et al. [26], a hyperspectral camera working in the same wavelength range has been used to analyze paintings by Goya in a museum in Zaragoza, Spain; restored zones are revealed in the infrared hyperspectral images, and pigment identification has also been demonstrated in the work.
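
The spectral angle mapper itself is a standard algorithm: each pixel spectrum is treated as a vector, and a pigment match is scored by its angle to a reference spectrum. A minimal sketch, with made-up spectra standing in for a real pigment library, is:

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Spectral angle mapper (SAM): angle between a pixel spectrum and a
    reference spectrum; smaller angles indicate a closer pigment match."""
    cos = np.dot(pixel, reference) / (
        np.linalg.norm(pixel) * np.linalg.norm(reference)
    )
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Made-up 400-1000 nm reflectance spectra (61 samples); a real workflow would
# use measured pigment reference spectra instead.
rng = np.random.default_rng(1)
pixel = rng.random(61)
pigment_library = {"pigment A": rng.random(61), "pigment B": rng.random(61)}
best = min(pigment_library, key=lambda k: spectral_angle(pixel, pigment_library[k]))
print(best)
```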

Moving beyond 1000 nm, in the study by Mahgoub et al. [42], a pushbroom hyperspectral imaging system (GILDEN Photonics) working in the 1000–2500 nm range has been applied to investigate Islamic paper. A calibration model has been built for quantitative analysis: the starch within the Islamic paper has been identified, and the cellulose degree of polymerization has been quantified, providing information on the conservation condition of the paper. Also, in the study by Cucci et al. [57], NIR hyperspectral images of a painting by Zanobi Machiavelli have been obtained from 1000 to 1700 nm. The NIR image reveals restored areas that are not observable in the visible-wavelength image, and the reflectance spectra show that gypsum was used as the ground layer for the painting (preparatory drawing).

3 Nanophotonics-based spectral imaging and LIDAR sensing systems

Although there are many modern applications of the state-of-the-art spectral imaging and spectral LIDAR systems, as described in the previous section, most of these systems are still bulky, heavy and power-consuming. Hence, there is an enormous demand for compact and low-cost sensing systems. Nanophotonics technology [58], which is based on light–matter interaction at nanoscale dimensions, provides an ideal solution. Numerous compact optic and photonic functional devices have been demonstrated using CMOS-compatible fabrication processes [59], [60], [61], [62], [63], [64], [65]. In the past decade, research on nanophotonics-based spectral imaging and LIDAR systems has also accelerated [28], [29], [66], [67], [68], [69], and various compact devices have been developed for proof-of-concept demonstrations in this field. In this section, we review these research works on nanophotonics-based spectral imaging and LIDAR sensing systems. Subsection 3.1 focuses on the spectral imaging systems demonstrated in the past decade. Subsection 3.2 focuses on the most recent nanophotonics-based LIDAR systems using an integrated frequency comb or supercontinuum as the light source. The reviewed works are categorized by material, structure, sensing mechanism and working wavelength, as listed in Table 2.

Table 2:

Summary of nanophotonics-based spectral imaging and LIDAR sensing systems.

| Material and structure | Sensing mechanism | Application | Wavelength | Reference/year |
|---|---|---|---|---|
| Elliptical amorphous silicon (a-Si) nanobars on fused silica substrate | Hyperspectral imaging | Immunoglobulin G (IgG) biomolecule detection and sensing with high sensitivity | 760–766 nm | [67]/2019 |
| TiO2-based metasurface | Multispectral imaging | Chiral beetle multispectral imaging using a single metalens to resolve the chirality | 480, 530 and 620 nm | [70]/2016 |
| Periodic silver nanowires | Multispectral imaging | Tunable color filter with polarization-dependent transmission for color imaging | 400–700 nm | [28]/2017 |
| a-Si nanoposts with rectangular cross section on silica substrate | Hyperspectral imaging | Hyperspectral imaging with compact size and light weight | 750–850 nm | [68]/2019 |
| Periodic silicon pillars with photonic crystal structure | Hyperspectral imaging | CMOS-compatible, low-cost and compact hyperspectral imaging system | 610–690 nm | [29]/2019 |
| Bayer color filter array | Multispectral imaging | Multispectral imaging for early-stage pressure ulcer detection | 540, 577, 650 and 970 nm | [71]/2011 |
| SiN-AlN-Ag multilayer stack forming a Bayer color filter array | Multispectral imaging | Color imaging using metal-dielectric filters patterned in a Bayer array on a CMOS image sensor (CIS) | 400–700 nm | [72]/2011 |
| Microscale plate-like SiN structure | Multispectral imaging | Color imaging using near-field deflection-based color splitting with minimal signal loss | 400–700 nm | [73]/2013 |
| Si nanowire | Multispectral imaging | All-silicon multispectral imaging system in the visible and NIR wavelength ranges | 400–1000 nm | [27]/2013 |
| Periodic circular holes in a gold (Au) layer | Multispectral imaging | Adaptive multispectral imaging using a plasmonic spectral filter array working in the LWIR | 8–14 μm | [74]/2016 |
| Nanohole arrays in an Au film | Multispectral imaging | Multispectral imaging of methylene blue (transmission imaging) and a leaf (reflection imaging) | 662–832 nm | [75]/2013 |
| Periodic circular holes in an Al thin film | Multispectral imaging | Multispectral imaging using a plasmonic spectral filter array integrated with a CMOS-based imager | 400–700 nm | [76]/2013 |
| Elliptical and circular hole arrays in an Al thin film | Multispectral imaging | Low-photon multispectral imaging | RGB: 680, 580 and 500 nm | [33]/2020 |
| Pixelated Si-based metasurface with zigzag array structure | Imaging-based spectroscopy | Biosensing for protein A/G | 5.5–7.1 μm | [77]/2018 |
| Silica wedge disk resonator | Dual-comb-based time-of-flight (ToF) LIDAR | Distance measurement with high accuracy | 1520–1580 nm | [78]/2018 |
| Si3N4 microring resonator pumped by erbium-doped fiber amplifier (EDFA) | Dual-comb-based LIDAR | Distance measurement at high speed and accuracy | 1420–1700 nm | [34]/2018 |
| Si3N4 microring resonator pumped by EDFA | Frequency-comb-based FMCW LIDAR | Distance and velocity measurement along a line | 1420–1818 nm | [69]/2020 |
| Terahertz quantum cascade lasers | Dual-comb-based hyperspectral imaging | Bio-imaging | 3.30, 3.35 and 3.45 THz | [79]/2019 |

3.1 Spectral imaging systems

3.1.1 Metasurface-based lens and reflectors

Flat optics, or metasurfaces [80], [81], which can be formed by a single layer of subwavelength-scale nanostructures, have drawn a lot of research interest in the field of nanophotonics. They work based on the scattering of light by the nanostructures. These nanostructures, also called nanoantennas, can be patterned to achieve a designed spectral response or phase profile, thereby enabling various functional devices such as lenses [82], [83], spectral filters [84], [85], wave plates [86], [87], beam deflectors [88], [89], [90] and point cloud generators [91]. For a metalens, when the phase of the light scattered from the nanoantennas follows the hyperboloidal profile below, the scattered light focuses at one point [81], [82]:

(1) $\varphi = \dfrac{2\pi}{\lambda}\left(\sqrt{x^{2}+y^{2}+f^{2}} - f\right)$

where λ is the wavelength in free space and f is the focal length of the metalens. For reflectors, the angle of the reflected light follows the generalized Snell's law of reflection [80]:

(2) $\sin(\theta_{r}) - \sin(\theta_{i}) = \dfrac{\lambda}{2\pi n_{i}}\dfrac{d\varphi}{dx}$

where θr and θi are the reflection and incidence angles, respectively, ni is the refractive index of the medium, and dφ/dx is the gradient of the phase discontinuity along the reflection interface. This phase discontinuity can be engineered to achieve a designed reflection angle of the optical beam. The integration of metasurface devices with active layers enables active tuning and control of the optics [92], [93], [94], [95], [96]. Furthermore, the metasurface can also be engineered to achieve a desired dispersion [97], and metasurface-based achromatic optical devices have been demonstrated [98], [99], [100], [101], [102], [103] that can be applied to multispectral imaging. In the study by Khorasaninejad et al. [70], a single metasurface-based lens (metalens) has been used to replace a sophisticated ensemble of optical components to achieve simultaneous imaging in two opposite circular polarization states. The schematic of the setup is shown in Figure 6(a), illustrating the imaging principle: light with different circular polarizations from the object is focused by the multispectral metalens at different physical locations for imaging. Multispectral images of the chiral beetle Chrysina gloriosa, which is known for reflecting predominantly left-circularly polarized light, are illustrated in Figure 6(b). These images are obtained using red, green and blue LEDs together with a band-pass filter at each wavelength. The compact multispectral imaging system reported in the study by Khorasaninejad et al. [70] should be able to obtain helicity and spectral information from other biosamples as well. It is also worth noting that, besides spectral information, the study illustrates that extra information about the sensed object can be obtained through the polarization of light; hence, polarization provides one more degree of freedom in imaging in addition to the spectral information. Full-Stokes polarization imaging with a compact metasurface-enabled optical system has later been demonstrated [104], [105], in which additional information, including the mechanical stress of the sensed object and the texture of reflecting surfaces, is also revealed. A comprehensive review of recent advances in metasurface-based polarization detection has been published in the study by Intaravanne and Chen [106].
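
To make Eqs. (1) and (2) concrete, the sketch below evaluates the hyperboloidal target phase over a hypothetical metalens aperture and the deflection angle produced by a constant reflection-phase gradient; all design values (wavelength, focal length, aperture size, ramp period) are illustrative.

```python
import numpy as np

def metalens_phase(x, y, wavelength, focal_length):
    """Target phase of Eq. (1), wrapped to [0, 2*pi).

    Nanoantennas imparting this hyperboloidal phase profile focus a normally
    incident plane wave at a distance f behind the metasurface.
    """
    phi = (2.0 * np.pi / wavelength) * (
        np.sqrt(x**2 + y**2 + focal_length**2) - focal_length
    )
    return np.mod(phi, 2.0 * np.pi)

# Illustrative design values (micrometers): 100-um-diameter lens, f = 100 um.
lam, f = 0.532, 100.0
coords = np.linspace(-50.0, 50.0, 201)
X, Y = np.meshgrid(coords, coords)
phase_map = metalens_phase(X, Y, lam, f)   # target phase at each antenna site

# Eq. (2): a constant phase gradient along x deflects a reflected beam. For a
# linear phase ramp of 2*pi per 10 um at normal incidence (theta_i = 0):
dphi_dx = 2.0 * np.pi / 10.0
n_i = 1.0
theta_r = np.degrees(np.arcsin(lam / (2.0 * np.pi * n_i) * dphi_dx))
print(f"deflection angle ~ {theta_r:.2f} degrees")  # ~3.05 degrees
```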

Figure 6: Metasurface-based lens and reflectors for spectral imaging.

(a) Top panel: schematic illustration of the multispectral metalens imaging principle: light from the object with different circular polarizations is focused at different locations by the multispectral chiral lens. (b) Beetle (Chrysina gloriosa) images formed using red, green and blue LED illumination together with a band-pass filter at each wavelength. (a) and (b) are adapted with permission from the study by Khorasaninejad et al. [70]. Direct link: https://pubs.acs.org/doi/10.1021/acs.nanolett.6b01897. Further permissions related to the material excerpted should be directed to the ACS. (c) Left panel: schematic of the hyperspectral imaging system: light from the sample enters the system through the aperture at the top, is reflected between the metasurfaces and gold mirrors, and exits through the transmissive metasurface at the bottom. On the detector array, light with different incident angles is focused along the horizontal direction, and light with different colors is focused along the vertical direction. Right panel: simplified schematic of the system imaging the object. The inset shows the object, a Caltech logo with mixed colors, whose wavelength increases from bottom (750 nm) to top (850 nm). (d) Left panel: measured intensity profiles captured by the photodetector (PD) across cut A and cut B by the metasurface hyperspectral imager (M-HSI), benchmarked against the intensity profiles obtained using a tunable laser (TL). Right panel: measured intensities at two wavelengths (770 and 810 nm) by the M-HSI, benchmarked against those obtained using the TL. (c) and (d) are adapted with permission from the study by Faraji-Dana et al. [68]. Direct link: https://pubs.acs.org/doi/full/10.1021/acsphotonics.9b00744. Further permissions related to the material excerpted should be directed to the ACS.

Also, enabled by the dispersion control capability of metasurfaces, the study by Faraji-Dana et al. [68] demonstrates a line-scanned hyperspectral imager based on a single-layer metasurface patterned by a single-step lithography process on a glass substrate, with the schematic shown in the left panel of Figure 6(c). The imaging system is based on a compact folded metasurface platform [107]. Light from the object enters the system through an aperture at the top, reflects between the metasurfaces and gold mirrors, and finally exits through the transmissive metasurface at the bottom to form the image. The system is designed to disperse different wavelengths along the vertical direction, while light with different incident angles along the horizontal direction is focused horizontally on the detector array. The Caltech logo has been used as a proof-of-concept imaging target, with the simplified setup schematic shown in the right panel of Figure 6(c); the inset shows the colored Caltech logo with wavelength increasing from bottom (750 nm) to top (850 nm). The imaging results are illustrated in Figure 6(d). The left panel shows the intensity profiles obtained by the metasurface hyperspectral imager (M-HSI) along cut A and cut B, benchmarked against those obtained with a tunable laser (TL), with a good match. The intensities at the two wavelengths 770 and 810 nm obtained by the M-HSI are shown in the right panel of Figure 6(d) and also match the TL results well.

3.1.2 Spectral filters integrated with photodetector array or CMOS camera

Besides the metasurface-based lenses and reflectors used in the earlier works, spectral filters made from flat optics can also be placed on top of a photodetector (PD) or a CMOS image sensor (CIS) for spectral imaging. A recent work integrating a spectral filter with a PD array is reported in the study by Shah et al. [33]. In this work, plasmonic metasurface-based color filters with elliptical and circular nanoholes are defined in a thin aluminum (Al) layer. To achieve different resonance wavelengths in the visible range, the dimensions of these subwavelength-scale nanoholes are varied, as shown in the scanning electron microscopy (SEM) images of the nanostructures in Figure 7(a); the inset shows a micrograph of the color filter. The filter array is patterned by single-step electron beam lithography and then integrated with a 64 × 64 single photon avalanche photodetector (SPAD) array through a flip-chip bonding process. Hence, the imaging system is capable of counting at the single-photon level. Optical images of the system are shown in Figure 7(b). Each filter covers one of the 64 × 64 SPAD pixels, with red, green and blue colors randomly distributed in approximately equal proportions (one-third each). The active imaging system utilizes a supercontinuum tunable laser source, as shown schematically in Figure 7(c). As a proof-of-concept demonstration, the left and right panels of Figure 7(d) show, respectively, the color image of a sample target taken by a conventional camera and the reconstructed image obtained from the multispectral imaging system. The sensing system demonstrated in this work can find applications in LIDAR-based 3D imaging.
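
Reconstructing a color image from such a randomly distributed filter mosaic amounts to filling in, for each color, the pixels covered by the other filters. The sketch below does this with simple nearest-neighbor interpolation over a simulated 64 × 64 photon-count frame; it is a stand-in for, not a reproduction of, the reconstruction algorithm used in the cited work.

```python
import numpy as np
from scipy.interpolate import griddata

def reconstruct_channel(counts: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill one color channel of a randomly distributed filter mosaic.

    `counts` holds photon counts from the 64 x 64 SPAD array; `mask` marks
    the pixels covered by this color's filter. Missing pixels are filled by
    nearest-neighbor interpolation.
    """
    known = np.argwhere(mask)                               # coords with data
    grid = np.argwhere(np.ones_like(mask, dtype=bool))      # all pixel coords
    filled = griddata(known, counts[mask], grid, method="nearest")
    return filled.reshape(counts.shape)

rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(64, 64)).astype(float)   # low-photon regime
color_id = rng.integers(0, 3, size=(64, 64))           # ~1/3 of pixels per color
red_channel = reconstruct_channel(counts, color_id == 0)
```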

Figure 7: Metasurface-based plasmonic color filters integrated with a single photon avalanche photodetector (SPAD) array for spectral imaging.

(a) Scanning electron microscopy (SEM) images of the fabricated color filters, with insets illustrating micrographs of the blue-, green- and red-colored filters. (b) Optical image of the color filters integrated with the SPAD array. (c) Schematic of the imaging system, including the supercontinuum tunable laser as the light source. (d) Left panel: sample target used for multispectral imaging. Right panel: reconstructed multispectral image of the sample target. (a)–(d) are adapted with permission from the study by Shah et al. [33]. Licensed under a Creative Commons Attribution license.

A recent demonstration of the integration of a dielectric metasurface-based spectral filter with a CIS is reported in the study by Yesilkoy et al. [67]. In this work, a biomolecule sensor has been demonstrated based on hyperspectral images taken through a high-quality-factor dielectric metasurface integrated on a CIS. The sensing mechanism is illustrated in Figure 8(a). A tunable narrow-band laser source, formed by a tunable filter coupled to a supercontinuum light source, illuminates the metasurface with immunoglobulin G (IgG) solutions. The spectral information of each pixel is obtained from the hyperspectral data cube, as shown in the bottom right panel of Figure 8(a). The resonance wavelength shift of the metasurface induced by the IgG molecules is obtained by comparing this spectral information with a reference measured without the IgG molecules. A higher IgG concentration produces a larger refractive index change and hence a larger resonance wavelength shift of the metasurface; in this way, the IgG concentration can be determined. Figure 8(b) shows the schematic of the bioassay for the IgG biomolecule binding/immobilization process. The mean resonance wavelength shifts for different IgG concentrations are plotted in Figure 8(c).
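
Conceptually, the readout reduces to estimating a per-pixel resonance wavelength from the hyperspectral cube and subtracting a reference map, as in the sketch below; a simple peak-picking estimator is used for brevity (fitting the resonance lineshape would be more robust), and all array contents are random placeholders.

```python
import numpy as np

def resonance_map(cube: np.ndarray, wavelengths: np.ndarray) -> np.ndarray:
    """Per-pixel resonance wavelength from a hyperspectral data cube,
    estimated here as the wavelength of peak response at each pixel."""
    return wavelengths[np.argmax(cube, axis=2)]

# Hypothetical cubes over the 760-766 nm band used in the cited work.
wavelengths = np.linspace(760.0, 766.0, 61)                  # 0.1 nm steps
reference = np.random.rand(100, 100, wavelengths.size)       # no analyte
sample = np.random.rand(100, 100, wavelengths.size)          # with IgG bound

# Biosensing signal: resonance shift relative to the reference map; larger
# shifts indicate a larger refractive index change, i.e., more bound IgG.
shift_nm = resonance_map(sample, wavelengths) - resonance_map(reference, wavelengths)
print(shift_nm.mean())
```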

Figure 8: Dielectric metasurface-based hyperspectral imaging for ultrasensitive biomolecule detection.

(a) Schematic of the dielectric metasurface-integrated CIS for hyperspectral imaging-based biosensing. A narrow-band tunable laser source is used for illumination. The CMOS camera captures an image at each wavelength, and the data form a hyperspectral data cube. For each pixel, the resonance wavelength can be obtained from the processed spectral information. Biosensing is achieved by comparing the resonance map of the metasurface with biomolecules against the reference resonance map without the biosample. (b) Schematic showing the immobilization process of the biomolecules for sensing. (c) Mean resonance shift with respect to the average number of IgG molecules. (a)–(c) are adapted with permission from Springer Nature, Nature Photonics [67]: Ultrasensitive hyperspectral imaging and biodetection enabled by dielectric metasurfaces, Yesilkoy et al. Copyright 2019.

The CIS mentioned above is widely applied owing to its compact size, low cost, low power consumption and ease of integration with other CMOS-based functional devices. Rather than using the CIS and spectral filter as separate components, nanophotonics-based spectral filters can be integrated with a CCD or CIS, either by attaching them onto the image sensor [27], [29], [73], [76] or by patterning them directly on the image sensor [72], [108], [109], [110]. The compact integrated system is a suitable platform for spectral imaging. In the study by Park and Crozier [27], a compact multispectral imaging system has been demonstrated using color filters formed by vertical silicon nanowires, whose color spectrum is set by the nanowire diameter. The nanostructures are patterned by a single-step electron beam lithography process. The nanowires, embedded in polydimethylsiloxane (PDMS), are attached to a Si-based image sensor, with the schematic and optical image shown in Figure 9(a) and (b), respectively; a zoomed-in image of the fabricated nanowire filter array is included at the bottom of Figure 9(b). The imaging system has five channels in the visible range and three channels in the infrared (IR) wavelength range. In Figure 9(c), the left and right panels show the image of a Macbeth color chart obtained using a conventional camera and using three visible-range channels of this multispectral imaging system, respectively; the colors show a good match. Furthermore, the advantage of the IR channels is demonstrated using the experimental setup shown in Figure 9(d). A donut-shaped object is placed behind a black screen (glass painted with black ink), which is opaque at visible wavelengths but transparent in the IR range. The images obtained by the system in the visible and IR wavelength ranges are shown in the middle and right panels of Figure 9(e), respectively; the donut-shaped object can be observed in the IR image. For comparison, the image taken by a conventional camera in the visible wavelength range is shown in the left panel of Figure 9(e), where the cross-shaped object in front of the screen is clearly visible, while the donut-shaped object behind the screen is hardly seen.

Figure 9: Si nanowire-based spectral filter integrated with a CCD image sensor for multispectral imaging in the visible and infrared (IR) wavelength ranges. (a) Schematic of the multispectral imaging system, with inset showing the Si nanowire structure used as the spectral filter. (b) Optical image of the spectral filter mounted on the CCD image sensor, with a zoomed-in image of the filter area in the bottom panel. The inset of the bottom panel shows a magnified image of the filter array. (c) Image of a Macbeth color chart taken by a conventional color camera (left panel) in comparison with the image taken by the nanowire-based multispectral imaging system (right panel). (d) Schematic of the imaging setup using white light and an IR LED as light sources to demonstrate the advantage of multispectral imaging. (e) Images taken by a conventional camera (left), the nanowire-based imaging system in the visible wavelength range (middle), and the nanowire-based imaging system in the IR wavelength range (right). The donut-shaped object behind the black-ink-painted glass is invisible or hard to observe in the visible-wavelength image, but can be observed in the IR-wavelength image. (a)–(e) are adapted with permission from the study by Park and Crozier [27]. Licensed under a Creative Commons Attribution license.

Besides the abovementioned nanostructure-based device integration with image sensors, a plasmonic color filter array designed for CIS has been demonstrated in the study by Yokogawa et al. [111], with potential application in spectral imaging. The color filters are formed by hole arrays in a 150-nm-thick Al film and work in the visible wavelength range. Based on the same material platform, in the study by Burgos et al. [76], a plasmonic color filter array has been integrated with a CIS to demonstrate plasmonic-based full-color imaging. The schematic of the CIS with RGB plasmonic filters on top is shown in the left panel of Figure 10(a); the SEM image of the filters, the optical image of the filter array on a quartz substrate, and the integrated CIS are included in the right panel. The Al nanostructure is patterned using single-step electron beam lithography followed by a lift-off process on a quartz substrate, and the fabricated structure is then integrated with the CIS through a contact process. The reconstructed image of a 24-patch Macbeth color chart obtained from the integrated system is shown in the right panel of Figure 10(b), showing a good match with the image taken by a conventional CMOS camera in the left panel.
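
In such filter-integrated cameras, reconstructing a faithful color image typically involves a linear calibration step: raw readings through the three plasmonic filters are mapped to target color values by a matrix fitted on a chart with known colors. The numpy sketch below shows that generic least-squares fit on synthetic data; it illustrates the calibration idea only and is not the actual processing pipeline of [76]:

```python
import numpy as np

rng = np.random.default_rng(0)

# 24 Macbeth patches: raw camera readings through the plasmonic RGB filters
raw = rng.uniform(0.05, 0.95, size=(24, 3))

# Known reference colors of the chart (generated here from a made-up matrix)
true_map = np.array([[ 1.4, -0.3, -0.1],
                     [-0.2,  1.5, -0.3],
                     [-0.1, -0.4,  1.5]])
reference = raw @ true_map

# Fit a 3x3 color-correction matrix in the least-squares sense
ccm, *_ = np.linalg.lstsq(raw, reference, rcond=None)

corrected = raw @ ccm              # apply to every pixel of a captured image
print(np.allclose(ccm, true_map))  # True on this noiseless toy data
```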

Figure 10: Metallic and dielectric nanophotonic spectral filters integrated with CMOS sensors for spectral imaging. (a) Schematic of the hole array-based RGB spectral filter integrated with a CIS, with insets showing the SEM image of the filter array, optical images of the filter array patterned on a quartz substrate, and an optical image of the integrated system. (b) Image of a Macbeth color chart taken by a conventional CMOS camera (left panel) and the plasmonic-based CMOS camera (right panel) for comparison. (a)–(b) are adapted with permission from the study by Burgos et al. [76]. Copyright © 2013 American Chemical Society. (c) Schematic of the microspectrometer consisting of a photonic-crystal array integrated with a CMOS sensor array. (d) Optical spectrum of a narrow-band light source centered at 581–584 nm measured by the integrated spectrometer. The measured results (blue circles) are benchmarked against the ground-truth data (red solid line), showing a good match. (e) Left panel: hyperspectral imaging setup using the photonic-crystal-based spectral imaging system. The target on the screen is a superposition of “5” and “9” encoded using different wavelengths. Right panel: the captured images at different wavelengths, where “5” and “9” can be distinguished at wavelengths of 610 and 670 nm, respectively. (c)–(e) are adapted with permission from the study by Wang et al. [29]. Licensed under a Creative Commons Attribution license.

Also, using the same contacting/assembly approach, photonic crystals have been implemented as spectral filters for spectral imaging. In the study by Wang et al. [29], a compact on-chip spectrometer with hyperspectral imaging functionality has been demonstrated, with the schematic shown in Figure 10(c). The photonic crystal array is fabricated and then attached on top of the CMOS sensor array. The photonic crystal dimensions of each slab are varied to achieve different resonance frequencies, so that the spectrum of the light source can be reconstructed computationally, as illustrated in the sketch after this paragraph. Figure 10(d) shows a reconstructed optical spectrum (blue circles) that matches well with the reference ground-truth spectrum (red solid line). The hyperspectral functionality has also been demonstrated, as shown in Figure 10(e). Two numbers, “5” and “9”, are illuminated by light sources at 610 and 670 nm, respectively, and the target on the screen is a superposition of the two numbers encoded at these wavelengths. The hyperspectral image stack shown in the right panel of Figure 10(e) is able to distinguish the two numbers at their respective wavelengths, which is not possible with a conventional RGB camera. As the authors of this work note, the pixel count of this hyperspectral imager is limited by the photonic crystal area that can be patterned by electron beam lithography; this limitation can be overcome by using high-throughput photolithography [112], [113]. In the study by Stewart et al. [114], ultraviolet (UV) photolithography has been used to pattern large-area silver (Ag) nanocubes on a gold (Au) layer. The nanostructure dimensions have been varied to form a plasmonic-based filter array, with resonance wavelengths covering 580 nm up to 1125 nm. The device has also been patterned to form a color image of a rainbow lorikeet. Also, in the studies by Xu et al. [115], [116], band-pass filters have been patterned using 12-inch 193-nm deep-UV immersion lithography followed by an inductively coupled plasma etching process. The filters work in both the SWIR and visible wavelength ranges. The fabrication process is CMOS-compatible, which enables these filters to be integrated with the CIS through either monolithic integration or a packaging process.
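
The spectral reconstruction step in such filter-array spectrometers is commonly posed as a linear inverse problem: the vector of detector readings equals the calibrated filter-response matrix times the unknown spectrum, and regularization is needed because the matrix is ill-conditioned. The sketch below uses Tikhonov regularization as one standard choice; [29] describes its own reconstruction procedure, which this toy version does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(1)
n_filters, n_bins = 36, 100

# A[i, j]: transmission of filter i in wavelength bin j. Random stand-in;
# in practice these rows are the calibrated photonic-crystal responses.
A = rng.uniform(0.0, 1.0, size=(n_filters, n_bins))

x_true = np.exp(-0.5 * ((np.arange(n_bins) - 55) / 6.0) ** 2)  # narrow line
y = A @ x_true + 0.01 * rng.standard_normal(n_filters)         # noisy readings

# Tikhonov-regularized least squares: minimize ||Ax - y||^2 + alpha ||x||^2
alpha = 1e-2
x_hat = np.linalg.solve(A.T @ A + alpha * np.eye(n_bins), A.T @ y)

rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```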

Similar to the contacting/assembly approach, the study by Qi et al. [71] reports a hand-held multispectral imaging device for biomedical applications. A TiO2-Al2O3-Ag multilayer structure has been fabricated on a glass substrate to form a Bayer color filter array. The mosaic filter is then laminated on a CIS [117], [118] to form a multispectral imaging device for early-stage pressure ulcer detection. Furthermore, different from the contacting approach mentioned earlier, in the study by Chen et al. [108], an Al-based plasmonic color filter array is directly patterned on the top surface of a CIS, using electron beam lithography followed by a dry etching process. This direct patterning approach reduces the complexity of system integration and packaging.

3.1.3 Novel standalone spectral filters for spectral imaging

In the previous subsections, nanostructured spectral filters integrated with PD arrays or CISs were covered. Meanwhile, novel standalone spectral filters suitable for spectral imaging have also been demonstrated recently [28], [75], [119], [120], [121], [122], [123]. For example, in the study by Najiminaini et al. [75], nanohole arrays in a gold film have been used as spectral filters for a snapshot multispectral imager. The schematic of the filter is shown in Figure 11(a). In the schematic, the mosaic filter is composed of a 4 × 4 block array, with each block consisting of a 3 × 3 nanohole array. The resonance wavelength of the nanostructure is adjusted through the period of the hole array, as estimated in the sketch after this paragraph. The resonance cavity is formed beneath each hole through a wet-etching process; these etched holes are shown in the SEM image in the inset of Figure 11(a). The multispectral images are taken using the setup shown in Figure 11(b) in both transmission and reflection modes, with a mosaic filter comprising a 20 × 20 block array implemented in front of a CMOS camera mounted with an objective lens. The multispectral transmission images of methylene blue (MB+) at different concentrations are illustrated in Figure 11(c). In the 662–714 nm spectral band, the image intensity decreases as the MB+ concentration increases. The low intensity in the 788–832 nm spectral band is due to a low-pass filter (<800 nm) within the imaging test system.
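
The period-to-color mapping used in this mosaic follows the standard first-order picture of extraordinary optical transmission: the (i, j) surface-plasmon resonance of a square nanohole lattice scales linearly with the period. The sketch below evaluates that textbook estimate with assumed permittivities; it neglects hole size, film thickness, and the etched cavity, so it only indicates the trend rather than the exact passbands of [75]:

```python
import numpy as np

EPS_METAL = -26.0  # assumed Re(epsilon) of gold near 800 nm
EPS_DIEL = 1.0     # air-side dielectric

def spp_peak_nm(period_nm, i=1, j=0):
    """First-order (i, j) surface-plasmon resonance of a square nanohole array."""
    return (period_nm / np.hypot(i, j)) * np.sqrt(
        EPS_METAL * EPS_DIEL / (EPS_METAL + EPS_DIEL))

for period in (600, 650, 700, 750):
    print(f"period {period} nm -> peak near {spp_peak_nm(period):.0f} nm")
```

With these assumed permittivities the peak redshifts by roughly 1 nm for every 1 nm of added period, which is the behavior the mosaic exploits.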

Figure 11: Multispectral imaging using a nanohole array. (a) Schematic of the mosaic filter formed by nanohole arrays (NHAs). The inset shows the SEM image of the resonance cavity formed by etching through the gold layer. Resonance wavelength variation is achieved by adjusting the NHA period. (b) Multispectral imaging setup working in both transmission and reflection modes. NHA-based mosaic filters with 20 × 20 blocks are implemented in front of the CMOS camera. (c) Raw images of the NHA and multispectral images of methylene blue (MB+) at different concentrations (10, 30, and 50 μM). (a)–(c) are adapted with permission from [75]. Licensed under a Creative Commons Attribution license.

Another example is reported in the study by Xu et al. [120]: a plasmonic-based color filter array working in transmission mode has been demonstrated using a metal-insulator-metal structure. The plasmonic nanoresonators are formed by a 100-nm-thick ZnSe layer sandwiched between two 40-nm-thick Al layers on an MgF2 substrate. Also, in the study by Duempelmann et al. [28], a compact plasmonic-based color filter formed by periodic silver nanowires has been designed and used for multispectral imaging. The filter is implemented in front of a camera, with the setup schematic shown in Figure 12(a). The spectral filter is polarization sensitive, and its transmission spectrum can be tuned by rotating the filter orientation/polarization, in contrast to the conventional spectral scanning approach. This approach offers lower fabrication cost and a more compact footprint. The spectra of several fruits are included in the bottom part of Figure 12(a); these spectra are used to reconstruct the multispectral image shown in the same figure, where the colors of the fruits are well reconstructed. The spectrum-recording capability of the system is verified by measuring and recording a laser light spectrum, as shown in the inset of Figure 12(a). Alternatively, colloidal quantum dots have been used as broadband color filters within a spectrometer working in the visible wavelength range [124]. The compact system is able to capture data for reconstruction of the optical spectrum.

Figure 12: Standalone spectral filters for spectral imaging systems. (a) Schematic of the setup for multispectral imaging using a tunable plasmonic-based color filter in front of an imaging camera. The filter property is tuned by the polarization angle. The multispectral image and the spectra recorded at selected areas are included in the bottom part. (a) is adapted with permission from the study by Duempelmann et al. [28]. Copyright © 2017, American Chemical Society. (b) Cross section of the vertically integrated tunable optical sensor. (c) Active tuning of the sensor spectral responsivity by increasing the reverse bias voltage. (b)–(c) are adapted with permission from the study by Jovanov et al. [125]. Copyright © 2018, American Chemical Society. (d) Protein absorption spectral measurement and spatial mapping. The absorption spectrum is obtained by taking the difference between the normalized reflectance spectra with and without protein A/G. Spectral integration enables the translation of the absorption signature into a 2D absorption map as a molecular barcode of the protein (bottom right panel). (d) is adapted from the study by Tittl et al. [77]. Reprinted with permission from AAAS.

It is also worth mentioning that an integrated multispectral sensor in which the PD itself works as a tunable filter has been demonstrated in the study by Jovanov et al. [125]. The spectral responsivity of the sensor can be actively tuned by varying the reverse bias on the vertical diode structure. The schematic of the sensor with a reflecting layer is shown in Figure 12(b). The measured spectral responsivity of the vertically integrated sensor with the interlayer under different reverse biases is shown in Figure 12(c). Also, in the study by Mauser et al. [126], a tunable PD has been demonstrated using subwavelength-scale thermoelectric nanoresonators for wavelength-selective absorption, with absorption wavelengths covering the visible to MIR range.

Going beyond the visible and NIR regimes, the MIR wavelength regime is of particular interest in scientific research since the fingerprints of most chemical molecules fall within this range. Plasmonic-based multispectral filters have been demonstrated using hole arrays in an Au layer [74], [127]. In the study by Jang et al. [74], an MIR multispectral imaging system has been demonstrated using a plasmonic color filter array, and the reconstruction of irradiance from a blackbody IR source and a metal ring object has been achieved. Also, in the study by Tittl et al. [77], a pixelated metasurface-based filter array has been used in an imaging system to detect molecular fingerprints in the MIR wavelength regime. The pixelated metasurface, formed by dielectric nanostructures, is designed to have resonance peaks located in the MIR. The biosample (protein A/G) attached to the metasurface, as shown in Figure 12(d), absorbs at its fingerprint wavelengths. By comparing the reflectance difference between the metasurface with and without the biosample, the protein absorption signature can be obtained, as illustrated in the bottom left panel of Figure 12(d). A spectral integration process then translates the absorption signature into a 2D absorption map, serving as a molecular barcode of the biosample, as illustrated in the bottom right panel of Figure 12(d). A possible configuration to integrate the metasurface spectral filter with a broadband IR detector array has also been provided in the study by Tittl et al. [77].
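
The "spectral integration" that produces the barcode can be summarized per pixel: subtract the reflectance of the coated metasurface from the bare reference across the scanned band, then integrate the difference down to a single scalar. A schematic numpy version on synthetic spectra is shown below; the actual processing in [77] is more involved:

```python
import numpy as np

wn = np.linspace(1500, 1800, 200)  # wavenumber axis in cm^-1 (amide band)
rng = np.random.default_rng(2)

r_ref = 0.9 * np.ones((32, 32, wn.size))  # reflectance map, no biosample

# Protein coverage imprints a dip whose depth tracks the surface density
coverage = rng.uniform(0.0, 1.0, size=(32, 32))[..., None]
dip = 0.2 * np.exp(-0.5 * ((wn - 1650) / 25.0) ** 2)
r_sample = r_ref - coverage * dip         # reflectance map with biosample

signature = r_ref - r_sample              # per-pixel absorption signature
# Integrate over the band (Riemann sum) -> one barcode value per pixel
barcode = signature.sum(axis=-1) * (wn[1] - wn[0])

print(barcode.shape)  # (32, 32): the 2D molecular barcode map
```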

3.2 Frequency comb/supercontinuum-based LIDAR sensors

There has been substantial recent research progress in ultrafast photonics, nonlinear optics and optical soliton generation [128], [129], [130], [131], [132], [133], [134], [135], [136], [137]. The optical frequency comb, which leverages nonlinear optical effects (e.g., the Kerr effect and self-phase modulation), has also drawn considerable attention. Frequency comb generation originates from the nonlinear interaction between a high-intensity optical field and the material, and the same platforms also support optical soliton generation [138], [139], [140], [141], [142]. The optical frequency comb is a powerful tool in precision measurement [143], spectroscopy [144], optical frequency synthesis [145], [146], [147], distance measurement [148], microwave photonics [141], [149] and many other applications [150], [151], [152]. The photonic-integrated optical frequency comb, which can provide a compact solution for a sensing system, has drawn considerable research interest [153], [154], [155], [156], [157]. Recently, optical frequency combs generated by compact chip-scale optical resonators have been used as light sources for LIDAR systems [34], [69], [78]. The microresonators, which generally have high optical Q values for nonlinear optical generation, are fabricated by state-of-the-art nanofabrication technology [158], and can also be integrated with other photonic components such as optical phased arrays to enable beam steering [159], [160], [161].

A frequency-modulated continuous wave (FMCW) LIDAR has been used to demonstrate the simultaneous measurement of multiple points along a line in the study by Riemensberger et al. [69]. The optical frequency comb, generated through four-wave mixing (FWM) in a high-Q silicon nitride (Si3N4) microring resonator, is used as the light source for the measurement. The frequency modulation of the output signal is achieved through pump modulation. The line is formed through the spatial spread of the frequency comb signal by a dispersive optical component. Parallel measurement of velocity and distance along the line has been demonstrated, with the concept schematic and working principle illustrated in the top and bottom panels of Figure 13(a), respectively. Within the concept schematic, the modulation signal from an arbitrary function generator (AFG) is imposed on the pump source through an electro-optic modulator (EOM). The modulated pump is coupled into the Si3N4 microring for frequency comb generation through FWM. The generated frequency comb lines are spread spatially by a dispersive optical component to achieve parallel sensing. The reflected signal, containing the distance and velocity information of the object, enters a circulator (CIRC) to beat with the original signal. The beat signal is then split into separate comb lines through a demultiplexer (DEMUX) and converted into electrical signals through PDs for further signal processing to obtain the distance and velocity at each point along the line. The modulated signal and the Doppler-shifted reflected signal are also plotted in Figure 13(a). The formulas for calculating the distance (D) and velocity (V) [69] are provided in Figure 13(a) and listed below:

$$D = \frac{cT}{4B} \cdot \frac{f_u + f_d}{2} \tag{3}$$

$$V = \frac{c}{2 f_c \cos\theta} \cdot \frac{f_u - f_d}{2} \tag{4}$$

where $T$ is the modulation period, $B$ is the modulation bandwidth, and $c$ is the speed of light. $f_u$ and $f_d$ represent the beat frequencies for the upward and downward laser scans, respectively; both take the Doppler shift into account. $f_c$ is the optical carrier frequency, and $\theta$ is the angle between the object's moving direction and the propagation direction of the reflected optical beam. Based on the ranging formulas above, a 3D image of the “EPFL” logo has been formed by sweeping the line using a scanning mirror, with the setup schematic shown in the left panel of Figure 13(b). The measurement target is formed by two pieces of paper placed a certain distance apart, with the front paper partially cut with the “EPFL” logo pattern. The 3D image (distance profile) of the “EPFL” logo is illustrated in the right panel of Figure 13(b) as a proof-of-concept demonstration.
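
Plugging representative numbers into Eqs. (3) and (4) makes the ranging arithmetic concrete. All values below are illustrative assumptions, not the parameters or results of [69]:

```python
import numpy as np

c = 3.0e8          # speed of light, m/s
T = 10e-6          # modulation period, s (assumed)
B = 1.0e9          # modulation bandwidth, Hz (assumed)
f_c = 193.4e12     # optical carrier frequency, Hz (~1550 nm)
theta = 0.0        # object moving along the beam axis

f_u, f_d = 6.8e6, 6.0e6  # up/down-scan beat frequencies, Hz (assumed)

D = (c * T / (4 * B)) * (f_u + f_d) / 2                # Eq. (3): distance
V = (c / (2 * f_c * np.cos(theta))) * (f_u - f_d) / 2  # Eq. (4): velocity

print(f"distance D = {D:.2f} m, velocity V = {V:.3f} m/s")
```

With these numbers the target sits at D = 4.80 m and moves at about 0.31 m/s along the beam; running the same arithmetic on each demultiplexed comb line yields one such (D, V) pair per point along the illuminated line.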

Figure 13: Optical frequency comb-based FMCW LIDAR for parallel sensing: principle, setup, and measurement results. (a) Top panel: conceptual schematic of the parallel FMCW LIDAR setup. The pump source is modulated by an electro-optic modulator (EOM). The modulation signal is provided by an arbitrary function generator (AFG). The modulated pump signal is injected into the Si3N4 microring for frequency comb generation. The inset shows the SEM image of the integrated Si3N4 microring structure. Frequency comb lines are spread spatially by a dispersive optical component for parallel sensing along a line. The reflected signal enters a circulator (CIRC) to beat with the original signal. The beat signal is split into separate comb lines through a demultiplexer (DEMUX) and converted into electrical signals through PDs. Further signal processing is carried out to obtain the distance and velocity information at each point along the line. Bottom panel: principle of FMCW LIDAR with each optical frequency comb line modulated for parallel sensing. (b) Left panel: experimental setup for parallel distance measurement. An external scanning mirror is used for vertical scanning of the frequency comb signal. The target is formed by two pieces of paper; the front paper has been partially cut with the “EPFL” logo. Right panel: 3D image plot based on the distance profile measured from the target. (a) and (b) are adapted with permission from Springer Nature, Nature [69]. Massively parallel coherent laser ranging using a soliton microcomb, by Riemensberger et al. Copyright 2020.

In addition, an alternative modulation mechanism, which makes use of the piezoelectric effect of AlN and acousto-optic coupling on a nanophotonic integrated system [162], [163], [164], has been implemented for optical signal modulation in the study by Liu et al. [139]. It provides an integrated modulation scheme for demonstrating an FMCW LIDAR system. Besides FMCW LIDAR, a time-of-flight (ToF)-based LIDAR system has been demonstrated to obtain 3D images from the distance information in [165]. In this system, a spatial dispersion component is placed after the multispectral light source to achieve fast spatial scanning (MHz rate) by tuning the frequency/wavelength of the source, which is a fiber-based supercontinuum source cascaded with an arrayed waveguide grating. A photonic-integrated supercontinuum light source [166], [167] can also be used in the system to replace the fiber-based version.

The LIDAR systems discussed above use either a single frequency comb or a supercontinuum source. Dual-comb LIDAR systems have also been demonstrated recently. In the study by Suh and Vahala [78], a dual-comb-based ToF LIDAR system has been used for distance measurement with 200 nm precision. The ToF measurement setup is illustrated in Figure 14(a). The pump, amplified by an erbium-doped fiber amplifier (EDFA), is split 50/50 to couple into the microdisk resonator in the clockwise (CW) and counterclockwise (CCW) directions. To reach the soliton state, an acousto-optic modulator (AOM) and a polarization controller (PC) are used in each arm to tune the pump and to ensure efficient pump coupling into the microdisk resonator, respectively. After the microdisk resonator, the residual pump is attenuated by a fiber Bragg grating (FBG). The optical and electrical spectra are monitored by an optical spectrum analyzer (OSA) and an electrical spectrum analyzer (ESA), respectively. The generated solitons are stabilized through a servo feedback loop. In this configuration, the dual combs, generated by the same microresonator and formed by the CW and CCW propagating solitons, are shown in the top and bottom panels of Figure 14(b), respectively. For the ToF LIDAR measurement, the CW soliton signal is split by a 50/50 splitter, as indicated by the orange and green dashed arrows in Figure 14(a). These two signals are combined with the CCW soliton signal (blue dashed arrow) to generate an interferogram. The distances measured over time using the CW and CCW frequency combs to probe the target are illustrated in the left and right panels of Figure 14(c), respectively. The distance difference between the two measurements is 16.02 μm, which can be used to determine the absolute range. Histograms of the range measurements are plotted with Gaussian fits.
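
The role of such an offset between the two measurements can be illustrated with a toy Vernier calculation: each soliton comb measures range only modulo its ambiguity distance c/(2 f_rep), and because the CW and CCW combs have slightly different repetition rates, comparing the two wrapped measurements singles out the correct ambiguity window. The sketch below shows the principle only, with assumed repetition rates; it is not the retrieval procedure of [78]:

```python
c = 3.0e8                                    # speed of light, m/s
f_rep1 = 9.36e9                              # CW comb repetition rate, Hz (assumed)
f_rep2 = 9.36e9 + 2.6e6                      # CCW comb, slightly detuned (assumed)
L1, L2 = c / (2 * f_rep1), c / (2 * f_rep2)  # ambiguity ranges, ~16 mm each

d_true = 0.14157                    # target distance, m (illustrative)
m1, m2 = d_true % L1, d_true % L2   # wrapped ranges each comb reports

# Try candidate ambiguity windows for comb 1; keep the one whose unwrapped
# distance is also consistent with comb 2's wrapped measurement
best_n = min(range(100), key=lambda n: abs((m1 + n * L1) % L2 - m2))
d_recovered = m1 + best_n * L1

print(f"recovered {d_recovered:.5f} m vs true {d_true:.5f} m")
```

The pairing stays unambiguous out to roughly L1·L2/(L1 − L2), far beyond the ~16 mm window of a single comb.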

Figure 14: Dual comb-based LIDAR sensing: measurement setup and results. (a) Experimental setup for dual-comb generation and LIDAR measurement. A pump laser source is amplified by an erbium-doped fiber amplifier (EDFA), followed by a 50/50 splitter to couple into two arms and pump the microdisk resonator in the clockwise (CW) and counterclockwise (CCW) directions. An acousto-optic modulator (AOM) and a polarization controller (PC) are used in each arm to tune the pump for soliton generation and ensure efficient coupling into the microdisk resonator, respectively. A fiber Bragg grating (FBG) is used to attenuate the residual pump. The optical signal is monitored in the spectral and time domains by an optical spectrum analyzer (OSA) and an oscilloscope, respectively. The electrical signal is monitored in the spectral domain by an electrical spectrum analyzer (ESA). A servo feedback loop is used to lock the pump to the microdisk resonator and hence stabilize the comb. The CW soliton signal is split into two by a 50/50 splitter for distance measurement, as indicated by the orange and green dashed arrows. These two signals are combined with the CCW soliton signal (blue dashed arrow) to generate an interferogram. (b) Optical spectra of the CW and CCW solitons. (c) Range data with respect to time using the CW and CCW solitons to probe the target. The difference of 16.02 μm gives the absolute range. Histograms of the range measurements are plotted with Gaussian fits, and the standard deviations (SD) are calculated. (a)–(c) are adapted from the study by Suh and Vahala [78]. Reprinted with permission from AAAS. (d) Left: schematic setup of the dual-comb-based ultrafast LIDAR measurement. DKS: dissipative Kerr soliton. Middle: optical image of the static bullet. Right: profile of the moving bullet measured using the dual DKS comb, benchmarked against the static profile measured using swept-source optical coherence tomography (OCT). (d) is adapted from the study by Trocha et al. [34]. Reprinted with permission from AAAS.

Also, in the study by Trocha et al. [34], a dual-comb LIDAR has been implemented for distance measurement with high accuracy and speed. The measurement accuracy reaches 188 nm, and the distance acquisition rate reaches 96.4 MHz, set by the difference in line spacing between the signal and local-oscillator combs. As a proof-of-concept demonstration, the system, using dissipative Kerr solitons (DKS) as the probe signal, has been applied to measure the profile of a bullet flying from an air gun at a speed of 150 m/s. The measurement setup, an optical image of the static bullet, and the measured bullet profile are shown in the left, middle and right panels of Figure 14(d), respectively. The profile of the flying bullet measured using the dual DKS comb is benchmarked against the static profile measured using a swept-source optical coherence tomography (OCT) system, as illustrated in the right panel of Figure 14(d). The slight discrepancy at the end of the bullet is attributed to the corrugation at the rear of the bullet, which can be seen in the middle panel of Figure 14(d).

Based on the studies discussed above, it can be observed that in nanophotonics-based frequency comb LIDAR, the multispectral property of the light source has been used either for parallel measurement of velocity and distance along a line with a single comb source [69], or for high-precision, high-speed ranging with dual comb sources [34], [78]. To the best of our knowledge, no reported work applies a nanophotonics-based frequency comb source in a LIDAR system to obtain the spectral information of the sensed object. Given that nanophotonics-based integrated frequency combs offer compact size and low power consumption [142], [168], and considering the multi-wavelength nature of the frequency comb, we anticipate that nanophotonics-based frequency combs will be applied in LIDAR systems for compact multispectral sensing in the near future. A side note worth mentioning is that a hyperspectral imaging system has been reported using a terahertz frequency comb generated by a chip-scale quantum cascade laser [79]. It demonstrates spectral imaging of chemicals including glucose, lactose and histidine captured at multiple frequencies, showing the potential for future chip-scale biomedical applications.

4 Summary and outlook

To sum up, the recent progress on spectral imaging and spectral LIDAR systems has been presented. The modern applications of state-of-the-art systems in the past decade have been reviewed from the system implementation perspective, with application areas categorized as environmental monitoring, autonomous driving, biomedical imaging, biometric identification, archaeology and art conservation. Furthermore, the recent research works on nanophotonics-based spectral imaging and LIDAR sensing systems have been summarized, and the progress toward compact systems has been reviewed from a system-level research and development perspective. The research works of the past decade are mainly based on subwavelength-scale nanostructures for spectral imaging and on chip-scale integrated optical frequency combs for LIDAR systems. The nanostructures are engineered to achieve achromatic functional devices or narrow-band spectral filters for spectral imaging, while high-Q microring resonators are used to generate optical frequency combs that allow LIDAR systems to sense with high precision at fast acquisition rates. This review thus covers spectral imaging and spectral LIDAR systems from the modern applications of state-of-the-art implementations to the development of compact nanophotonics-based sensing systems.

In the near future, research and development efforts can focus on the following three aspects to facilitate the miniaturization of multispectral/hyperspectral imaging and LIDAR systems. Firstly, with the demonstration of various metasurface-based functional devices, especially spectral filters [169], [170], [171], [172], [173], [174], [175], more system-level integration of these devices (e.g., monolithic integration with CIS) is expected, to achieve compact spectral imaging systems. The monolithic fabrication process can be developed based on mature CMOS fabrication technology. More functional optical components using the materials available on existing CMOS-compatible fabrication lines, including Si [51], [176], SiO2 [177], SiN [178] and AlN [179], [180], are expected to be demonstrated. More large-area metasurface-based optical components are expected to be realized by deep-UV photolithography for mass manufacturing [113]. Furthermore, metasurface-based devices in combination with microelectromechanical systems (MEMS) or flexible materials [83], [181], [182], [183] will enable compact imaging systems with active tuning capability.

Secondly, within the current nanophotonics-based LIDAR systems [34], [69], [78], the only chip-scale component is the optical frequency comb source. Hence, the rest of the optical components, including optical phased arrays [159] and photodetectors [184], are expected to be integrated with the light source [142], [168] to realize a fully integrated comb-based coherent LIDAR system with a high frame rate. The frequency comb-based optical signal synthesizer reported in the study by Spencer et al. [145] is a remarkable achievement in this direction. Furthermore, as mentioned earlier, a compact nanophotonics-based spectral LIDAR system that also obtains the spectral information of the object can be developed. Such a compact system should drive 3D spectral imaging and sensing applications from table-top toward portable devices for consumer electronics. Also, a LIDAR operation wavelength around 1.55 μm is preferred since it lies in the eye-safe wavelength region. This wavelength is advantageous for facial recognition since it is insensitive to human skin tones and can distinguish materials such as plastic and resin [23], which is vital for antispoofing purposes.

Last but not least, based on the CMOS fabrication and integration platform, more functional devices based on multiphysics coupling mechanisms can be developed and integrated with imaging and sensing systems. One example is the optical modulator or nonmagnetic isolator based on the acousto-optic coupling mechanism, leveraging the piezoelectric effect of AlN on nanophotonic integrated platforms [162], [163], [164]. AlN is a suitable material for such multiphysics coupling since it not only provides the piezoelectric effect, but also exhibits excellent optical properties, including a wide transparency window from UV to MIR [153] and significant second- and third-order nonlinear optical effects [185], [186], [187], [188]. Also, AlN can be deposited and patterned on Si substrates using CMOS-compatible fabrication processes, which enables low-cost mass production of the devices [189], [190]. Another example is the optical switch based on the opto-electro-mechanical effect reported in the study by Haffner et al. [191]. The demonstrated optical devices have the potential to enable large-scale reprogrammable optical systems.

List of abbreviations

ADAS: advanced driver-assistance systems
AFG: arbitrary function generator
AOM: acousto-optic modulator
CCD: charge-coupled device
CCW: counterclockwise
CIRC: circulator
CIS: CMOS image sensor
CMOS: complementary metal-oxide-semiconductor
CW: clockwise
DEMUX: demultiplexer
DKS: dissipative Kerr soliton
EDFA: erbium-doped fiber amplifier
EOM: electro-optic modulator
ESA: electrical spectrum analyzer
FBG: fiber Bragg grating
FIR: far infrared
FMCW: frequency-modulated continuous wave
FWM: four-wave mixing
GPS: global positioning system
IgG: immunoglobulin G
IMU: inertial measurement unit
IR: infrared
LED: light-emitting diode
LIDAR: light detection and ranging
LWIR: long-wavelength infrared
MB+: methylene blue
MEMS: microelectromechanical system
MIR: mid-infrared
MSOT: multispectral optoacoustic tomography
NHA: nanohole array
NIR: near-infrared
OCT: optical coherence tomography
OSA: optical spectrum analyzer
PC: polarization controller
PD: photodetector
PDMS: polydimethylsiloxane
Q: quality factor
RGB: red, green, blue
SD: standard deviation
SEM: scanning electron microscopy
SPAD: single-photon avalanche photodetector
SWIR: short-wavelength infrared
ToF: time of flight
UV: ultraviolet


Corresponding author: Nanxi Li, Institute of Microelectronics, A*STAR (Agency for Science, Technology and Research), 2 Fusionopolis Way, Singapore 138634, Singapore, E-mail:

Award Identifier / Grant number: A1789a0024, A19B3a0008

Acknowledgment

The authors thank Dr. Chen Liu and Dr. Elaine Cristina Schubert Barretto for the valuable discussions.

  1. Author statement: All the authors have accepted the responsibility for the content of this review and approved submission.

  2. Research funding: This work is supported by Agency for Science, Technology and Research, Singapore IAF-PP A19B3a0008 and IAF-PP A1789a0024.

  3. Conflict of interest statement: The authors declare no conflict of interest.

References

[1] G. P. Hancke, B. C. Silva, and G. P. Hancke Jr., “The role of advanced sensing in smart cities,” Sensors, vol. 13, no. 1, 2013, https://doi.org/10.3390/s130100393.

[2] Q. Li, X. He, Y. Wang, H. Liu, D. Xu, and F. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt., vol. 18, no. 10, pp. 1–29, 2013, https://doi.org/10.1117/1.jbo.18.3.030501.Search in Google Scholar PubMed PubMed Central

[3] A. F. H. Goetz, G. Vane, J. E. Solomon, and B. N. Rock, “Imaging spectrometry for earth remote sensing,” Science, vol. 228, no. 4704, p. 1147, 1985, https://doi.org/10.1126/science.228.4704.1147.Search in Google Scholar PubMed

[4] J. Transon, R. d’Andrimont, A. Maugnard, and P. Defourny, “Survey of hyperspectral earth observation applications from space in the sentinel-2 context,” Rem. Sens., vol. 10, no. 2, p. 157, 2018, https://doi.org/10.3390/rs10020157.Search in Google Scholar

[5] S. Weksler, O. Rozenstein, and E. Ben-Dor, “Mapping surface quartz content in sand dunes covered by biological soil crusts using airborne hyperspectral images in the longwave infrared region,” Minerals, vol. 8, no. 8, p. 318, 2018, https://doi.org/10.3390/min8080318.Search in Google Scholar

[6] M. K. Tripathi and H. Govil, “Evaluation of AVIRIS-NG hyperspectral images for mineral identification and mapping,” Heliyon, vol. 5, no. 11, p. e02931, 2019, https://doi.org/10.1016/j.heliyon.2019.e02931.Search in Google Scholar PubMed PubMed Central

[7] H. Eichstaedt, T. Tsedenbaljir, R. Kahnt, et al., “Quantitative estimation of clay minerals in airborne hyperspectral data using a calibration field,” J. Appl. Rem. Sens., vol. 14, no. 3, pp. 1–17, 2020, https://doi.org/10.1117/1.jrs.14.034524.Search in Google Scholar

[8] A. Picon, A. Bereciartua, J. Echazarra, O. Ghita, P. F. Whelan, and P. M. Iriondo, “Real-time hyperspectral processing for automatic nonferrous material sorting,” J. Electron. Imag., vol. 21, no. 1, pp. 1–10, 2012, https://doi.org/10.1117/1.jei.21.1.013018.Search in Google Scholar

[9] L. M. Kandpal, J. Tewari, N. Gopinathan, P. Boulas, and B.-K. Cho, “In-process control assay of pharmaceutical microtablets using hyperspectral imaging coupled with multivariate analysis,” Anal. Chem., vol. 88, no. 22, pp. 11055–11061, 2016, https://doi.org/10.1021/acs.analchem.6b02969.Search in Google Scholar PubMed

[10] C. Hopkinson, L. Chasmer, C. Gynan, C. Mahoney, and M. Sitar, “Multisensor and multispectral LiDAR characterization and classification of a forest environment,” Can. J. Rem. Sens., vol. 42, no. 5, pp. 501–520, 2016, https://doi.org/10.1080/07038992.2016.1196584.Search in Google Scholar

[11] S. Morsy, A. Shaker, and A. El-Rabbany, “Multispectral LiDAR data for land cover classification of urban areas,” Sensors, vol. 17, no. 5, p. 958, 2017.10.3390/s17050958Search in Google Scholar PubMed PubMed Central

[12] L.-Z. Huo, C. A. Silva, C. Klauberg, et al., “Supervised spatial classification of multispectral LiDAR data in urban areas,” PloS One, vol. 13, no. 10, p. e0206185, 2018, https://doi.org/10.1371/journal.pone.0206185.Search in Google Scholar PubMed PubMed Central

[13] G. Zhao, M. Ljungholm, E. Malmqvist, et al., “Inelastic hyperspectral lidar for profiling aquatic ecosystems,” Laser Photonics Rev., vol. 10, no. 5, pp. 807–813, 2016, https://doi.org/10.1002/lpor.201600093.Search in Google Scholar

[14] P. S. Roy, “Spectral reflectance characteristics of vegetation and their use in estimating productive potential,” Proc. Plant Sci., vol. 99, no. 1, pp. 59–81, 1989.10.1007/BF03053419Search in Google Scholar

[15] A. L. Pisello, G. Pignatta, V. L. Castaldo, and F. Cotana, “Experimental analysis of natural gravel covering as cool roofing and cool pavement,” Sustainability, vol. 6, no. 8, 2014, https://doi.org/10.3390/su6084706.Search in Google Scholar

[16] E. Puttonen, J. Suomalainen, T. Hakala, and J. Peltoniemi, “Measurement of reflectance properties of asphalt surfaces and their usability as reference targets for aerial photos,” IEEE Trans. Geosci. Rem. Sens., vol. 47, no. 7, pp. 2330–2339, 2009, https://doi.org/10.1109/tgrs.2008.2010132.Search in Google Scholar

[17] J. F. Bell, S. W. Squyres, R. E. Arvidson, et al., “Pancam multispectral imaging results from the opportunity rover at meridiani planum,” Science, vol. 306, no. 5702, p. 1703, 2004, https://doi.org/10.1126/science.1105245.Search in Google Scholar PubMed

[18] J. Taher, Deep Learning for Road Area Semantic Segmentation in Multispectral Lidar Data, Master Thesis, Aalto University, 2019.Search in Google Scholar

[19] K. Takumi, K. Watanabe, Q. Ha, A. Tejero-De-Pablos, Y. Ushiku, and T. Harada, “Multispectral object detection for autonomous vehicles,” Proc. Themat. Workshop ACM Multimed., vol. 2017, pp. 35–43, 2017.10.1145/3126686.3126727Search in Google Scholar

[20] Y. Choi, N. Kim, S. Hwang, et al., “KAIST multi-spectral day/night data set for autonomous and assisted driving,” IEEE Trans. Intell. Transport. Syst., vol. 19, no. 3, pp. 934–948, 2018, https://doi.org/10.1109/tits.2018.2791533.Search in Google Scholar

[21] Z. Hu, C. Fang, B. Li, et al., “First-in-human liver-tumour surgery guided by multispectral fluorescence imaging in the visible and near-infrared-I/II windows,” Nat. Biomed. Eng., vol. 4, no. 3, pp. 259–271, 2020, https://doi.org/10.1038/s41551-019-0494-0.Search in Google Scholar PubMed

[22] G. Lu and B. Fei, “Medical hyperspectral imaging: A review,” J. Biomed. Opt., vol. 19, no. 1, pp. 1–24, 2014, https://doi.org/10.1117/1.jbo.19.1.010901.Search in Google Scholar

[23] H. Steiner, S. Sporrer, A. Kolb, and N. Jung, “Design of an active multispectral SWIR camera system for skin detection and face verification,” J. Sens., vol. 2016, p. 9682453, 2015.10.1155/2016/9682453Search in Google Scholar

[24] N. T. Vetrekar, R. Raghavendra, and R. S. Gad, “Low-cost multi-spectral face imaging for robust face recognition,” in 2016 IEEE International Conference on Imaging Systems and Techniques (IST), Chania, Greece, IEEE, 2016, pp. 324–329.10.1109/IST.2016.7738245Search in Google Scholar

[25] H. Liang, “Advances in multispectral and hyperspectral imaging for archaeology and art conservation,” Appl. Phys. A, vol. 106, no. 2, pp. 309–323, 2012, https://doi.org/10.1007/s00339-011-6689-1.Search in Google Scholar

[26] F. Daniel, A. Mounier, J. Pérez-Arantegui, et al., “Hyperspectral imaging applied to the analysis of Goya paintings in the Museum of Zaragoza (Spain),” Microchem. J., vol. 126, pp. 113–120, 2016, https://doi.org/10.1016/j.microc.2015.11.044.Search in Google Scholar

[27] H. Park and K. B. Crozier, “Multispectral imaging with vertical silicon nanowires,” Sci. Rep., vol. 3, no. 1, p. 2460, 2013, https://doi.org/10.1038/srep02460.Search in Google Scholar PubMed PubMed Central

[28] L. Duempelmann, B. Gallinet, and L. Novotny, “Multispectral imaging with tunable plasmonic filters,” ACS Photonics, vol. 4, no. 2, pp. 236–241, 2017, https://doi.org/10.1021/acsphotonics.6b01003.Search in Google Scholar

[29] Z. Wang, S. Yi, A. Chen, et al., “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun., vol. 10, no. 1, p. 1020, 2019, https://doi.org/10.1038/s41467-019-08994-5.Search in Google Scholar PubMed PubMed Central

[30] N. A. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng., vol. 52, no. 9, pp. 1–23, 2013, https://doi.org/10.1117/1.oe.52.9.090901.Search in Google Scholar

[31] M. J. Khan, H. S. Khan, A. Yousaf, K. Khurshid, and A. Abbas, “Modern trends in hyperspectral image analysis: A review,” IEEE Access, vol. 6, pp. 14118–14129, 2018, https://doi.org/10.1109/access.2018.2812999.Search in Google Scholar

[32] H. Liu, B. Bruning, T. Garnett, and B. Berger, “Hyperspectral imaging and 3D technologies for plant phenotyping: from satellite to close-range sensing,” Comput. Electron. Agric., vol. 175, p. 105621, 2020, https://doi.org/10.1016/j.compag.2020.105621.Search in Google Scholar

[33] Y. D. Shah, P. W. R. Connolly, J. P. Grant, et al., “Ultralow-light-level color image reconstruction using high-efficiency plasmonic metasurface mosaic filters,” Optica, vol. 7, no. 6, pp. 632–639, 2020, https://doi.org/10.1364/optica.389905.Search in Google Scholar

[34] P. Trocha, M. Karpov, D. Ganin, et al., “Ultrafast optical ranging using microresonator soliton frequency combs,” Science, vol. 359, no. 6378, p. 887, 2018, https://doi.org/10.1126/science.aao3924.Search in Google Scholar PubMed

[35] J. M. Jurado, L. Ortega, J. J. Cubillas, and F. R. Feito, “Multispectral mapping on 3D models and multi-temporal monitoring for individual characterization of olive trees,” Rem. Sens., vol. 12, no. 7, 2020, https://doi.org/10.3390/rs12071106.Search in Google Scholar

[36] Y. Torres, J. J. Arranz, J. M. Gaspar-Escribano, et al., “Integration of LiDAR and multispectral images for rapid exposure and earthquake vulnerability estimation. Application in Lorca, Spain,” Int. J. Appl. Earth Obs., vol. 81, pp. 161–175, 2019, https://doi.org/10.1016/j.jag.2019.05.015.Search in Google Scholar

[37] V. Neuschmelting, S. Harmsen, N. Beziere, et al., “Dual-modality surface-enhanced resonance Raman scattering and multispectral optoacoustic tomography nanoparticle approach for brain tumor delineation,” Small, vol. 14, no. 23, p. 1800740, 2018, https://doi.org/10.1002/smll.201800740.Search in Google Scholar PubMed PubMed Central

[38] S.-J. Park, C. J. H. Ho, S. Arai, A. Samanta, M. Olivo, and Y.-T. Chang, “Visualizing Alzheimer’s disease mouse brain with multispectral optoacoustic tomography using a fluorescent probe, CDnir7,” Sci. Rep., vol. 9, no. 1, p. 12052, 2019, https://doi.org/10.1038/s41598-019-48329-4.Search in Google Scholar PubMed PubMed Central

[39] D. Zhang, Z. Guo, and Y. Gong, “Multispectral Iris acquisition system,” in Multispectral Biometrics: Systems and Applications, D. Zhang, Z. Guo, and Y. Gong, Eds., Springer International Publishing, 2016, pp. 39–62.10.1007/978-3-319-22485-5_3Search in Google Scholar

[40] M. E. Hussein, L. Spinoulas, F. Xiong, and W. Abd-Almageed, “Fingerprint presentation attack detection using a novel multi-spectral capture device and patch-based convolutional neural networks,” in 2018 IEEE International Workshop on Information Forensics and Security (WIFS), Hong Kong, IEEE, 2018, pp. 1–8.10.1109/WIFS.2018.8630773Search in Google Scholar

[41] D. Zhang, Z. Guo, and Y. Gong, “An online system of multispectral palmprint verification,” in Multispectral Biometrics: Systems and Applications, D. Zhang, Z. Guo, and Y. Gong, Eds., Springer International Publishing, 2016, pp. 117–137.10.1007/978-3-319-22485-5_6Search in Google Scholar

[42] H. Mahgoub, H. Chen, J. Gilchrist, T. Fearn, and M. Strlic, “Quantitative chemical near-infrared hyperspectral imaging of Islamic paper,” in ICOM-CC 18th Triennial Conference Preprints (International Council of Museums), Copenhagen, ICOM Committee for Conservation, 2017, p. 1606.Search in Google Scholar

[43] M. Picollo, A. Casini, C. Cucci, J. Jussila, M. Poggesi, and L. Stefani, “A new compact VNIR hyperspectral imaging system for non-invasive analysis in the FineArt and architecture fields,” Proc. e Rep., pp. 69–74, 2018, https://doi.org/10.36253/978-88-6453-707-8.16.Search in Google Scholar

[44] M. Shimoni, R. Haelterman, and C. Perneel, “Hypersectral imaging for military and security applications: combining myriad processing and sensing techniques,” IEEE Trans. Geosci. Rem. Sens., vol. 7, no. 2, pp. 101–117, 2019, https://doi.org/10.1109/mgrs.2019.2902525.Search in Google Scholar

[45] X. Briottet, Y. Boucher, A. Dimmeler, et al., “Military applications of hyperspectral imagery,” in Targets and Backgrounds XII: Characterization and Representation, vol. 6239, Orlando, Florida, United States, Proc. SPIE, 2006, p. 62390B.10.1117/12.672030Search in Google Scholar

[46] V. Gujrati, A. Mishra, and V. Ntziachristos, “Molecular imaging probes for multi-spectral optoacoustic tomography,” Chem. Commun., vol. 53, no. 34, pp. 4653–4672, 2017, https://doi.org/10.1039/c6cc09421j.Search in Google Scholar PubMed

[47] X. Ai, Z. Wang, H. Cheong, et al., “Multispectral optoacoustic imaging of dynamic redox correlation and pathophysiological progression utilizing upconversion nanoprobes,” Nat. Commun., vol. 10, no. 1, p. 1087, 2019, https://doi.org/10.1038/s41467-019-09001-7.Search in Google Scholar PubMed PubMed Central

[48] J. Yoon, J. Joseph, D. J. Waterhouse, et al., “A clinically translatable hyperspectral endoscopy (HySE) system for imaging the gastrointestinal tract,” Nat. Commun., vol. 10, no. 1, p. 1902, 2019, https://doi.org/10.1038/s41467-019-09484-4.Search in Google Scholar PubMed PubMed Central

[49] H. Fabelo, S. Ortega, D. Ravi, et al., “Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations,” PloS One, vol. 13, no. 3, p. e0193721, 2018, https://doi.org/10.1371/journal.pone.0193721.Search in Google Scholar PubMed PubMed Central

[50] H. Fabelo, M. Halicek, S. Ortega, et al., “Surgical aid visualization system for glioblastoma tumor identification based on deep learning and in-vivo hyperspectral images of human patients,” in Medical Imaging 2019: Image-Guided Procedures, Robotic Interventions, and Modeling, vol. 10951, San Diego, California, United States, Proc. SPIE, 2019, p. 1095110.10.1117/12.2512569Search in Google Scholar

[51] T. Hu, Q. Zhong, N. Li, et al., “CMOS-compatible a-Si metalenses on a 12-inch glass wafer for fingerprint imaging,” Nanophotonics, vol. 9, no. 4, 2020, https://doi.org/10.1515/nanoph-2019-0470.Search in Google Scholar

[52] B. A. Ganji and M. S. Nateri, “A high sensitive MEMS capacitive fingerprint sensor using slotted membrane,” Microsyst. Technol., vol. 19, no. 1, pp. 121–129, 2013, https://doi.org/10.1007/s00542-012-1647-1.Search in Google Scholar

[53] H. Hassan and H.-W. Kim, “CMOS capacitive fingerprint sensor based on differential sensing circuit with noise cancellation,” Sensors, vol. 18, no. 7, 2018, https://doi.org/10.3390/s18072200.Search in Google Scholar PubMed PubMed Central

[54] X. Jiang, Y. Lu, H.-Y. Tang, et al., “Monolithic ultrasound fingerprint sensor,” Microsyst. Nanoeng., vol. 3, no. 1, p. 17059, 2017, https://doi.org/10.1038/micronano.2017.59.Search in Google Scholar PubMed PubMed Central

[55] R. K. Rowe, K. A. Nixon, and P. W. Butler, “Multispectral fingerprint image acquisition,” in Advances in Biometrics, Springer, 2008, pp. 3–23.10.1007/978-1-84628-921-7_1Search in Google Scholar

[56] C. Cucci, A Casini, L. Stefani, M. Picollo, and J. Jussila, “Bridging research with innovative products: A compact hyperspectral camera for investigating artworks: A feasibility study,” in Optics for Arts, Architecture, and Archaeology VI, vol. 10331, Proc. SPIE, Munich, Germany, 2017, p. 1033106.10.1117/12.2270015Search in Google Scholar

[57] C. Cucci, J. K. Delaney, and M. Picollo, “Reflectance hyperspectral imaging for investigation of works of art: old master paintings and illuminated manuscripts,” Acc. Chem. Res., vol. 49, no. 10, pp. 2070–2079, 2016, https://doi.org/10.1021/acs.accounts.6b00048.

[58] A. F. Koenderink, A. Alù, and A. Polman, “Nanophotonics: shrinking light-based technology,” Science, vol. 348, no. 6234, p. 516, 2015, https://doi.org/10.1126/science.1261243.

[59] C. Sorace-Agaskar, P. T. Callahan, K. Shtyrkova, et al., “Integrated mode-locked lasers in a CMOS-compatible silicon photonic platform,” in CLEO: 2015, OSA Technical Digest (Online), San Jose, California, United States, Optical Society of America, 2015, p. SM2I.5, https://doi.org/10.1364/CLEO_SI.2015.SM2I.5.

[60] K. Shtyrkova, P. T. Callahan, N. Li, et al., “Integrated CMOS-compatible Q-switched mode-locked lasers at 1900 nm with an on-chip artificial saturable absorber,” Opt. Express, vol. 27, no. 3, pp. 3542–3556, 2019, https://doi.org/10.1364/oe.27.003542.

[61] N. Li, M. Xin, Z. Su, et al., “A silicon photonic data link with a monolithic erbium-doped laser,” Sci. Rep., vol. 10, no. 1, p. 1114, 2020, https://doi.org/10.1038/s41598-020-57928-5.

[62] N. Li, Purnawirman, E. S. Magden, G. Singh, et al., “Wavelength division multiplexed light source monolithically integrated on a silicon photonics platform,” Opt. Lett., vol. 42, no. 9, pp. 1772–1775, 2017, https://doi.org/10.1364/ol.42.001772.

[63] Z. Su, N. Li, H. C. Frankis, et al., “High-Q-factor Al2O3 micro-trench cavities integrated with silicon nitride waveguides on silicon,” Opt. Express, vol. 26, no. 9, pp. 11161–11170, 2018, https://doi.org/10.1364/oe.26.011161.

[64] E. S. Magden, N. Li, M. Raval, et al., “Transmissive silicon photonic dichroic filters with spectrally selective waveguides,” Nat. Commun., vol. 9, no. 1, p. 3009, 2018, https://doi.org/10.1038/s41467-018-05287-1.

[65] N. Li, Z. Su, Purnawirman, et al., “Athermal synchronization of laser source with WDM filter in a silicon photonics platform,” Appl. Phys. Lett., vol. 110, no. 21, p. 211105, 2017, https://doi.org/10.1063/1.4984022.

[66] Q. Chen, X. Hu, L. Wen, Y. Yu, and D. R. S. Cumming, “Nanophotonic image sensors,” Small, vol. 12, no. 36, pp. 4922–4935, 2016, https://doi.org/10.1002/smll.201600528.

[67] F. Yesilkoy, E. R. Arvelo, Y. Jahani, et al., “Ultrasensitive hyperspectral imaging and biodetection enabled by dielectric metasurfaces,” Nat. Photonics, vol. 13, no. 6, pp. 390–396, 2019, https://doi.org/10.1038/s41566-019-0394-6.

[68] M. Faraji-Dana, E. Arbabi, H. Kwon, et al., “Hyperspectral imager with folded metasurface optics,” ACS Photonics, vol. 6, no. 8, pp. 2161–2167, 2019, https://doi.org/10.1021/acsphotonics.9b00744.

[69] J. Riemensberger, A. Lukashchuk, M. Karpov, et al., “Massively parallel coherent laser ranging using a soliton microcomb,” Nature, vol. 581, no. 7807, pp. 164–170, 2020, https://doi.org/10.1038/s41586-020-2239-3.

[70] M. Khorasaninejad, W. T. Chen, A. Y. Zhu, et al., “Multispectral chiral imaging with a metalens,” Nano Lett., vol. 16, no. 7, pp. 4595–4600, 2016, https://doi.org/10.1021/acs.nanolett.6b01897.

[71] H. Qi, L. Kong, C. Wang, and L. Miao, “A hand-held mosaicked multispectral imaging device for early stage pressure ulcer detection,” J. Med. Syst., vol. 35, no. 5, pp. 895–904, 2011, https://doi.org/10.1007/s10916-010-9508-x.

[72] L. Frey, P. Parrein, J. Raby, et al., “Color filters including infrared cut-off integrated on CMOS image sensor,” Opt. Express, vol. 19, no. 14, pp. 13073–13080, 2011, https://doi.org/10.1364/oe.19.013073.

[73] S. Nishiwaki, T. Nakamura, M. Hiramoto, T. Fujii, and M. Suzuki, “Efficient colour splitters for high-pixel-density image sensors,” Nat. Photonics, vol. 7, no. 3, pp. 240–246, 2013, https://doi.org/10.1038/nphoton.2012.345.

[74] W.-Y. Jang, Z. Ku, J. Jeon, et al., “Experimental demonstration of adaptive infrared multispectral imaging using plasmonic filter array,” Sci. Rep., vol. 6, no. 1, p. 34876, 2016, https://doi.org/10.1038/srep34876.

[75] M. Najiminaini, F. Vasefi, B. Kaminska, and J. J. L. Carson, “Nanohole-array-based device for 2D snapshot multispectral imaging,” Sci. Rep., vol. 3, no. 1, p. 2589, 2013, https://doi.org/10.1038/srep02589.

[76] S. P. Burgos, S. Yokogawa, and H. A. Atwater, “Color imaging via nearest neighbor hole coupling in plasmonic color filters integrated onto a complementary metal-oxide semiconductor image sensor,” ACS Nano, vol. 7, no. 11, pp. 10038–10047, 2013, https://doi.org/10.1021/nn403991d.

[77] A. Tittl, A. Leitis, M. Liu, et al., “Imaging-based molecular barcoding with pixelated dielectric metasurfaces,” Science, vol. 360, no. 6393, p. 1105, 2018, https://doi.org/10.1126/science.aas9768.

[78] M.-G. Suh and K. J. Vahala, “Soliton microcomb range measurement,” Science, vol. 359, no. 6378, p. 884, 2018, https://doi.org/10.1126/science.aao1968.

[79] L. A. Sterczewski, J. Westberg, Y. Yang, et al., “Terahertz hyperspectral imaging with dual chip-scale combs,” Optica, vol. 6, no. 6, pp. 766–771, 2019, https://doi.org/10.1364/optica.6.000766.

[80] N. Yu, P. Genevet, M. A. Kats, et al., “Light propagation with phase discontinuities: generalized laws of reflection and refraction,” Science, vol. 334, no. 6054, p. 333, 2011, https://doi.org/10.1126/science.1210713.

[81] N. Yu and F. Capasso, “Flat optics with designer metasurfaces,” Nat. Mater., vol. 13, no. 2, pp. 139–150, 2014, https://doi.org/10.1038/nmat3839.

[82] F. Aieta, P. Genevet, M. A. Kats, et al., “Aberration-free ultrathin flat lenses and axicons at telecom wavelengths based on plasmonic metasurfaces,” Nano Lett., vol. 12, no. 9, pp. 4932–4936, 2012, https://doi.org/10.1021/nl302516v.

[83] S. Zhang, M.-H. Kim, F. Aieta, et al., “High efficiency near diffraction-limited mid-infrared flat lenses based on metasurface reflectarrays,” Opt. Express, vol. 24, no. 16, pp. 18024–18034, 2016, https://doi.org/10.1364/oe.24.018024.

[84] I. Koirala, S.-S. Lee, and D.-Y. Choi, “Highly transmissive subtractive color filters based on an all-dielectric metasurface incorporating TiO2 nanopillars,” Opt. Express, vol. 26, no. 14, pp. 18320–18330, 2018, https://doi.org/10.1364/oe.26.018320.

[85] W. Yue, S. Gao, S.-S. Lee, E.-S. Kim, and D.-Y. Choi, “Highly reflective subtractive color filters capitalizing on a silicon metasurface integrated with nanostructured aluminum mirrors,” Laser Photonics Rev., vol. 11, no. 3, p. 1600285, 2017, https://doi.org/10.1002/lpor.201600285.

[86] N. Yu, F. Aieta, P. Genevet, M. A. Kats, Z. Gaburro, and F. Capasso, “A broadband, background-free quarter-wave plate based on plasmonic metasurfaces,” Nano Lett., vol. 12, no. 12, pp. 6328–6333, 2012, https://doi.org/10.1021/nl303445u.

[87] Z. H. Jiang, L. Lin, D. Ma, et al., “Broadband and wide field-of-view plasmonic metasurface-enabled waveplates,” Sci. Rep., vol. 4, p. 7511, 2014, https://doi.org/10.1038/srep07511.

[88] Y. F. Yu, A. Y. Zhu, R. Paniagua-Domínguez, Y. H. Fu, B. Luk’yanchuk, and A. I. Kuznetsov, “High-transmission dielectric metasurface with 2π phase control at visible wavelengths,” Laser Photonics Rev., vol. 9, no. 4, pp. 412–418, 2015, https://doi.org/10.1002/lpor.201500041.

[89] Z. Zhou, J. Li, R. Su, et al., “Efficient silicon metasurfaces for visible light,” ACS Photonics, vol. 4, no. 3, pp. 544–551, 2017, https://doi.org/10.1021/acsphotonics.6b00740.

[90] K. Chen, J. Deng, N. Zhou, et al., “2π-space uniform-backscattering metasurfaces enabled with geometric phase and magnetic resonance in visible light,” Opt. Express, vol. 28, no. 8, pp. 12331–12341, 2020, https://doi.org/10.1364/oe.389932.

[91] Z. Li, Q. Dai, M. Q. Mehmood, et al., “Full-space cloud of random points with a scrambling metasurface,” Light Sci. Appl., vol. 7, no. 1, p. 63, 2018, https://doi.org/10.1038/s41377-018-0064-3.

[92] Y.-W. Huang, H. W. H. Lee, R. Sokhoyan, et al., “Gate-tunable conducting oxide metasurfaces,” Nano Lett., vol. 16, no. 9, pp. 5319–5325, 2016, https://doi.org/10.1021/acs.nanolett.6b00555.

[93] G. Kafaie Shirmanesh, R. Sokhoyan, R. A. Pala, and H. A. Atwater, “Dual-gated active metasurface at 1550 nm with wide (>300°) phase tunability,” Nano Lett., vol. 18, no. 5, pp. 2957–2963, 2018, https://doi.org/10.1021/acs.nanolett.8b00351.

[94] P. C. Wu, R. A. Pala, G. Kafaie Shirmanesh, et al., “Dynamic beam steering with all-dielectric electro-optic III–V multiple-quantum-well metasurfaces,” Nat. Commun., vol. 10, no. 1, p. 3654, 2019, https://doi.org/10.1038/s41467-019-11598-8.

[95] E. Khaidarov, Z. Liu, R. Paniagua-Domínguez, et al., “Control of LED emission with functional dielectric metasurfaces,” Laser Photonics Rev., vol. 14, no. 1, p. 1900235, 2020, https://doi.org/10.1002/lpor.201900235.

[96] Y.-Y. Xie, P.-N. Ni, Q.-H. Wang, et al., “Metasurface-integrated vertical cavity surface-emitting lasers for programmable directional lasing emissions,” Nat. Nanotechnol., vol. 15, no. 2, pp. 125–130, 2020, https://doi.org/10.1038/s41565-019-0611-y.

[97] W. T. Chen, A. Y. Zhu, and F. Capasso, “Flat optics with dispersion-engineered metasurfaces,” Nat. Rev. Mater., vol. 5, no. 8, pp. 604–620, 2020, https://doi.org/10.1038/s41578-020-0203-3.

[98] Z. Shi, M. Khorasaninejad, Y.-W. Huang, et al., “Single-layer metasurface with controllable multiwavelength functions,” Nano Lett., vol. 18, no. 4, pp. 2420–2427, 2018, https://doi.org/10.1021/acs.nanolett.7b05458.

[99] S. Wang, P. C. Wu, V.-C. Su, et al., “Broadband achromatic optical metasurface devices,” Nat. Commun., vol. 8, no. 1, p. 187, 2017, https://doi.org/10.1038/s41467-017-00166-7.

[100] F. Aieta, M. A. Kats, P. Genevet, and F. Capasso, “Multiwavelength achromatic metasurfaces by dispersive phase compensation,” Science, vol. 347, no. 6228, pp. 1342–1345, 2015, https://doi.org/10.1126/science.aaa2494.

[101] W. T. Chen, A. Y. Zhu, V. Sanjeev, et al., “A broadband achromatic metalens for focusing and imaging in the visible,” Nat. Nanotechnol., vol. 13, no. 3, pp. 220–226, 2018, https://doi.org/10.1038/s41565-017-0034-6.

[102] R. C. Devlin, M. Khorasaninejad, W. T. Chen, J. Oh, and F. Capasso, “Broadband high-efficiency dielectric metasurfaces for the visible spectrum,” Proc. Natl. Acad. Sci. U.S.A., vol. 113, no. 38, p. 10473, 2016, https://doi.org/10.1073/pnas.1611740113.

[103] S. Wang, P. C. Wu, V.-C. Su, et al., “A broadband achromatic metalens in the visible,” Nat. Nanotechnol., vol. 13, no. 3, pp. 227–232, 2018, https://doi.org/10.1038/s41565-017-0052-4.

[104] E. Arbabi, S. M. Kamali, A. Arbabi, and A. Faraon, “Full-Stokes imaging polarimetry using dielectric metasurfaces,” ACS Photonics, vol. 5, no. 8, pp. 3132–3140, 2018, https://doi.org/10.1021/acsphotonics.8b00362.

[105] N. A. Rubin, G. D’Aversa, P. Chevalier, Z. Shi, W. T. Chen, and F. Capasso, “Matrix Fourier optics enables a compact full-Stokes polarization camera,” Science, vol. 365, no. 6448, p. eaax1839, 2019, https://doi.org/10.1126/science.aax1839.

[106] Y. Intaravanne and X. Chen, “Recent advances in optical metasurfaces for polarization detection and engineered polarization profiles,” Nanophotonics, vol. 9, no. 5, pp. 1003–1014, 2020, https://doi.org/10.1515/nanoph-2019-0479.

[107] M. Faraji-Dana, E. Arbabi, A. Arbabi, S. M. Kamali, H. Kwon, and A. Faraon, “Compact folded metasurface spectrometer,” Nat. Commun., vol. 9, no. 1, p. 4196, 2018, https://doi.org/10.1038/s41467-018-06495-5.

[108] Q. Chen, D. Das, D. Chitnis, et al., “A CMOS image sensor integrated with plasmonic colour filters,” Plasmonics, vol. 7, no. 4, pp. 695–699, 2012, https://doi.org/10.1007/s11468-012-9360-6.

[109] Q. Chen, D. Chitnis, K. Walls, T. D. Drysdale, S. Collins, and D. R. S. Cumming, “CMOS photodetectors integrated with plasmonic color filters,” IEEE Photonics Technol. Lett., vol. 24, no. 3, pp. 197–199, 2012, https://doi.org/10.1109/lpt.2011.2176333.

[110] Y.-T. Yoon, S.-S. Lee, and B.-S. Lee, “Nano-patterned visible wavelength filter integrated with an image sensor exploiting a 90-nm CMOS process,” Photonics Nanostruct.: Fundam. Appl., vol. 10, no. 1, pp. 54–59, 2012, https://doi.org/10.1016/j.photonics.2011.07.002.

[111] S. Yokogawa, S. P. Burgos, and H. A. Atwater, “Plasmonic color filters for CMOS image sensor applications,” Nano Lett., vol. 12, no. 8, pp. 4349–4354, 2012, https://doi.org/10.1021/nl302110z.

[112] N. Li, H. Y. Fu, Y. Dong, et al., “Large-area pixelated metasurface beam deflector on a 12-inch glass wafer for random point generation,” Nanophotonics, vol. 8, no. 10, p. 1855, 2019, https://doi.org/10.1515/nanoph-2019-0208.

[113] N. Li, Z. Xu, Y. Dong, et al., “Large-area metasurface on CMOS-compatible fabrication platform: driving flat optics from lab to fab,” Nanophotonics, vol. 9, no. 10, pp. 3071–3087, 2020, https://doi.org/10.1515/nanoph-2020-0063.

[114] J. W. Stewart, G. M. Akselrod, D. R. Smith, and M. H. Mikkelsen, “Toward multispectral imaging with colloidal metasurface pixels,” Adv. Mater., vol. 29, no. 6, p. 1602971, 2017, https://doi.org/10.1002/adma.201602971.

[115] Z. Xu, Y. Dong, C.-K. Tseng, et al., “CMOS-compatible all-Si metasurface polarizing bandpass filters on 12-inch wafers,” Opt. Express, vol. 27, no. 18, pp. 26060–26069, 2019, https://doi.org/10.1364/oe.27.026060.

[116] Z. Xu, N. Li, Y. Dong, et al., “Metasurface-based subtractive color filter fabricated on a 12-inch glass wafer using CMOS platform,” Photonics Res., vol. 9, no. 1, pp. 13–20, 2020, https://doi.org/10.1364/prj.404124.

[117] L. Kong, S. Stephen, M. G. Duckworth, et al., “Handheld erythema and bruise detector,” in Medical Imaging 2008: Computer-Aided Diagnosis, vol. 6915, San Diego, California, United States, Proc. SPIE, 2008, p. 69153K, https://doi.org/10.1117/12.770718.

[118] L. Kong, D. Yi, S. H. Sprigle, et al., “Single sensor that outputs narrowband multispectral images,” J. Biomed. Opt., vol. 15, no. 1, pp. 1–3, 2010, https://doi.org/10.1117/1.3277669.

[119] J. Berzinš, S. Fasold, T. Pertsch, S. M. B. Bäumer, and F. Setzpfandt, “Submicrometer nanostructure-based RGB filters for CMOS image sensors,” ACS Photonics, vol. 6, no. 4, pp. 1018–1025, 2019, https://doi.org/10.1021/acsphotonics.9b00021.

[120] T. Xu, Y.-K. Wu, X. Luo, and L. J. Guo, “Plasmonic nanoresonators for high-resolution colour filtering and spectral imaging,” Nat. Commun., vol. 1, no. 1, p. 59, 2010, https://doi.org/10.1038/ncomms1058.

[121] D. Fleischman, L. A. Sweatlock, H. Murakami, and H. Atwater, “Hyper-selective plasmonic color filters,” Opt. Express, vol. 25, no. 22, pp. 27386–27395, 2017, https://doi.org/10.1364/oe.25.027386.

[122] I. J. H. McCrindle, J. P. Grant, L. C. P. Gouveia, and D. R. S. Cumming, “Infrared plasmonic filters integrated with an optical and terahertz multi-spectral material,” Phys. Status Solidi A, vol. 212, no. 8, pp. 1625–1633, 2015, https://doi.org/10.1002/pssa.201431943.

[123] J. Grant, I. J. H. McCrindle, C. Li, and D. R. S. Cumming, “Multispectral metamaterial absorber,” Opt. Lett., vol. 39, no. 5, pp. 1227–1230, 2014, https://doi.org/10.1364/ol.39.001227.

[124] J. Bao and M. G. Bawendi, “A colloidal quantum dot spectrometer,” Nature, vol. 523, no. 7558, pp. 67–70, 2015, https://doi.org/10.1038/nature14576.

[125] V. Jovanov, H. Stiebig, and D. Knipp, “Tunable multispectral color sensor with plasmonic reflector,” ACS Photonics, vol. 5, no. 2, pp. 378–383, 2018, https://doi.org/10.1021/acsphotonics.7b00402.

[126] K. W. Mauser, S. Kim, S. Mitrovic, et al., “Resonant thermoelectric nanophotonics,” Nat. Nanotechnol., vol. 12, no. 8, pp. 770–775, 2017, https://doi.org/10.1038/nnano.2017.87.

[127] A. Wang and Y. Dan, “Mid-infrared plasmonic multispectral filters,” Sci. Rep., vol. 8, no. 1, p. 11257, 2018, https://doi.org/10.1038/s41598-018-29177-0.

[128] J. Feng, X. Li, Z. Shi, et al., “2D ductile transition metal chalcogenides (TMCs): novel high-performance Ag2S nanosheets for ultrafast photonics,” Adv. Opt. Mater., vol. 8, no. 6, p. 1901762, 2020, https://doi.org/10.1002/adom.201901762.

[129] T. Feng, X. Li, P. Guo, Y. Zhang, J. Liu, and H. Zhang, “MXene: two dimensional inorganic compounds, for generation of bound state soliton pulses in nonlinear optical system,” Nanophotonics, vol. 9, no. 8, pp. 2505–2513, 2020, https://doi.org/10.1515/nanoph-2020-0011.

[130] D. Zhang, C. Zhang, X. Li, and A. Qyyum, “Layered iron pyrite for ultrafast photonics application,” Nanophotonics, vol. 9, no. 8, pp. 2515–2522, 2020, https://doi.org/10.1515/nanoph-2020-0014.

[131] P. Guo, X. Li, T. Feng, Y. Zhang, and W. Xu, “Few-layer bismuthene for coexistence of harmonic and dual wavelength in a mode-locked fiber laser,” ACS Appl. Mater. Interfaces, vol. 12, no. 28, pp. 31757–31763, 2020, https://doi.org/10.1021/acsami.0c05325.

[132] J. Feng, X. Li, G. Zhu, and Q. J. Wang, “Emerging high-performance SnS/CdS nanoflower heterojunction for ultrafast photonics,” ACS Appl. Mater. Interfaces, vol. 12, no. 38, pp. 43098–43105, 2020, https://doi.org/10.1021/acsami.0c12907.

[133] Y. Zhao, W. Wang, X. Li, et al., “Functional porous MOF-derived CuO octahedra for harmonic soliton molecule pulses generation,” ACS Photonics, vol. 7, no. 9, pp. 2440–2447, 2020, https://doi.org/10.1021/acsphotonics.0c00520.

[134] T. Feng, X. Li, T. Chai, et al., “Bismuthene nanosheets for 1 μm multipulse generation,” Langmuir, vol. 36, no. 1, pp. 3–8, 2020, https://doi.org/10.1021/acs.langmuir.9b01910.

[135] Y. Zhang, X. Li, A. Qyyum, et al., “PbS nanoparticles for ultrashort pulse generation in optical communication region,” Part. Part. Syst. Char., vol. 35, no. 11, p. 1800341, 2018, https://doi.org/10.1002/ppsc.201800341.

[136] Y. Wang, Y. Chen, X. Li, et al., “Optical-intensity modulator with InSb nanosheets,” Appl. Mater. Today, vol. 21, p. 100852, 2020, https://doi.org/10.1016/j.apmt.2020.100852.

[137] X. Li, J. Feng, W. Mao, F. Yin, and J. Jiang, “Emerging uniform Cu2O nanocubes for 251st harmonic ultrashort pulse generation,” J. Mater. Chem. C, vol. 8, no. 41, pp. 14386–14392, 2020, https://doi.org/10.1039/d0tc03622f.

[138] Z. Gong, A. Bruch, M. Shen, et al., “High-fidelity cavity soliton generation in crystalline AlN micro-ring resonators,” Opt. Lett., vol. 43, no. 18, pp. 4366–4369, 2018, https://doi.org/10.1364/ol.43.004366.

[139] J. Liu, H. Tian, E. Lucas, et al., “Monolithic piezoelectric control of soliton microcombs,” Nature, vol. 583, no. 7816, pp. 385–390, 2020, https://doi.org/10.1038/s41586-020-2465-8.

[140] A. W. Bruch, X. Liu, Z. Gong, et al., “Pockels soliton microcomb,” Nat. Photonics, vol. 15, pp. 21–27, 2021, https://doi.org/10.1038/s41566-020-00704-8.

[141] J. Liu, E. Lucas, A. S. Raja, et al., “Photonic microwave generation in the X- and K-band using integrated soliton microcombs,” Nat. Photonics, vol. 14, no. 8, pp. 486–491, 2020, https://doi.org/10.1038/s41566-020-0617-x.

[142] B. Shen, L. Chang, J. Liu, et al., “Integrated turnkey soliton microcombs,” Nature, vol. 582, no. 7812, pp. 365–369, 2020, https://doi.org/10.1038/s41586-020-2358-x.

[143] Th. Udem, R. Holzwarth, and T. W. Hänsch, “Optical frequency metrology,” Nature, vol. 416, no. 6877, pp. 233–237, 2002, https://doi.org/10.1038/416233a.

[144] A. Schliesser, N. Picqué, and T. W. Hänsch, “Mid-infrared frequency combs,” Nat. Photonics, vol. 6, no. 7, pp. 440–449, 2012, https://doi.org/10.1038/nphoton.2012.142.

[145] D. T. Spencer, T. Drake, T. C. Briles, et al., “An optical-frequency synthesizer using integrated photonics,” Nature, vol. 557, no. 7703, pp. 81–85, 2018, https://doi.org/10.1038/s41586-018-0065-7.

[146] N. Singh, M. Xin, N. Li, et al., “Silicon photonics optical frequency synthesizer,” Laser Photonics Rev., vol. 14, p. 1900449, 2020, https://doi.org/10.1002/lpor.201900449.

[147] M. Xin, N. Li, N. Singh, et al., “Optical frequency synthesizer with an integrated erbium tunable laser,” Light Sci. Appl., vol. 8, no. 1, p. 122, 2019, https://doi.org/10.1038/s41377-019-0233-z.

[148] Y. Na, C.-G. Jeon, C. Ahn, et al., “Ultrafast, sub-nanometre-precision and multifunctional time-of-flight detection,” Nat. Photonics, vol. 14, no. 6, pp. 355–360, 2020, https://doi.org/10.1038/s41566-020-0586-0.

[149] E. Lucas, P. Brochard, R. Bouchand, S. Schilt, T. Südmeyer, and T. J. Kippenberg, “Ultralow-noise photonic microwave synthesis using a soliton microcomb-based transfer oscillator,” Nat. Commun., vol. 11, no. 1, p. 374, 2020, https://doi.org/10.1038/s41467-019-14059-4.

[150] S. A. Diddams, “The evolving optical frequency comb [Invited],” J. Opt. Soc. Am. B, vol. 27, no. 11, pp. B51–B62, 2010, https://doi.org/10.1364/josab.27.000b51.

[151] T. Fortier and E. Baumann, “20 years of developments in optical frequency comb technology and applications,” Commun. Phys., vol. 2, no. 1, p. 153, 2019, https://doi.org/10.1038/s42005-019-0249-y.

[152] S. A. Diddams, K. Vahala, and T. Udem, “Optical frequency combs: coherently uniting the electromagnetic spectrum,” Science, vol. 369, no. 6501, p. eaay3676, 2020, https://doi.org/10.1126/science.aay3676.

[153] A. L. Gaeta, M. Lipson, and T. J. Kippenberg, “Photonic-chip-based frequency combs,” Nat. Photonics, vol. 13, no. 3, pp. 158–169, 2019, https://doi.org/10.1038/s41566-019-0358-x.

[154] E. Obrzud, M. Rainer, A. Harutyunyan, et al., “A microphotonic astrocomb,” Nat. Photonics, vol. 13, no. 1, pp. 31–35, 2019, https://doi.org/10.1038/s41566-018-0309-y.

[155] M. Kues, C. Reimer, J. M. Lukens, et al., “Quantum optical microcombs,” Nat. Photonics, vol. 13, no. 3, pp. 170–179, 2019, https://doi.org/10.1038/s41566-019-0363-0.

[156] T. Herr, K. Hartinger, J. Riemensberger, et al., “Universal formation dynamics and noise of Kerr-frequency combs in microresonators,” Nat. Photonics, vol. 6, no. 7, pp. 480–487, 2012, https://doi.org/10.1038/nphoton.2012.127.

[157] P. Del’Haye, A. Schliesser, O. Arcizet, T. Wilken, R. Holzwarth, and T. J. Kippenberg, “Optical frequency comb generation from a monolithic microresonator,” Nature, vol. 450, no. 7173, pp. 1214–1217, 2007, https://doi.org/10.1038/nature06401.

[158] M. H. P. Pfeiffer, A. Kordts, V. Brasch, et al., “Photonic Damascene process for integrated high-Q microresonator based nonlinear photonics,” Optica, vol. 3, no. 1, pp. 20–25, 2016, https://doi.org/10.1364/optica.3.000020.

[159] J. Sun, E. Timurdogan, A. Yaacobi, E. S. Hosseini, and M. R. Watts, “Large-scale nanophotonic phased array,” Nature, vol. 493, no. 7431, pp. 195–199, 2013, https://doi.org/10.1038/nature11727.

[160] C. V. Poulton, M. J. Byrd, M. Raval, et al., “Large-scale silicon nitride nanophotonic phased arrays at infrared and visible wavelengths,” Opt. Lett., vol. 42, no. 1, pp. 21–24, 2017, https://doi.org/10.1364/ol.42.000021.

[161] J. Notaros, N. Li, C. V. Poulton, et al., “CMOS-compatible optical phased array powered by a monolithically-integrated erbium laser,” J. Lightwave Technol., vol. 37, no. 24, pp. 5982–5987, 2019, https://doi.org/10.1109/jlt.2019.2944607.

[162] S. A. Tadesse and M. Li, “Sub-optical wavelength acoustic wave modulation of integrated photonic resonators at microwave frequencies,” Nat. Commun., vol. 5, no. 1, p. 5402, 2014, https://doi.org/10.1038/ncomms6402.

[163] D. B. Sohn, S. Kim, and G. Bahl, “Time-reversal symmetry breaking with acoustic pumping of nanophotonic circuits,” Nat. Photonics, vol. 12, no. 2, pp. 91–97, 2018, https://doi.org/10.1038/s41566-017-0075-2.

[164] H. Tian, J. Liu, B. Dong, et al., “Hybrid integrated photonics using bulk acoustic resonators,” Nat. Commun., vol. 11, no. 1, p. 3073, 2020, https://doi.org/10.1038/s41467-020-16812-6.

[165] Y. Jiang, S. Karpf, and B. Jalali, “Time-stretch LiDAR as a spectrally scanned time-of-flight ranging camera,” Nat. Photonics, vol. 14, no. 1, pp. 14–18, 2020, https://doi.org/10.1038/s41566-019-0548-6.

[166] N. Singh, M. Xin, D. Vermeulen, et al., “Octave-spanning coherent supercontinuum generation in silicon on insulator from 1.06 μm to beyond 2.4 μm,” Light Sci. Appl., vol. 7, p. 17131, 2018, https://doi.org/10.1038/lsa.2017.131.

[167] N. Singh, D. Vermeulen, A. Ruocco, et al., “Supercontinuum generation in varying dispersion and birefringent silicon waveguide,” Opt. Express, vol. 27, no. 22, pp. 31698–31712, 2019, https://doi.org/10.1364/oe.27.031698.

[168] B. Stern, X. Ji, Y. Okawachi, A. L. Gaeta, and M. Lipson, “Battery-operated integrated frequency comb generator,” Nature, vol. 562, no. 7727, pp. 401–405, 2018, https://doi.org/10.1038/s41586-018-0598-9.

[169] Z. Dong, J. Ho, Y. F. Yu, et al., “Printing beyond sRGB color gamut by mimicking silicon nanostructures in free-space,” Nano Lett., vol. 17, no. 12, pp. 7620–7628, 2017, https://doi.org/10.1021/acs.nanolett.7b03613.

[170] J. Proust, F. Bedu, B. Gallas, I. Ozerov, and N. Bonod, “All-dielectric colored metasurfaces with silicon Mie resonators,” ACS Nano, vol. 10, no. 8, pp. 7761–7767, 2016, https://doi.org/10.1021/acsnano.6b03207.

[171] M. Miyata, H. Hatada, and J. Takahara, “Full-color subwavelength printing with gap-plasmonic optical antennas,” Nano Lett., vol. 16, no. 5, pp. 3166–3172, 2016, https://doi.org/10.1021/acs.nanolett.6b00500.

[172] S. J. Tan, L. Zhang, D. Zhu, et al., “Plasmonic color palettes for photorealistic printing with aluminum nanostructures,” Nano Lett., vol. 14, no. 7, pp. 4023–4029, 2014, https://doi.org/10.1021/nl501460x.

[173] A. Kristensen, J. K. W. Yang, S. I. Bozhevolnyi, et al., “Plasmonic colour generation,” Nat. Rev. Mater., vol. 2, no. 1, p. 16088, 2016, https://doi.org/10.1038/natrevmats.2016.88.

[174] T. Lee, J. Jang, H. Jeong, and J. Rho, “Plasmonic- and dielectric-based structural coloring: from fundamentals to practical applications,” Nano Converg., vol. 5, no. 1, p. 1, 2018, https://doi.org/10.1186/s40580-017-0133-y.

[175] Y. Dong, Z. Xu, N. Li, et al., “Si metasurface half-wave plates demonstrated on a 12-inch CMOS platform,” Nanophotonics, vol. 9, no. 1, pp. 149–157, 2019, https://doi.org/10.1515/nanoph-2019-0364.

[176] Q. Zhong, Y. Li, T. Hu, et al., “1550 nm-wavelength metalens demonstrated on 12-inch Si CMOS platform,” in 2019 IEEE 16th International Conference on Group IV Photonics (GFP), Singapore, IEEE, 2019, pp. 1–2, https://doi.org/10.1109/GROUP4.2019.8925873.

[177] J.-S. Park, S. Zhang, A. She, et al., “All-glass, large metalens at visible wavelength using deep-ultraviolet projection lithography,” Nano Lett., vol. 19, no. 12, pp. 8673–8682, 2019, https://doi.org/10.1021/acs.nanolett.9b03333.

[178] S. Colburn, A. Zhan, E. Bayati, et al., “Broadband transparent and CMOS-compatible flat optics with silicon nitride metasurfaces [Invited],” Opt. Mater. Express, vol. 8, no. 8, pp. 2330–2344, 2018, https://doi.org/10.1364/ome.8.002330.

[179] L. Guo, Z. Hu, R. Wan, et al., “Design of aluminum nitride metalens for broadband ultraviolet incidence routing,” Nanophotonics, vol. 8, no. 1, pp. 171–180, 2019, https://doi.org/10.1515/nanoph-2018-0151.

[180] Z. Hu, L. Long, R. Wan, et al., “Ultrawide bandgap AlN metasurfaces for ultraviolet focusing and routing,” Opt. Lett., vol. 45, no. 13, pp. 3466–3469, 2020, https://doi.org/10.1364/ol.395909.

[181] E. Arbabi, A. Arbabi, S. M. Kamali, Y. Horie, M. Faraji-Dana, and A. Faraon, “MEMS-tunable dielectric metasurface lens,” Nat. Commun., vol. 9, no. 1, p. 812, 2018, https://doi.org/10.1038/s41467-018-03155-6.

[182] T. Roy, S. Zhang, I. W. Jung, M. Troccoli, F. Capasso, and D. Lopez, “Dynamic metasurface lens based on MEMS technology,” APL Photonics, vol. 3, no. 2, p. 021302, 2018, https://doi.org/10.1063/1.5018865.

[183] A. She, S. Zhang, S. Shian, D. R. Clarke, and F. Capasso, “Adaptive metalenses with simultaneous electrical control of focal length, astigmatism, and shift,” Sci. Adv., vol. 4, no. 2, p. eaap9957, 2018, https://doi.org/10.1126/sciadv.aap9957.

[184] M. J. Byrd, E. Timurdogan, Z. Su, et al., “Mode-evolution-based coupler for high saturation power Ge-on-Si photodetectors,” Opt. Lett., vol. 42, no. 4, pp. 851–854, 2017, https://doi.org/10.1364/ol.42.000851.

[185] C. Xiong, W. H. P. Pernice, and H. X. Tang, “Low-loss, silicon integrated, aluminum nitride photonic circuits and their use for electro-optic signal processing,” Nano Lett., vol. 12, no. 7, pp. 3562–3568, 2012, https://doi.org/10.1021/nl3011885.

[186] C. Xiong, W. H. P. Pernice, X. Sun, C. Schuck, K. Y. Fong, and H. X. Tang, “Aluminum nitride as a new material for chip-scale optomechanics and nonlinear optics,” New J. Phys., vol. 14, no. 9, p. 095014, 2012, https://doi.org/10.1088/1367-2630/14/9/095014.

[187] H. Jung, R. Stoll, X. Guo, D. Fischer, and H. X. Tang, “Green, red, and IR frequency comb line generation from single IR pump in AlN microring resonator,” Optica, vol. 1, no. 6, pp. 396–399, 2014, https://doi.org/10.1364/optica.1.000396.

[188] H. Jung and H. X. Tang, “Aluminum nitride as nonlinear optical material for on-chip frequency comb generation and frequency conversion,” Nanophotonics, vol. 5, no. 2, pp. 263–271, 2016, https://doi.org/10.1515/nanoph-2016-0020.

[189] S. Zhu and G.-Q. Lo, “Aluminum nitride electro-optic phase shifter for backend integration on silicon,” Opt. Express, vol. 24, no. 12, pp. 12501–12506, 2016, https://doi.org/10.1364/oe.24.012501.

[190] S. Zhu, Q. Zhong, T. Hu, et al., “Aluminum nitride ultralow loss waveguides and push-pull electro-optic modulators for near infrared and visible integrated photonics,” in Optical Fiber Communication Conference (OFC) 2019, OSA Technical Digest, San Diego, California, United States, Optical Society of America, 2019, p. W2A.11, https://doi.org/10.1364/OFC.2019.W2A.11.

[191] C. Haffner, A. Joerg, M. Doderer, et al., “Nano–opto-electro-mechanical switches operated at CMOS-level voltages,” Science, vol. 366, no. 6467, p. 860, 2019, https://doi.org/10.1126/science.aay8645.

Received: 2020-11-25
Accepted: 2021-01-16
Published Online: 2021-02-12

© 2021 Nanxi Li et al., published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.