Open Access
25 August 2020

Spatial misregistration in hyperspectral cameras: lab characterization and impact on data quality in real-world images
Gudrun Høye, Andrei Fridman
Abstract

Hyperspectral cameras capture images where every pixel contains spectral information of the corresponding small area of the depicted scene. Spatial misregistration—differences in spatial sampling between different spectral channels—is one of the key quality parameters of these cameras, because it may have a large impact on the accuracy of the captured spectra. Spatial misregistration encompasses several factors, such as differences in the position of the optical point spread function (PSF) in different spectral channels, differences in PSF size, and differences in PSF shape. Ideally, there should be no difference in spatial sampling across the spectral channels, but in any real camera, all these factors are present to some degree. Our work shows the magnitude of the spectral errors caused by these spatial misregistration factors of different magnitudes and in various combinations when acquiring hyperspectral images of real scenes. The spectral errors are calculated in Virtual Camera software where high-resolution airborne images of real-world scenes and several PSFs of different hyperspectral cameras are used as the input. The misregistration factors are simulated. Two different methods for quantifying spatial misregistration in the lab are also tested and compared using the correlation with the errors in the real-world scenes as the criterion. The results are used to suggest the best camera characterization approach that would adequately predict spatial misregistration errors and allow reliable comparison of different hyperspectral cameras to each other.

1.

Introduction

Hyperspectral cameras capture images where every pixel contains spectral information of the corresponding small area of the depicted scene. Pushbroom cameras are currently the most widely used type of hyperspectral camera. Pushbroom hyperspectral cameras can be made to acquire images with high spatial and spectral resolution. When used in geology, agriculture, the food industry, defense, etc., these instruments make it possible to collect spatial and spectral information efficiently and reliably. As useful as these instruments are, they are not perfect, and some pushbroom hyperspectral cameras are closer to perfection than others. In this paper, we will discuss spatial misregistration—one particular imperfection that is often overlooked or misunderstood by users of hyperspectral cameras.

The paper is organized as follows. We will start by discussing the nature of spatial misregistration and the error sources that contribute to it. Then, we will briefly describe two existing methods that are designed to characterize spatial misregistration in the lab for assessment of camera performance. In order to examine the validity of the two methods, we will compare the spatial misregistration predicted by the methods with the errors in spectra when acquiring images of real scenes. We will also show how the two methods can be combined to further improve the accuracy of the lab characterization.

In order to have realistic test conditions, good control over the comparison process, and enough data for drawing the conclusions, we will simulate various camera imperfections using images of real airborne scenes and known properties of real hyperspectral cameras as input.

The discussion will be focused on hyperspectral cameras of the pushbroom type and spatial misregistration in the across-track direction. An earlier version of the manuscript was presented at SPIE Defense + Commercial Sensing 2020.1

2.

Nature of Spatial Misregistration

Keystone, point spread function (PSF) width differences, and PSF shape differences all cause spatial misregistration errors.2 In real hyperspectral cameras, these error sources are present to various degrees. Figure 1 shows an example: (a) an image of a polychromatic point source on the sensor of a pushbroom hyperspectral camera and (b) the captured spectrum. Since the camera in this example has significant keystone (shift of the PSF position as a function of wavelength), as well as significantly different PSFs in different spectral channels, the captured spectrum of the point source is not correct.

Fig. 1

(a) Image of a polychromatic point source on the sensor of a pushbroom hyperspectral camera with large spatial misregistration. Only a single pixel column, representing one spatial pixel, is shown. A small bell-like curve in each spectral channel shows the shape of the corresponding PSF. Note the large keystone (PSF shift) and large differences in PSF width between different spectral channels. (b) True spectrum of the polychromatic point source and the captured spectrum of the same source. The large difference between the two spectra is caused by the large spatial misregistration of this particular hyperspectral camera.


Currently, camera specifications typically describe only one or two of the above-mentioned error sources separately: keystone is usually specified,3 PSF width differences are specified much less frequently,4 whereas PSF shape differences are almost never shown. Each of these error sources leads to spatial misregistration errors in the acquired data: for a given spatial pixel, the energy is not collected from exactly the same area in all spectral channels—instead, the area is somewhat different in different spectral channels. However, the magnitudes of these different error sources cannot be compared directly. We will demonstrate this with an example. Assume two cameras with the following specifications.

  • - Camera 1 has nearly identical PSF width and shape across the camera’s spatial and spectral range, but the keystone is 0.1 pixel.

  • - Camera 2 has almost perfect keystone correction, but the PSF full-width at half-maximum (PSF FWHM) varies between 0.5 and 0.7 pixel across the camera’s spectral range.

Although both keystone and PSF differences lead to errors in the acquired spectra, it would be difficult in this example to choose the camera with the smallest errors in the acquired data, since the given numbers cannot be directly compared. Is a keystone of 0.1 pixel (camera 1) better or worse than a PSF FWHM variation from 0.5 to 0.7 pixel (camera 2) with respect to spatial misregistration errors? Fortunately, it turns out that it may be possible to express all three error sources discussed above in a single number—making it possible to directly compare different cameras with respect to spatial misregistration errors. In the next section, we will present two methods for characterizing spatial misregistration, which do precisely this.

3.

Two Methods for Characterizing Spatial Misregistration

In this paper, we discuss two different methods for characterizing spatial misregistration in hyperspectral cameras that are intended for assessment of camera performance. Both methods combine the effects of keystone and PSF width and shape differences into a single number that describes the spatial misregistration. The first method—which we will refer to as method 1—is an established method for quantifying coregistration, known since the 1990s.5 In 2012, the method was suggested for inclusion in a possible future standard for hyperspectral cameras.6 The second method, which we will refer to as method 2, was proposed more recently (2015) to simplify the measurement procedure as well as to provide a more intuitive and straightforward way to calculate the expected misregistration errors for a hyperspectral camera.7 Since the data acquired by a hyperspectral camera may be postprocessed in order to reduce misregistration,3,4,8 both methods should be applied to the final hyperspectral datacube.

We will give a short summary of the two methods below. For simplicity, the methods will be explained for the one-dimensional case only, but both methods are also suitable for use in two dimensions. The methods use point source measurements for the calculations, and both methods can be explained in terms of the sampling point spread function (SPSF), which will be briefly described below.

3.1.

Sampling Point Spread Function

The SPSF combines the effects of the optical PSF and the pixel response function (assuming that the PSF is approximately constant for a small part of the camera’s field of view around the pixel of interest). As an example, let us assume a sensor pixel that is uniformly sensitive to light inside its borders and completely insensitive to light outside the borders. Then considering just one spatial dimension, the pixel response function will look as shown in Fig. 2(a).

Fig. 2

Pixel response function shown (a) alone, (b) together with the PSF (blue), and (c) together with the corresponding SPSF (green).


The SPSF in the across-track direction can be measured by first placing a point source well outside the pixel and then moving the point source in small steps across the field of view, i.e., in the across-track direction, while measuring the signal in the pixel for each position of the point source.7 When the point source is positioned far from the pixel, only a small percentage of the energy will hit the pixel due to the shape of the PSF. When the point source is positioned closer to the pixel—or especially inside the pixel—a larger portion of the energy will hit the pixel. The signal registered by the pixel as a function of point source position is the SPSF. Here is an example: when moving the PSF shown in Fig. 2(b) across the camera’s field of view and measuring the signal in a pixel that has a response function as shown in Fig. 2(a), the resulting SPSF will be as shown in Fig. 2(c).

The SPSF, when normalized so that the area under the curve is equal to 1, tells us how large a fraction of the light ends up in the pixel for a given point source position. In the example in Fig. 2(c), 30% of the light ends up in the pixel when the point source is at some distance outside the pixel (x=0.8), whereas 85% of the light ends up in the pixel when the point source is positioned at the center of the pixel (x=0). A more general definition of the SPSF can be found in previously published literature,5,6,9 as well as information about how the SPSF could be measured in two dimensions.
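The construction of the SPSF can be illustrated with a short numerical sketch (our own illustration, not part of the measurement procedure described above): a sampled PSF is shifted across an ideal boxcar pixel response, and the overlap is integrated for each point source position. The Gaussian PSF and the grid parameters are arbitrary assumptions chosen only for the example.

```python
import numpy as np

def spsf_1d(psf, psf_x, pixel_width=1.0, positions=None):
    """Numerically build a 1-D SPSF: for each point-source position,
    integrate the part of the (shifted) PSF that falls inside the pixel.

    psf       -- sampled PSF values (normalized here to unit area)
    psf_x     -- sample coordinates of the PSF, in units of pixels
    positions -- point-source positions (pixel center at x = 0)
    """
    dx = psf_x[1] - psf_x[0]
    psf = psf / (psf.sum() * dx)                # normalize PSF to unit area
    if positions is None:
        positions = np.linspace(-1.5, 1.5, 61)  # scan well beyond the pixel
    spsf = np.empty_like(positions)
    for k, x0 in enumerate(positions):
        shifted_x = psf_x + x0                  # PSF centered on the point source
        inside = np.abs(shifted_x) <= pixel_width / 2.0
        spsf[k] = psf[inside].sum() * dx        # fraction of the light in the pixel
    return positions, spsf

# Example with an illustrative Gaussian PSF (FWHM ~ 0.6 pixel)
x = np.linspace(-3, 3, 601)
sigma = 0.6 / 2.355
psf = np.exp(-0.5 * (x / sigma) ** 2)
pos, spsf = spsf_1d(psf, x)
print(spsf[np.argmin(np.abs(pos))])   # fraction collected with the source at the pixel center
```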

In a hyperspectral camera with perfect spatial coregistration, the SPSFs will be identical across the entire wavelength range for any given spatial pixel. However, in a real camera, the SPSFs will be somewhat different in each spectral channel: they will be shifted somewhat relative to each other due to keystone, they will have somewhat different widths at half-maximum, and the finer details of their shapes will also be somewhat different.

3.2.

Method 1

Method 1 calculates the misregistration between two channels based on the measured SPSFs.6 This gives an upper bound on the absolute radiometric signal error for extended sources.10 Figure 3(a) shows an example of the normalized SPSFs for two channels i (green) and j (blue). The two SPSFs have different shapes and widths and are shifted compared to each other, which leads to misregistration. For instance, when the point source is at the left border of the pixel (x = −0.50), about 72% of the light in channel i (green) ends up in the pixel, whereas only 22% of the light in channel j (blue) ends up in the same pixel.

Fig. 3

(a) Example of the normalized SPSFs for two channels i (green) and j (blue). (b) Border between two materials A and B. The border is positioned at the crossing point between the two SPSFs.


In order to calculate the misregistration, method 1 considers a scene consisting of two materials A and B [see Fig. 3(b)], where A is bright and B is completely dark. The amount of light from material A that ends up in the pixel of interest is in this case proportional to the integrals of the green and blue curves—in the area where material A is present—for channels i and j, respectively. The difference between these two integrals, i.e., the area Δw in Fig. 3(b), is said to be the misregistration.5,6

Eq. (1)

$$\Delta w = \frac{1}{2}\int_x \left| f_j(x,y) - f_i(x,y) \right| \mathrm{d}x,$$

where $f_i(x)$ is the SPSF for channel i and the integral is taken across the entire length of the across-track image line.

The misregistration is always calculated for the case where the border is at the crossing point between the two SPSFs [as shown in Fig. 3(b)], since this is the position of the border that would give the largest error. The maximum misregistration for a given spatial pixel is, for this method, defined as the largest misregistration measured across all pairs of spectral channels,6 whereas the mean misregistration is calculated as the average across all channel pairs.
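As a minimal numerical sketch of Eq. (1), assuming the two SPSFs are sampled on a common grid of point source positions and already normalized to unit area (as in Sec. 3.1), the misregistration between two channels is half the integrated absolute difference of the two curves. The toy SPSF used in the example (a triangle of base 2, i.e., a boxcar PSF convolved with a boxcar pixel) is our own assumption.

```python
import numpy as np

def method1_pairwise(spsf_i, spsf_j, dx):
    """Misregistration between channels i and j per Eq. (1): half the
    integrated absolute difference between their normalized SPSFs.
    spsf_i, spsf_j -- SPSF samples on a common grid of point source positions
    dx             -- grid step, in pixels
    """
    return 0.5 * np.sum(np.abs(spsf_j - spsf_i)) * dx

# Example: a pure keystone of 0.1 pixel between two otherwise identical SPSFs
pos = np.linspace(-1.5, 1.5, 301)
dx = pos[1] - pos[0]
spsf = np.clip(1.0 - np.abs(pos), 0.0, None)      # toy triangular SPSF, unit area
spsf /= spsf.sum() * dx
spsf_shifted = np.interp(pos - 0.1, pos, spsf)    # same SPSF shifted by 0.1 pixel
print(method1_pairwise(spsf, spsf_shifted, dx))   # ~0.1, close to the keystone for this toy SPSF
```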

3.3.

Method 2

Method 2 calculates the misregistration errors directly from the point source measurements without first forming the SPSFs.7 The method is based on a very basic principle: simply determine “how much of the energy collected by the hyperspectral camera ends up in the correct pixel in the final data cube.” For each point source position within the pixel, the misregistration is calculated as the relative difference between the energy measured in a given spectral channel [solid line in Fig. 1(b)] and the average energy across all spectral channels [dashed line in Fig. 1(b)]. The maximum misregistration is defined as the largest measured misregistration across all point source positions, whereas the mean misregistration is calculated as the average across all point source positions.

For easy comparison with method 1, method 2 can also be explained in the framework of SPSFs. The key differences between the two methods can then be understood more easily.

Figure 4 shows how misregistration for method 2 is defined and calculated in the SPSF framework. Green and blue curves represent the normalized SPSFs for channels i and j, respectively, as before. The black curve is the mean of the green and the blue curve and is used as a reference. For a given point source position x, the black curve shows the fraction of light that would be recorded in the pixel of interest in each channel if there were no keystone or PSF differences in the system (i.e., zero misregistration). For instance, for position x=0.20 approximately 60% of the light in a given channel would end up in the pixel of interest in the ideal case.

Fig. 4

Misregistration as defined by method 2, explained in the SPSF framework for easy comparison with method 1. The misregistration calculations for method 2 are based on the assumption of a point source. Only positions inside the pixel of interest (white area in the figure) are used when calculating the misregistration.


In reality, the pixel of interest will get more light in some spectral channels and less in others. In the example above, for position x=0.20, the pixel of interest gets too much light in channel i (green) and too little light in channel j (blue). How large the deviation is ($\Delta S_x^i$, green vertical line) compared to the ideal value ($S_x$, black vertical line) tells us how large the errors in the system will be. The misregistration (for channel i and position x) is, therefore, defined as the ratio $\Delta S_x^i / S_x$. This ratio is calculated for all channels and for different positions within the pixel of interest in order to determine the maximum and mean misregistrations. Note that only positions within the pixel of interest (white area in the figure) are used for the calculations, thereby avoiding using the tails of the SPSFs where the signal may be very low and difficult to measure.
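In the SPSF framework, this ratio can be computed as sketched below (our own illustrative code; the exact maximum and mean definitions used in the simulations are given in Sec. 4.8, and the array layout is an assumption).

```python
import numpy as np

def method2_relative_deviation(spsf, positions, pixel_half_width=0.5):
    """Method 2 in the SPSF framework: relative deviation of each channel's
    SPSF from the mean ("ideal") SPSF, evaluated only for point source
    positions inside the pixel of interest.

    spsf      -- array of shape (I, K_total): SPSF samples for I channels
    positions -- point source positions, with the pixel center at x = 0
    Returns an (I, K_inside) array with delta_S / S for every channel and
    every position inside the pixel.
    """
    inside = np.abs(positions) <= pixel_half_width
    s = spsf[:, inside]
    s_ref = s.mean(axis=0)          # black curve in Fig. 4: the ideal (zero-misregistration) case
    return (s - s_ref) / s_ref      # ratio delta_S_x^i / S_x for each channel and position
```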

4.

Simulations

We will, in the next section, examine the impact of the previously discussed misregistration components on real-world data. We will also assess how well the two characterization methods quantify misregistration, i.e., how well the lab measurements correlate with misregistration errors in real scenes. In order to do this, we have simulated different cameras with various degrees of camera imperfections. The simulations focus on spatial misregistration in the across-track direction. Applicability of the results for spatial misregistration in the along-track direction, as well as spectral misregistration, is discussed briefly in Secs. 5.6 and 6. The simulations were done in Virtual Camera software11 using images of real scenes and PSFs of real hyperspectral cameras as input. The details of the simulations will be described below.

4.1.

Simulated Misregistration Components (i.e., Camera Imperfections)

The following camera imperfections were simulated:

  • - keystone (PSF position changing as a function of wavelength),

  • - PSF width changing as a function of wavelength, and

  • - PSF shape changing as a function of wavelength.

The performance of 59 different cameras was simulated. Each camera has a total of 21 spectral channels where one or more imperfections are present to various degrees. All imperfections and their magnitudes in the tested cameras are described below. For details on the PSFs used for the simulations, see Figs. 5 and 6. Table 1 gives an overview of the imperfections in all tested cameras. Image noise (photon noise, etc.) is not included in the simulations, since we want to assess the impact of misregistration errors isolated from other possible sources of error in the data.

Fig. 5

Different real PSFs used in the simulations. The PSF width is expressed as the MTF value at the Nyquist frequency. (a)–(d) Symmetric PSFs with varying widths, (e), (f) two PSFs with the same width (MTF=0.25) but distinctly different shapes (one very asymmetric and the other quite symmetric) at two different wavelengths λ1 and λ101, and (g), (h) two PSFs with the same width (MTF=0.50) but, again, two distinctly different shapes at two different wavelengths.


Fig. 6

Example of morphed PSFs across 101 spectral channels. (a) The first spectral channel with a PSF based on the PSF in Fig. 5(e) (MTF=0.25, λ1), (b)–(e) PSFs morphed between the first and the last spectral channel, and (f) the last spectral channel with a PSF based on the PSF in Fig. 5(f) (MTF=0.25, λ101).


Table 1

Main characteristics of the cameras that are simulated.

Group #   Camera #   PSF (expressed as MTF at Nyquist frequency)   Keystone (% of a pixel)
01        1 to 5     0.50                                          10 to 50
02        6 to 10    0.25 to 0.70                                  0
03        11 to 15   0.25 to 0.70                                  10 to 50
04        16 to 20   0.25 (λ1 to λ101)                             0
05        21 to 25   0.25 (λ1 to λ101)                             10 to 50
06        26 to 30   0.50 (λ1 to λ101)                             0
07        31 to 35   0.50 (λ1 to λ101)                             10 to 50
08        36 to 40   0.25 to 0.70                                  10
09        41 to 45   0.25 (λ1 to λ101)                             10
10        46 to 50   0.50 (λ1 to λ101)                             10
11        51 to 55   0.25 to 0.70                                  30
12        56 to 60   0.25 (λ1 to λ101)                             30
13        61 to 65   0.50 (λ1 to λ101)                             30

The cameras are divided into 13 groups with 5 cameras in each group. When the cameras in a group have varying keystones, the first camera in the group has the smallest keystone and the last camera has the largest keystone. For instance, in group 7, camera 31 has 10% keystone and camera 35 has 50% keystone. The same applies for PSF width and shape variations. Note that the following six pairs of cameras are identical: 11/36, 21/41, 31/46, 13/53, 23/58, 33/63, so that there is a total of 59 different cameras.

4.1.1.

Keystone

The keystone is modeled as an offset in the PSFs’ spatial position that varies linearly across all spectral channels. For example, a keystone of 10% (of a pixel) means that the spatial offset is −0.05 pixel in spectral channel #1, 0 pixels in spectral channel #11, and +0.05 pixel in spectral channel #21. Cameras with keystones from 10% to 50% were examined. In a group of five cameras with varying keystones (Table 1), the first camera will have the smallest keystone (10%) and the fifth camera will have the largest keystone (50%). The second, third, and fourth cameras will have keystones of 20%, 30%, and 40%, respectively.
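A minimal sketch of this linear keystone model (function and parameter names are our own):

```python
import numpy as np

def keystone_offsets(keystone_fraction, n_channels=21):
    """Per-channel spatial offsets (in pixels) for a keystone that varies
    linearly across the spectral channels, e.g., keystone_fraction = 0.10
    gives offsets from -0.05 pixel (channel 1) to +0.05 pixel (channel 21)."""
    return np.linspace(-keystone_fraction / 2, keystone_fraction / 2, n_channels)

print(keystone_offsets(0.10)[[0, 10, 20]])   # [-0.05  0.    0.05]
```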

4.1.2.

PSF width and shape changes across spectral channels

The PSFs used in the simulations are based on real PSFs of different shapes and widths, see Fig. 5. For some cameras, the PSF shape and width are kept constant across all spectral channels. For other cameras, either the width or the shape varies across spectral channels. The varying width or shape of the PSF is obtained by selecting one PSF for the first spectral channel and a second PSF for the last spectral channel, then morphing between the two PSFs for the channels in between. This approach makes it possible to create PSF sets that change across the spectral range in a controlled way, while addressing the misregistration factor being tested: for example, gradually increasing the PSF asymmetry across the spectral range while keeping the effective width of the PSF approximately constant.

For the simulations, the PSFs are morphed across 101 spectral channels, see Fig. 6. Then for a given camera, 21 of the channels are selected. In a group of five cameras (Table 1), the first camera will use spectral channels [41, 42, …, 51, …, 60, 61], the second camera will use spectral channels [31, 33, …, 51, …, 69, 71], the third camera will use spectral channels [21, 24, …, 51, …, 78, 81], the fourth camera will use spectral channels [11, 15, …, 51, …, 87, 91], and the fifth camera will use spectral channels [1, 6, …, 51, …, 96, 101]. The first camera will then have the smallest PSF variation and the fifth camera will have the largest PSF variation.
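The sketch below illustrates one way this could be implemented. The paper does not spell out the morphing algorithm, so simple linear blending between the two endpoint PSFs is our own assumption; the channel-subset selection reproduces the lists given above (channel numbers are 1-based, as in the text).

```python
import numpy as np

def morph_psfs(psf_first, psf_last, n_channels=101):
    """Generate PSFs that gradually change from psf_first to psf_last.
    Linear blending is assumed here purely for illustration; each morphed
    PSF is renormalized to unit sum."""
    weights = np.linspace(0.0, 1.0, n_channels)
    psfs = [(1 - w) * psf_first + w * psf_last for w in weights]
    return [p / p.sum() for p in psfs]

def select_channels(camera_index_in_group):
    """Pick 21 of the 101 morphed channels so that camera 1 in a group has
    the smallest PSF variation and camera 5 the largest (Sec. 4.1.2)."""
    half_span = 10 * camera_index_in_group                 # 10, 20, ..., 50
    return np.rint(np.linspace(51 - half_span, 51 + half_span, 21)).astype(int)

print(select_channels(1))   # [41 42 ... 60 61]
print(select_channels(5))   # [ 1  6 ... 96 101]
```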

4.1.3.

Combining different misregistration factors

Many of the tested cameras have more than one misregistration factor present. For example, for some cameras, both the PSF position (i.e., keystone) and the PSF width may change across spectral channels. For other cameras, keystone is combined with PSF shape differences.

4.2.

Tested Cameras

The tested cameras can be divided into 13 groups. Each group explores a single misregistration factor or a particular combination of such factors. There are five cameras in each group. The misregistration factor (or factors) of interest gradually changes from the smallest value in the first camera of a group to the largest value in the fifth camera of the same group. Table 1 lists all tested cameras with a brief description of all tested misregistration factors.

4.3.

Test Scenes

Two airborne scenes were chosen as camera input, see Fig. 7. The scenes have many objects of various sizes and contrasts. If spatial misregistration is present in a hyperspectral camera, these scene features will cause errors in the acquired spectra.

Fig. 7

Two airborne scenes used as input for the simulations: (a) scene A and (b) scene B. Scene A contains both manmade and natural objects. Scene B contains only natural objects. The images were acquired by HySpex Mjolnir V-1240 mounted in a gimbal on an octocopter. One of the near-infrared channels was used for the simulations. The images are radiometrically calibrated, but not georeferenced.


The first scene (scene A) has both manmade and natural objects. The features in this scene that are expected to cause large errors in the acquired data due to spatial misregistration are:

  • - high-contrast borders between relatively large areas of low contrast and

  • - small high-contrast objects.

The second scene (scene B) has natural objects only. The features expected to cause large errors here are mainly small high-contrast objects.

4.4.

Recording the Images

The input scenes are processed by the Virtual Camera software. A single spectral channel in the input scene image is used for all 21 channels in the simulated camera, i.e., all input spectral channels are identical. This approach ensures that we have precise knowledge about all spectra in the scene and keeps the input data clean of any misregistration present in the original hyperspectral data. Keystone and optical blur (PSF) are then added to the input, and the resulting image is recorded into the final datacube.

Note that the image used as input scene contains 7 times more pixels in the across-track direction than the recorded image. This makes it possible to simulate subpixel details (across-track) in the depicted scene.
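The per-channel recording step can be sketched as follows. This is a simplified stand-in for the Virtual Camera processing, which is not described at this level of detail in the paper; the function name, the order of operations (blur, subpixel shift, 7:1 binning), and the interpolation used for the shift are our assumptions.

```python
import numpy as np

def record_channel(scene_line_hi, psf_hi, offset_pixels, oversample=7):
    """Record one across-track line in one spectral channel: blur the
    oversampled scene line with the channel's PSF, apply the keystone
    offset, then integrate over blocks of `oversample` subpixels.

    scene_line_hi -- input scene line, 7x oversampled across-track
    psf_hi        -- channel PSF sampled on the same subpixel grid
    offset_pixels -- keystone offset of this channel, in (camera) pixels
    """
    blurred = np.convolve(scene_line_hi, psf_hi / psf_hi.sum(), mode="same")
    shift_sub = offset_pixels * oversample            # keystone in subpixels
    x = np.arange(blurred.size)
    shifted = np.interp(x - shift_sub, x, blurred)    # subpixel shift of the blurred line
    n_pix = blurred.size // oversample
    return shifted[: n_pix * oversample].reshape(n_pix, oversample).sum(axis=1)
```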

4.5.

Errors in the Recorded Images

Due to spatial misregistration (keystone and PSF differences), there will be errors in the recorded spectra. We want to estimate the size of these errors.

For a given spatial pixel, the true spectrum is a straight horizontal line, as shown in Fig. 8(a). This is because the input scene is identical in all spectral channels. How high or low this line lies depends on the content of that scene pixel. The corresponding true spectrum in the final data cube (also a horizontal line) may lie at a somewhat different height due to optical blur that causes leakage of the content of neighboring scene pixels into the pixel of interest [Fig. 8(b), red line]. This difference in height does not reflect an actual error in the spectrum. The recorded spectrum is still correct, i.e., the spectral integrity is preserved. However, due to the presence of keystone and PSF variations across spectral channels, the actual value recorded in different channels [red dots in Fig. 8(b)] will deviate from the true value (red line). This deviation represents actual errors in the recorded spectrum.

Fig. 8

Example of (a) true spectrum (blue line) in input scene versus (b) true (red line) and recorded (red dots) spectrum in the final data cube.


The relative error $\Delta E_m^i$ for spatial pixel #m in spectral channel #i is given by

Eq. (2)

$$\Delta E_m^i = \frac{E_m^i - \bar{E}_m}{\bar{E}_m},$$

where $E_m^i$ is the energy recorded in spatial pixel #m in spectral channel #i and $\bar{E}_m$ is the mean energy recorded in spatial pixel #m given by

Eq. (3)

$$\bar{E}_m = \frac{1}{I}\sum_{i=1}^{I} E_m^i,$$

where $I=21$ is the number of spectral channels. The mean value $\bar{E}_m$ is the true value, i.e., the value that would have been recorded in every spectral channel for pixel #m if there was no misregistration (keystone or PSF variations) in the system.

We can now calculate the standard deviation of the relative errors $\Delta E_m^{\mathrm{std}}$ for pixel #m

Eq. (4)

$$\Delta E_m^{\mathrm{std}} = \sqrt{\frac{1}{I}\sum_{i=1}^{I}\left(\Delta E_m^i\right)^2},$$

as well as the maximum relative error $\Delta E_m^{\max}$ for pixel #m

Eq. (5)

$$\Delta E_m^{\max} = \frac{1}{2}\cdot\frac{E_m^{\max} - E_m^{\min}}{\bar{E}_m},$$

where $E_m^{\min}$ and $E_m^{\max}$ are the minimum and maximum energies recorded across all spectral channels for pixel #m.

For each camera, we have determined the maximum relative error $\Delta E^{\max}$ among all pixels in the image

Eq. (6)

$$\Delta E^{\max} = \max\left(\Delta E_m^{\max}\right), \quad m = 1,2,\ldots,M,$$

where $M$ is the total number of pixels in the image. We have also calculated the mean of the standard deviation of the relative errors $\overline{\Delta E}$ across all pixels in the image

Eq. (7)

$$\overline{\Delta E} = \frac{1}{M}\sum_{m=1}^{M}\Delta E_m^{\mathrm{std}}.$$
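For reference, Eqs. (2)–(7) can be evaluated over a recorded datacube with a few lines of code. This is a sketch under the assumption that the cube is arranged as spatial pixels by spectral channels; the function name is our own.

```python
import numpy as np

def scene_misregistration_errors(cube):
    """Relative misregistration errors per Eqs. (2)-(7).

    cube -- recorded energies with shape (M, I): M spatial pixels, I channels
    Returns (delta_E_max, delta_E_mean): the maximum relative error over all
    pixels, and the mean of the per-pixel standard deviations.
    """
    E_mean = cube.mean(axis=1, keepdims=True)             # Eq. (3)
    rel_err = (cube - E_mean) / E_mean                     # Eq. (2)
    std_per_pixel = np.sqrt((rel_err ** 2).mean(axis=1))   # Eq. (4)
    max_per_pixel = 0.5 * (cube.max(axis=1) - cube.min(axis=1)) / E_mean[:, 0]  # Eq. (5)
    return max_per_pixel.max(), std_per_pixel.mean()       # Eqs. (6) and (7)
```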

4.6.

Simulated Point Source Measurements for Camera Characterization

The two characterization methods 1 and 2 require point source measurements in order to predict misregistration errors for a camera. We have, therefore, simulated point source measurements for all the cameras listed in Table 1.

The point source measurements are performed by moving a (simulated) point source in small steps across the pixel of interest in the across-track direction. In the simulations, we use 21 steps per pixel. Keystone and PSF variations across spectral channels are added to the input before recording into the final data cube. The measurements start and end well outside the pixel of interest in order to capture the whole SPSF. The SPSF will be used as a basis for the misregistration calculations by methods 1 and 2. Note that method 1 requires that the whole SPSF is captured, whereas method 2 only requires the part of the SPSF that is inside the pixel of interest to be captured.

4.7.

Misregistration: Method 1

Method 1 is based on point source measurements and the calculations are performed directly on the resulting SPSFs.6 The SPSFs are normalized to 1 in each channel, i.e., for every point source position, the total energy recorded in a given channel is normalized to 1. When referring to recorded energy in the equations below, this is always the underlying assumption.

The misregistration $\Delta E_m^{ij}$ between spectral channels #i and #j for spatial pixel #m (the pixel of interest) is calculated from

Eq. (8)

$$\Delta E_m^{ij} = \frac{1}{2}\sum_{k=1}^{N}\left|E_{mk}^{i} - E_{mk}^{j}\right|\cdot\Delta x.$$

Here $N$ is the number of point source positions where $E_{mk}^{i}, E_{mk}^{j} > 0$, $E_{mk}^{i}$ is the energy recorded in spatial pixel #m in spectral channel #i when the point source is at position #k, and $\Delta x = 1/K$ where $K=21$ is the total number of point source positions per pixel. Note that $E_{mk}^{i}$ is the same as (is equal to) the SPSF value in spectral channel #i for point source position #k, when pixel #m is the pixel of interest. The sum is taken over all point source positions where the SPSF has a value larger than zero.

The maximum misregistration $\Delta E_m^{\max}$ for pixel #m can then be calculated from

Eq. (9)

$$\Delta E_m^{\max} = \max\left(\Delta E_m^{ij}\right),$$

across all band pairs (i,j), whereas the mean misregistration $\overline{\Delta E}_m$ for pixel #m is taken as the mean across all band pairs and can be calculated from

Eq. (10)

$$\overline{\Delta E}_m = \frac{1}{\sum_{v=1}^{I-1}(I-v)}\sum_{i=1}^{I-1}\sum_{j=i+1}^{I}\Delta E_m^{ij},$$

where $I=21$ is the total number of spectral channels.
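A sketch of the full method 1 calculation for one spatial pixel, following Eqs. (8)–(10), is given below. The array layout is an assumption, and summing over all positions is equivalent to Eq. (8) because positions where both SPSFs are zero contribute nothing.

```python
import numpy as np
from itertools import combinations

def method1_from_spsfs(spsf, dx):
    """Method 1 misregistration for one spatial pixel, per Eqs. (8)-(10).

    spsf -- array of shape (I, K_total): SPSF samples for I spectral channels
            over all point source positions (the whole SPSF must be captured)
    dx   -- point source step, in pixels (1/K with K = 21 steps per pixel)
    Returns (max_misregistration, mean_misregistration) over all band pairs.
    """
    pair_values = [
        0.5 * np.sum(np.abs(spsf[i] - spsf[j])) * dx        # Eq. (8)
        for i, j in combinations(range(spsf.shape[0]), 2)
    ]
    return max(pair_values), float(np.mean(pair_values))    # Eqs. (9) and (10)
```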

4.8.

Misregistration: Method 2

Method 2 is based on point source measurements and the calculations can be performed directly on the resulting SPSFs.7 The SPSFs are normalized to 1 in each channel, i.e., for every point source position the total energy recorded in a given channel is normalized to 1. When referring to recorded energy in the equations below, this is always the underlying assumption.

The misregistration $\Delta E_{mk}^{i}$ for spatial pixel #m (the pixel of interest) in spectral channel #i when the point source is at position #k is given by

Eq. (11)

$$\Delta E_{mk}^{i} = \frac{E_{mk}^{i} - \bar{E}_{mk}}{\bar{E}_{mk}},$$

where $E_{mk}^{i}$ is the energy recorded in spatial pixel #m in spectral channel #i when the point source is at position #k, and $\bar{E}_{mk}$ is the mean energy recorded in spatial pixel #m when the point source is at position #k, given by

Eq. (12)

$$\bar{E}_{mk} = \frac{1}{I}\sum_{i=1}^{I} E_{mk}^{i},$$

where $I=21$ is the number of spectral channels. Note that $E_{mk}^{i}$ is the same as (is equal to) the SPSF value in spectral channel #i for point source position #k when pixel #m is the pixel of interest.

We can now calculate the standard deviation of the relative errors $\Delta E_{mk}^{\mathrm{std}}$ for pixel #m when the point source is at position #k

Eq. (13)

$$\Delta E_{mk}^{\mathrm{std}} = \sqrt{\frac{1}{I}\sum_{i=1}^{I}\left(\Delta E_{mk}^{i}\right)^2},$$

as well as the maximum relative error $\Delta E_{mk}^{\max}$ for pixel #m when the point source is at position #k

Eq. (14)

$$\Delta E_{mk}^{\max} = \frac{1}{2}\cdot\frac{E_{mk}^{\max} - E_{mk}^{\min}}{\bar{E}_{mk}},$$

where $E_{mk}^{\min}$ and $E_{mk}^{\max}$ are the minimum and maximum energies recorded across all spectral channels for pixel #m when the point source is at position #k.

We can then calculate the maximum misregistration $\Delta E_m^{\max}$ for pixel #m (across all point source positions within the pixel)

Eq. (15)

$$\Delta E_m^{\max} = \max\left(\Delta E_{mk}^{\max}\right), \quad k = 1,2,\ldots,K,$$

as well as the mean of the standard deviation for the misregistration $\overline{\Delta E}_m$ for pixel #m (across all point source positions within the pixel)

Eq. (16)

$$\overline{\Delta E}_m = \frac{1}{K}\sum_{k=1}^{K}\Delta E_{mk}^{\mathrm{std}},$$

where $K=21$ is the number of point source positions within the pixel of interest.
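The corresponding sketch for method 2, following Eqs. (11)–(16), again assumes a channels-by-positions array that contains only the point source positions inside the pixel of interest.

```python
import numpy as np

def method2_from_spsfs(spsf_inside):
    """Method 2 misregistration for one spatial pixel, per Eqs. (11)-(16).

    spsf_inside -- array of shape (I, K): SPSF samples for I spectral channels
                   at the K point source positions inside the pixel of interest
    Returns (max_misregistration, mean_misregistration).
    """
    E_mean = spsf_inside.mean(axis=0)                       # Eq. (12), per position
    rel = (spsf_inside - E_mean) / E_mean                   # Eq. (11)
    std_per_pos = np.sqrt((rel ** 2).mean(axis=0))          # Eq. (13)
    max_per_pos = 0.5 * (spsf_inside.max(axis=0) - spsf_inside.min(axis=0)) / E_mean  # Eq. (14)
    return max_per_pos.max(), std_per_pos.mean()            # Eqs. (15) and (16)
```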

5.

Simulation Results and Analysis

In this section, we will present and discuss the results from the simulations.

5.1.

Maximum Misregistration Errors in Real Scenes

We will first verify that each of the spatial misregistration components (keystone, PSF width, and PSF shape) causes errors in the acquired spectra and check how large these errors are. We will look at the relative errors and, for now, consider the maximum relative error for each camera in a given scene.

Let us consider the five cameras of group 1 in Table 1. These five cameras have keystones of 0.1 to 0.5 pixel, respectively. The PSF widths and shapes correspond to modulation transfer function (MTF) = 0.5 at the Nyquist frequency and are identical in all five cameras and in all spectral channels, i.e., the only error source is keystone. The results of the simulations for these five cameras are presented in Fig. 9. The red curve shows how the magnitude of maximum errors in the acquired spectra depends on the camera’s keystone value. Naturally, a larger keystone causes larger maximum spectral errors.

Fig. 9

Maximum errors in scene A caused by keystone (red curve, five cameras from group 1 in Table 1 with progressively increasing keystone); by differences in PSF width (blue curve, five cameras from group 2 in Table 1 with progressively increasing PSF width differences, see also Table 2 for the PSF width differences in each of the five cameras, expressed as MTF variations at the Nyquist frequency); by differences in PSF shape (green curve, five cameras from group 6 in Table 1 with progressively increasing PSF shape differences).


Let us consider a different example with five cameras that have no keystone, and where all five cameras have identical PSF shapes in all spectral channels. However, the PSF width in each camera changes across all spectral channels (group 2 in Table 1). Table 2 shows how much the MTF at Nyquist frequency changes across the spectral range for each camera. The blue curve in Fig. 9 shows how these differences in MTF (i.e., in PSF width) correspond to the maximum errors in the acquired spectra.

Table 2

PSF width changes across the spectral range for five cameras in group 2 (Table 1). The PSF width values are expressed as MTF values. A larger MTF value corresponds to a narrower PSF.

Camera #          6              7               8              9               10
MTF variations    0.43 to 0.52   0.385 to 0.565  0.34 to 0.61   0.295 to 0.655  0.25 to 0.70

In the third example, the five tested cameras have no keystone and a constant PSF width corresponding to MTF=0.5 at the Nyquist frequency. However, for each camera, the PSF shape varies across spectral channels (group 6 in Table 1). In camera 30, the variations are the largest: Figs. 5(g) and 5(h) show the two PSFs with the largest difference in shape for this camera. PSFs for all other spectral channels are derived by morphing between these two PSFs [see Figs. 6(a)–6(f) for an example of such morphing]. The PSF shape variations in the other four cameras are smaller, as described in Sec. 4.1. The results of this simulation are presented in Fig. 9, where the green curve shows how the magnitude of the maximum errors in the acquired spectra depends on the differences in the PSF shape.

Based on the results above, it seems clear that misregistration is potentially an important source of error in hyperspectral data that may easily exceed the contribution of other error sources such as photon noise.

5.2.

Maximum Misregistration Errors in Real Scenes versus Camera Characterization of Misregistration

We will now compare maximum misregistration errors in real scenes to camera characterization by methods 1 and 2. Spectral errors in real scenes—caused by misregistration factors of different types and magnitude—were investigated for 59 cameras. The same 59 cameras were also characterized by methods 1 and 2. We can now evaluate how well each characterization method predicts the errors caused by misregistration when acquiring spectra of real scenes.

Note that the 59 simulated cameras have misregistration errors that are different not only in magnitude but also in nature (see Fig. 9 for 15 camera examples). In a nutshell, we need to find a way to evaluate two lab characterization methods by comparing their results with 13 datasets (each containing five cameras), where each dataset has errors of different magnitudes and the nature of the errors is different between different datasets. This does not appear to be an easy task. How can we do this comparison without resorting to multidimensional graphs?

First, we place points on the graph, with each point corresponding to the maximum relative error in the spectral data captured from a real scene for a given camera. There are 59 cameras, so there are 59 points. The cameras are ordered according to the magnitude of the maximum error: the camera with the smallest maximum error is the first from the left, the camera with the largest maximum error is farthest to the right, with all other cameras in between—ordered according to the magnitude of their maximum errors. Next, we plot the misregistration for each camera as calculated by method 1, using the same camera order that was used for the errors in the real scene, and the same is done for method 2. Figure 10 shows the result for scene A, where any given index along the horizontal axis shows (for the same camera):

  • - maximum error in the acquired spectra in scene A (red),

  • - misregistration according to method 1 (blue), and

  • - misregistration according to method 2 (green).

Fig. 10

Maximum errors in scene A. The cameras are ordered according to the magnitude of their maximum error in scene A (index #1 represents the camera with the smallest maximum error, whereas index #59 represents the camera with the largest maximum error). The red curve shows the maximum error for each of the 59 tested cameras when depicting scene A. The blue curve shows the maximum error of the same 59 cameras as predicted by method 1. The green curve shows the maximum error of the same 59 cameras as predicted by method 2.


In order to reliably predict the relative performance between different cameras, the misregistration curve calculated by a given characterization method should follow the red curve, i.e., each given point of the method curve should be higher than the adjacent point to the left and lower than the adjacent point to the right. If this is the case, the method is good at predicting a camera’s performance on real data relative to other cameras. We see that, although there are noticeable deviations from this criterion, method 1 and method 2 follow the shape of the red curve reasonably well. Method 2 has the advantage of predicting the magnitude of the maximum error in the real data more accurately.

Let us check the two characterization methods on scene B (Fig. 11). We see that the conclusions are still valid: both methods follow the red curve reasonably well with method 2 better predicting the magnitude of the maximum errors for each camera. Note, however, that for scene B, method 1 creates a smoother curve that better follows the shape of the red curve.

Fig. 11

Maximum errors in scene B. The cameras are ordered according to the magnitude of their maximum error in scene B (index #1 represents the camera with the smallest maximum error, while index #59 represents the camera with the largest maximum error). The red curve shows the maximum error for each of the 59 tested cameras when depicting scene B. The blue curve shows the maximum error of the same 59 cameras as predicted by method 1. The green curve shows the maximum error of the same 59 cameras as predicted by method 2.


At first glance, it might seem that looking at the maximum error is not a reliable approach, because such an error would occur in a single specific location for a given scene and could have a more or less random value. However, this turns out not to be the case. We have looked into the location and magnitude of the largest errors in the two scenes. Regarding the location of maximum errors, we find that the maximum errors occur in different locations of a given scene—depending on the nature of the camera’s spatial misregistration. Regarding the actual values of the largest errors, we find that the five largest errors (for a given camera) occur in different locations of a scene for different cameras. Further, for a given camera, we find that the five largest errors are very similar to each other in magnitude. A typical difference between the largest and the fifth largest error (for a given camera and scene) was found to be less than 3% of the error value, whereas the maximum observed difference was just above 11% of the error value.

Note that the two methods look at different scene features as sources of the maximum errors. Method 1 considers a sharp border between two different materials to give the largest error (Sec. 3.2), whereas method 2 considers a small (smaller than a pixel) bright object to give the maximum error (Sec. 3.3). In the two real scenes analyzed in this work, both the borders and the small (subpixel sized) objects were found to cause the largest errors—depending on the nature of spatial misregistration in the camera.

The cameras that we have tested so far have one thing in common: their optics is reasonably sharp with MTF at the Nyquist frequency of 0.25 in the worst case (0.5 in many other cases). This is typical for optics that is designed to retrieve as much information as possible from the scene when using a given sensor.12 However, there are cameras where the resolution of the optics is significantly lower than the resolution of the sensor. We will, therefore, test one more set of five cameras where the PSF is so large that the MTF reaches 0 approximately at the Nyquist frequency [or, more precisely, at 0.95 of the Nyquist frequency, see Fig. 5(d) with the corresponding PSF]. The PSF size and shape will be identical for all cameras in all channels, whereas the keystone will be 10%, 20%, 30%, 40%, and 50% (of a pixel) in each camera, respectively. Let us now add errors from these five cameras to the graph (Fig. 12). This brings the total number of tested cameras to 64.

Fig. 12

Maximum errors: (a) scene A and (b) scene B. The cameras are ordered according to the magnitude of their maximum error in scene A and scene B, respectively (index #1 represents the camera with the smallest maximum error, whereas index #64 represents the camera with the largest maximum error). The red curve shows the maximum error for each of the 64 tested cameras when depicting the corresponding scene. The blue curve shows the maximum error of the same 64 cameras as predicted by method 1. The green curve shows the maximum error of the same 64 cameras as predicted by method 2.


Method 1 (blue curve) still follows the shape of the red curve (real scene) reasonably well, but this is no longer the case for method 2. Although method 2 is better at predicting the maximum errors for sharper cameras, it clearly underestimates the misregistration errors in the five blurry cameras [indices 5, 14, 26, 34, and 44 in Fig. 12(a); indices 5, 16, 24, 37, and 44 in Fig. 12(b)]. Method 1, therefore, seems to more reliably predict the relative performance between different cameras when more blurry cameras are also considered. In other words, if a camera has smaller maximum misregistration according to method 1, then this camera is likely to also have smaller maximum misregistration errors in an image of a real scene. However, method 1 consistently underestimates the magnitude of the maximum errors. In the next section, we will show how the two methods can be combined in order to optimize the ability to predict both the magnitude of the maximum misregistration errors and the camera performance relative to other cameras.

5.3.

Combining the Two Methods for more Accurate Camera Characterization

It is possible to improve the lab characterization of spatial misregistration by combining the two characterization methods. Note that even though method 2 may require fewer measurements for camera characterization, both methods can easily use the same measurement set (see Sec. 3 and Refs. 6 and 7). Below we will look at three possible approaches.

5.3.1.

Approach 1

Approach 1 combines the more accurate prediction of maximum errors for sharper cameras by method 2 with the more accurate prediction of maximum errors for blurry cameras by method 1. For a given camera, the misregistration values calculated by each method are compared and the largest value of the two is considered to be that camera’s misregistration. Figure 13 shows misregistration errors calculated by method 1 (blue) and approach 1 (green) compared to errors in a real scene (red). We see that the errors of blurry cameras are still underestimated, but not as much as when method 2 is used alone.

Fig. 13

Maximum errors: (a) scene A and (b) scene B. The cameras are ordered according to the magnitude of their maximum error in scene A and scene B, respectively (index #1 represents the camera with the smallest maximum error, whereas index #64 represents the camera with the largest maximum error). The red curve shows the maximum error for each of the 64 tested cameras when depicting the corresponding scene. The blue curve shows the maximum error of the same 64 cameras as predicted by method 1. The green curve shows the maximum error of the same 64 cameras as predicted by approach 1.


5.3.2.

Approach 2

We have previously discussed how method 1 seems to consistently underestimate the maximum misregistration errors. However, if the misregistration is always underestimated by roughly the same percentage, then the misregistration prediction by method 1 could be improved by multiplying the calculated misregistration value by a constant factor. For scene A, a factor of 1.25 seems to work well, see Fig. 14(a). For this to be useful, however, the same factor must also work for other scenes. Figure 14(b) shows the results when a factor of 1.25 is used for scene B. Although this is a significantly different scene (as discussed in Sec. 4.3), we see that the factor 1.25 works well here, too. Results for approach 1 are also shown for comparison. Approach 2 (blue) seems to work better than approach 1 (green).

Fig. 14

Maximum errors: (a) scene A and (b) scene B. The cameras are ordered according to the magnitude of their maximum error in scene A and scene B, respectively (index #1 represents the camera with the smallest maximum error, whereas index #64 represents the camera with the largest maximum error). The red curve shows the maximum error for each of the 64 tested cameras when depicting the corresponding scene. The green curve shows the maximum error of the same 64 cameras as predicted by approach 1. The blue curve shows the maximum error of the same 64 cameras as predicted by approach 2.


As previously noted, method 1 compares cameras to each other well, but it does not predict the actual size of the observed maximum errors. If a camera user would like to know the approximate magnitude of the expected maximum errors, this number can be found by multiplying the misregistration calculated by method 1 by a factor of 1.25.

5.3.3.

Approach 3

In approach 3, we compare the misregistration values calculated by method 2 and approach 2, and choose the largest value of the two to be the measured misregistration. Figure 15 shows the results. Results for approach 2 (light blue) are also included in the figure for comparison. Approach 3 (black) seems to work well for all tested cameras on both scenes and gives the best results of all three approaches. The magnitude of the maximum errors is now predicted reasonably well, and the misregistration curve also follows the shape of the red curve (errors in the scene) quite well.
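The three approaches reduce to simple combinations of the two lab-measured values. A minimal sketch follows; the 1.25 scale factor is the empirical value found above for these two scenes, and the function names are our own.

```python
def approach1(m1, m2):
    """Approach 1: the larger of the method 1 and method 2 values."""
    return max(m1, m2)

def approach2(m1, scale=1.25):
    """Approach 2: the method 1 value scaled by an empirical factor."""
    return scale * m1

def approach3(m1, m2, scale=1.25):
    """Approach 3: the larger of the method 2 value and the scaled method 1 value."""
    return max(m2, scale * m1)
```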

Fig. 15

Maximum errors: (a) scene A and (b) scene B. The cameras are ordered according to the magnitude of their maximum error in scene A and scene B, respectively (index #1 represents the camera with the smallest maximum error, whereas index #64 represents the camera with the largest maximum error). The red curve shows the maximum error for each of the 64 tested cameras when depicting the corresponding scene. The light blue curve shows the maximum error of the same 64 cameras as predicted by approach 2. The black curve shows the maximum error of the same 64 cameras as predicted by approach 3.


5.4.

Mean Misregistration Errors in Real Scenes

So far, we have been discussing maximum errors—the errors that occur when a very high-contrast feature (a border or a small subpixel sized object) is placed in the worst position relative to the pixel center. When looking at the maximum misregistration error, we consider just two spectral channels (the two channels with the largest misregistration error between them). However, all other channels will have smaller errors. Therefore, in addition to knowing the maximum misregistration, it may be useful to know the mean misregistration when errors in all spectral channels and for various positions of high-contrast scene features are considered.

We will first check the magnitude of the errors in the acquired spectra, which are caused by each of the spatial misregistration components (keystone, PSF width, and PSF shape). Just as we did in Sec. 5.1 for the maximum errors, we will use the same 15 cameras and look at the relative errors. However, this time we will consider the mean relative error for each camera in a given scene. Figure 16 shows the results for scene A.

Fig. 16

Mean errors in scene A caused by keystone (red curve, five cameras from group 1 in Table 1 with progressively increasing keystone); by differences in PSF width (blue curve, five cameras from group 2 in Table 1 with progressively increasing PSF width differences, see also Table 2 for the PSF width differences in each of the five cameras, expressed as MTF variations at the Nyquist frequency); by differences in PSF shape (green curve, five cameras from group 6 in Table 1 with progressively increasing PSF shape differences).


We can clearly see that all three misregistration components affect the mean misregistration error in the spectra when acquiring an image of a real scene. The mean errors are significantly smaller than the maximum errors (Fig. 9). This happens for several reasons. First, not all spectral channels have equally large misregistration factors (keystone and PSF differences). Second, many scene features have low contrast, and therefore, cause relatively small errors. Finally, various scene features are positioned randomly relative to the pixel grid and the resulting misregistration errors are smaller for some of the positions than for others.

5.5.

Mean Misregistration Errors in Real Scenes versus Camera Characterization of Misregistration

We will now compare the mean misregistration predicted by methods 1 and 2 with the mean misregistration errors in the two scenes. This will be done for 64 different cameras—similar to what we did in Sec. 5.2 for maximum misregistration. Figure 17 shows the results for both scenes. The red curve shows the mean misregistration errors in the scene, whereas the blue and green curves show the mean misregistration measured by methods 1 and 2, respectively. The data are arranged in the same way as before: the cameras are ordered according to the magnitude of the mean errors, where the camera with the smallest mean error is the first from the left, and the camera with the largest mean error is the farthest to the right. We see that the red curve is much lower than the blue and green curves for the two characterization methods—for both scenes. This makes sense since the scenes have both high- and low-contrast features, whereas the two methods only consider high-contrast features. What is important here is how well the curves for the two characterization methods follow the shape of the red curve for the errors in the scene, i.e., for a good characterization method, a larger measured mean misregistration should correspond to a larger mean error in the real scene.

Fig. 17

Mean errors: (a) scene A and (b) scene B. The cameras are ordered according to the magnitude of their mean error in scene A and scene B, respectively (index #1 represents the camera with the smallest mean error, whereas index #64 represents the camera with the largest mean error). The red curve shows the mean error for each of the 64 tested cameras when depicting the corresponding scene. The blue curve shows the mean error of the same 64 cameras as predicted by method 1. The green curve shows the mean error of the same 64 cameras as predicted by method 2.


We see that the curve for method 1 (blue) follows the shape of the red curve quite well for both scenes, whereas method 2 (green)—as before—underestimates the misregistration of blurry cameras (indices 6, 18, 28, 38, and 44 for scene A; indices 6, 16, 27, 37, and 44 for scene B) compared to sharper cameras. Method 2 is, therefore, less suitable for evaluation of sharp and blurry cameras simultaneously. For sharper cameras, method 2 works reasonably well, but method 1 seems to give the best prediction of camera performance relative to other cameras—with respect to mean misregistration errors.

5.6.

Spatial Misregistration in the Along-Track Direction

The simulations in this paper were done in the across-track direction where the influence of the slit can be considered negligible.12 In the along-track direction, a sufficiently narrow slit (similar to the Airy disk size and smaller13–16) will contribute to the PSF shape. Although this contribution is naturally taken into account during lab characterization, incorporating this contribution into simulations may be more challenging because the optical design software may not be capable of reproducing it. Nevertheless, since the same spatial misregistration factors are present also in the along-track direction, we expect the conclusions based on our simulations in the across-track direction to be valid in the along-track direction as well.

The majority of pushbroom hyperspectral cameras use a continuous scanning motion in the along-track direction during image acquisition. This motion blurs the SPSF in the along-track direction, and therefore, affects the errors caused by spatial misregistration. In principle, this motion should be taken into account when quantifying the misregistration.14 However, in the majority of cases, it may be more practical if the manufacturer specifies spatial misregistration without the scanning motion, since it is the user—i.e., not the manufacturer—who determines how the camera is used. The scanning movement per frame may be as large as a couple of across-track pixels or as small as zero (when the camera is moved along-track between exposures, but stays still during the image acquisition). In any case, the misregistration specification provided for a static camera will be useful for all scanning scenarios, because a camera that is the best in terms of spatial misregistration when there is no scanning motion will still be the best camera if scanning motion is present.

6.

Applicability of the Methods for Spectral Misregistration

Spectral misregistration—variable spectral shift across the field of view (smile), as well as size and shape differences of the spectral response function of the camera optics (the equivalent of the PSF) across the field of view—is another important characteristic of a hyperspectral camera. High-contrast spectral features may sometimes be shaped as a border between a bright and a dark spectral region (such as the chlorophyll edge), but small features (such as atmospheric absorption lines) seem to be more common. Based on the results for spatial misregistration (see Fig. 9 for three types of spatial misregistration errors), it should be clear that all three above-mentioned factors will cause spectral misregistration. Analogous to the SPSF in the spatial directions, a spectral response function (SRF) can be defined in the spectral direction, and both characterization methods can then be used to quantify this type of misregistration as well.6,7 For both methods, this can be done by illuminating the entire field of view of the camera with a narrowband source and scanning the wavelength of this source through the camera’s wavelength range. We have not performed simulations for spectral misregistration, but we would expect the two characterization methods to behave similarly relative to each other and compared to errors in real scenes, as we have seen for spatial misregistration.

7.

Conclusion

Using images of real scenes as input and simulating camera performance (based on real cameras’ PSFs), we have shown that three factors (keystone, PSF width differences, and PSF shape differences) all cause spatial misregistration errors in real scenes. The magnitude of the errors depends on the magnitude of each of these factors. Therefore, when designing a high-quality hyperspectral camera, all three factors should be considered. For example, there is probably little point in reducing keystone to 2% or 3% of a pixel if the PSF width and shape are very different in different spectral channels.

After seeing the actual values of errors caused by spatial misregistration, it becomes very clear that spatial misregistration is one of the key quality parameters of a hyperspectral camera that may easily exceed the contribution of photon noise and largely determines the accuracy of the acquired spectra. It is also clear that characterizing spatial misregistration by just measuring keystone is not the most accurate approach, since both the PSF width and PSF shape differences may cause errors of comparable magnitude in the acquired spectra.

The results of our simulations show that spatial misregistration can be reliably quantified in the lab. Two different characterization methods have been investigated. Both methods quantify total spatial misregistration rather than characterizing the different misregistration components separately. This makes it possible to directly compare cameras that differ from each other with respect to keystone as well as PSF width and shape variations. Method 1 provides a reliable way to compare the relative performance of all tested cameras, but it underestimates the magnitude of the maximum errors. Method 2 allows the performance of reasonably sharp cameras (with MTF ≳ 0.25 at the Nyquist frequency) to be compared to each other and predicts the magnitude of the maximum errors in the acquired spectra well for these cameras. However, method 2 is less universally applicable than method 1, since it noticeably underestimates the misregistration errors of blurry cameras relative to sharp cameras. For both mean and maximum errors, method 1 is sufficient when comparing cameras to each other and choosing the best one. By combining the two methods in a simple way, it is also possible to reliably predict the magnitude of the maximum errors in the acquired spectra for both sharp and blurry cameras. Combining the two methods also improves the accuracy of the camera comparison with respect to maximum errors.

The simulations shown in this paper were constructed to cover a wide range of situations. The two test scenes used as input contain various high-contrast spatial features that are expected to give large errors in the acquired data due to spatial misregistration. Although it is not possible to claim that these scenes cover all possible scenarios, they are quite different from one another and represent a wide range of situations relevant for hyperspectral imaging. The first scene contains high-contrast edges between relatively uniform areas as well as small, subpixel-sized objects, whereas the second scene mostly contains small objects. Both types of features are important for testing the two characterization methods, since method 1 is based on the assumption that an edge between two different materials is the largest source of error, whereas method 2 considers point sources to be the largest offender. The range of keystone values chosen for the simulations represents reasonably well the cameras of different quality that are available on the market today. Further, the camera PSFs chosen for the simulations have widths ranging from very sharp cameras to quite blurry ones; some PSFs also have the same width but distinctly different shapes. Based on this input, hyperspectral cameras with misregistration factors of different magnitudes were simulated, with each misregistration factor present alone and in combination with the other factors. In total, 64 different hyperspectral cameras with practically relevant misregistration factors were simulated and tested on the two input scenes. The results of these simulations show the impact of the misregistration factors on the accuracy of hyperspectral data and test the practical usefulness of the two lab characterization methods with respect to predicting and comparing spatial misregistration errors for different hyperspectral cameras.
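
Purely as an illustration of how such a set of simulated camera configurations can be enumerated (the keystone values, the PSF labels, and the resulting count below are placeholders and do not reproduce the 64 cameras used in this work):

```python
# Hypothetical enumeration of a test matrix of simulated cameras; all values
# are placeholders, not the configurations used in the paper.
from itertools import product

keystone_values = [0.0, 0.05, 0.1, 0.2, 0.3]                 # fraction of a pixel (assumed)
psf_variants = ["sharp", "medium", "blurry", "asymmetric"]   # width/shape cases (assumed)

cameras = [
    {"keystone": k, "psf_reference_band": a, "psf_test_band": b}
    for k, a, b in product(keystone_values, psf_variants, psf_variants)
]
print(len(cameras), "simulated camera configurations")
```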

Spectral misregistration can be analyzed and quantified in a similar way. Based on the results for spatial misregistration, we expect that the three components of spectral misregistration—variable spectral shift (smile) and variable width and shape of the spectral response function of the camera optics—will also all cause errors in the acquired spectra. Further, we expect that, with respect to spectral misregistration, the two characterization methods will behave similarly relative to each other and when compared to errors in real scenes, as we have observed for spatial misregistration.

References

1. G. Høye and A. Fridman, "Spatial misregistration in hyperspectral cameras: lab characterization and errors in real-world images," Proc. SPIE 11406, 114060D (2020). https://doi.org/10.1117/12.2557437

2. P. Mouroulis, R. O. Green, and T. G. Chrien, "Design of pushbroom imaging spectrometers for optimum recovery of spectroscopic and spatial information," Appl. Opt. 39(13), 2210–2220 (2000). https://doi.org/10.1364/AO.39.002210

3. N. Yokoya, N. Miyamura, and A. Iwasaki, "Preprocessing of hyperspectral imagery with consideration of smile and keystone properties," Proc. SPIE 7857, 78570B (2010). https://doi.org/10.1117/12.870437

4. K. Lenhard et al., "Impact of improved calibration of a NEO Hyspex VNIR-1600 sensor on remote sensing of water depth," IEEE Trans. Geosci. Remote Sens. 53(11), 6085–6098 (2015). https://doi.org/10.1109/TGRS.2015.2431743

5. M. T. Chahine et al., "Atmospheric infrared sounder (AIRS), science and measurement requirements," D-6665 (1990).

6. T. Skauli, "An upper-bound metric for characterizing spectral and spatial coregistration errors in spectral imaging," Opt. Express 20(2), 918–933 (2012). https://doi.org/10.1364/OE.20.000918

7. G. Høye, T. Løke, and A. Fridman, "Method for quantifying image quality in push-broom hyperspectral cameras," Opt. Eng. 54(5), 053102 (2015). https://doi.org/10.1117/1.OE.54.5.053102

8. A. Fridman, G. Høye, and T. Løke, "Resampling in hyperspectral cameras as an alternative to correcting keystone in hardware, with focus on benefits for the optical design and data quality," Opt. Eng. 53(5), 053107 (2014). https://doi.org/10.1117/1.OE.53.5.053107

9. H. E. Torkildsen et al., "Characterization of a compact 6-band multifunctional camera based on patterned spectral filters in the focal plane," Proc. SPIE 9088, 908819 (2014). https://doi.org/10.1117/12.2054477

10. T. Skauli, "Feasibility of a standard for full specification of spectral imager performance," Proc. SPIE 10213, 102130H (2017). https://doi.org/10.1117/12.2262785

11. G. Høye and A. Fridman, "Performance analysis of the proposed new restoring camera for hyperspectral imaging," (2010).

12. P. Mouroulis and R. O. Green, "Review of high fidelity imaging spectrometer design for remote sensing," Opt. Eng. 57(4), 040901 (2018). https://doi.org/10.1117/1.OE.57.4.040901

13. K. D. Mielenz, "Spectroscope slit images in partially coherent light," J. Opt. Soc. Am. 57(1), 66–74 (1967). https://doi.org/10.1364/JOSA.57.000066

14. J. F. Silny, "Resolution modeling of dispersive imaging spectrometers," Opt. Eng. 56(8), 081813 (2017). https://doi.org/10.1117/1.OE.56.8.081813

15. L. Zellinger and J. F. Silny, "Best practices for radiometric modeling of imaging spectrometers," Proc. SPIE 9611, 96110E (2015). https://doi.org/10.1117/12.2189966

16. P. Z. Mouroulis et al., "Optical design of a compact imaging spectrometer for planetary mineralogy," Opt. Eng. 46(6), 063001 (2007). https://doi.org/10.1117/1.2749499

Biography

Gudrun Høye received her MSc degree in physics from the Norwegian Institute of Technology in 1994 and her PhD in astrophysics from the Norwegian University of Science and Technology in 1999. She is a researcher at the Norwegian Defence Research Establishment. She has also been working part time as a researcher at Norsk Elektro Optikk. Her current research interests include hyperspectral imaging and remote sensing from small satellites.

Andrei Fridman received his MSc degree in optics from the Technical University of Fine Mechanics and Optics, St. Petersburg, Russia, in 1994. He is a senior research scientist and optical designer at Norsk Elektro Optikk. In addition to his main optical design activities, his interests include image processing.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Gudrun Høye and Andrei Fridman "Spatial misregistration in hyperspectral cameras: lab characterization and impact on data quality in real-world images," Optical Engineering 59(8), 084103 (25 August 2020). https://doi.org/10.1117/1.OE.59.8.084103
Received: 29 May 2020; Accepted: 10 August 2020; Published: 25 August 2020
Keywords: Cameras; Point spread functions; Device simulation; Optical engineering; Modulation transfer functions; Error analysis; Hyperspectral imaging
