
Mapping sea-ice types from Sentinel-1 considering the surface-type dependent effect of incidence angle

Published online by Cambridge University Press:  23 June 2020

Johannes Lohse*
Affiliation:
Department of Physics and Technology, UiT The Arctic University of Norway, 9019 Tromsø, Norway
Anthony P. Doulgeris
Affiliation:
Department of Physics and Technology, UiT The Arctic University of Norway, 9019 Tromsø, Norway
Wolfgang Dierking
Affiliation:
Department of Physics and Technology, UiT The Arctic University of Norway, 9019 Tromsø, Norway; Alfred Wegener Institute, Helmholtz Center for Polar and Marine Research, Bussestr. 24, 27570 Bremerhaven, Germany
*
Author for correspondence: Johannes Lohse, E-mail: johannes.p.lohse@uit.no

Abstract

Automated classification of sea-ice types in Synthetic Aperture Radar (SAR) imagery is complicated by the class-dependent decrease of backscatter intensity with Incidence Angle (IA). In the log-domain, this decrease is approximately linear over the typical range of space-borne SAR instruments. A global correction does not consider that different surface types show different rates of decrease in backscatter intensity. Here, we introduce a supervised classification algorithm that directly incorporates the surface-type dependent effect of IA. We replace the constant mean vector of a Gaussian probability density function in a Bayesian classifier with a linearly variable mean. During training, the classifier first retrieves the slope and intercept of the linear function describing the mean value and then calculates the covariance matrix as the mean squared deviation relative to this function. The IA dependence is no longer treated as an image property but as a class property. Based on training and validation data selected from overlapping SAR and optical images, we evaluate the proposed method in several case studies and compare it with other classification algorithms for which a global IA correction is applied during pre-processing. Our results show that the inclusion of the per-class IA sensitivity can significantly improve the performance of the classifier.

Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2020. Published by Cambridge University Press

Introduction

Continuous monitoring and mapping of sea ice are important for a variety of reasons. Besides usage in environmental and climatological studies, timely and accurate high-resolution ice charts are needed to (a) support offshore operations as well as marine traffic and navigation, (b) generate long-term statistics of sea-ice conditions in particular regions of interest and (c) assimilate high-resolution sea-ice information into numerical models. Currently, the main and often only source of information on sea-ice conditions is remote-sensing data (Zakhvatkina and others, 2019). National ice services worldwide rely, in particular, on Synthetic Aperture Radar (SAR) observations, because of the radar's continuous imaging capability during darkness and its independence of cloud conditions (Scheuchl and others, 2004; Dierking, 2010, 2013). At present, analysis of the images and production of ice charts is performed manually by ice analysts (Zakhvatkina and others, 2019). Ice chart production thus involves subjective decisions and is a time-consuming process. Yet many of the applications mentioned above require the processing of large numbers of images in near-real-time. Hence, robust and reliable automation of sea-ice type mapping is required to assist in operational ice charting.

A lot of effort has been put into research on automated or semi-automated classification of sea ice in SAR images during the last decades. Different classification methods have been tested, including Bayesian classifiers (Scheuchl and others, 2001; Moen and others, 2013), Support Vector Machines (SVM) (Leigh and others, 2014; Liu and others, 2015), Decision Trees (DT) (Geldsetzer and Yackel, 2009; Lohse and others, 2019), Neural Networks and Convolutional Neural Networks (NN and CNN) (Kwok and others, 1991; Hara and others, 1995; Karvonen, 2004; Zakhvatkina and others, 2013; Ressel and others, 2015) and Random Forests (RF) (Han and others, 2016). Advantages and disadvantages of different radar frequencies (Dierking, 2010; Eriksson and others, 2010) have been investigated, as well as combinations of sensors (Hollands and Dierking, 2016) and the use of textural information (Barber and LeDrew, 1991; Clausi, 2001), different polarizations and polarimetric features (Moen and others, 2015). Generic, automated analysis of ice types in SAR images, however, remains difficult. The main challenges include the general ambiguity of radar backscatter from different sea-ice types, varying wind states and thus changing surface roughness of open water, sensor-dependent noise in the data and, in particular, the Incidence Angle (IA) dependency of the backscattered signal. While we consider and discuss all of these factors within this study, our major focus will be the issue of IA dependency.

It is known that the IA of a radar signal onto a surface influences the intensity of the signal backscattered from that surface (Onstott and Carsey, 1992). In a typical SAR image, this effect is visible as a global trend of image brightness in range direction, with generally higher backscatter values in near-range (low IA) and lower backscatter values in far-range (high IA). The IA effect is usually treated as a single image property and accounted for globally during pre-processing of the data before automatic classification. A range-dependent correction can be applied to normalize the backscatter across the image to a reference IA (Zakhvatkina and others, 2013, 2017; Karvonen, 2014, 2017; Liu and others, 2015) or to convert the normalized radar cross section from σ⁰ values to γ⁰ values, with γ⁰ = σ⁰/cos(θ).
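
A minimal sketch of these two global corrections is given below; the reference IA of 35° and the slope of −0.2 dB/° are placeholder values for illustration only.

```python
import numpy as np

def normalize_to_reference_ia(sigma0_db, ia_deg, slope_db_per_deg=-0.2, ia_ref_deg=35.0):
    """Global range-dependent correction: shift backscatter (dB) to a reference IA along one slope."""
    return sigma0_db - slope_db_per_deg * (ia_deg - ia_ref_deg)

def sigma0_to_gamma0_db(sigma0_db, ia_deg):
    """Convert sigma0 to gamma0 in the dB domain: gamma0 = sigma0 / cos(theta)."""
    return sigma0_db - 10.0 * np.log10(np.cos(np.deg2rad(ia_deg)))
```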

Although such a pre-processing correction improves the result of automatic classification, a global correction of the entire image neglects the fact that different surface types show varying rates of decrease in backscatter with IA. The more the rates differ for distinct classes, the more the classification is affected. The surface-type dependent rates for various sea-ice types and different radar frequencies have been investigated in several studies, such as Mäkynen and others (2002), Mäkynen and Karvonen (2017) or Mahmud and others (2018). Over the typical range of most spaceborne SAR sensors, i.e. roughly between 20° and 45°, backscatter intensity in decibel (dB) decreases approximately linearly with IA (Fig. 1).

Fig. 1. Right panel: Linear dependency of HH backscatter intensity in dB with IA for two different surface types: OW and MYI. The two classes show considerable differences in the decrease of the backscatter intensity at HH-polarization as a function of IA. Left panel: distribution of backscatter intensity for both classes over the full IA range. Both distributions are highly affected and broadened by the IA effect.

The slopes of the linear functions, however, show large variations (Mäkynen and Karvonen, 2017). Reasons for this are different definitions of ice classes (that usually comprise different ‘pure’ ice types), regional differences of ice and snow conditions (Gill and others, 2015), seasonal variations of meteorological conditions (in particular effects of melt-freeze cycles) and also differences in methodological approaches.

A straightforward surface-type dependent IA correction during pre-processing is not possible. As the surface type at a given position in the image is not known before classification, it is not possible to decide a priori which rate of decrease to apply. In this study, we demonstrate how to incorporate surface-type dependent IA effect into a supervised, probabilistic classification algorithm. We use overlapping SAR and optical images acquired under freezing conditions over several years to identify different ice types and select training and validation sets with the help of expert sea-ice analysts from the Norwegian Ice Service. We then assess the benefits and drawbacks of our developed method in comparison with other classification algorithms applied to the same dataset.

The remainder of this paper is structured as follows: in the following section we describe the dataset and the applied pre-processing steps. We provide definitions of the different ice classes and explain how we selected suitable training and validation sets for the algorithm, based on overlapping optical and SAR images from different locations distributed all over the Arctic. Next, we give a detailed explanation of the algorithm framework, followed by an outline of the study design. After presenting the results, we evaluate and discuss our findings and then summarize our main conclusions in the final section.

Data

Sentinel-1 SAR data

For the SAR imagery, we use Sentinel-1 (S1) Extra Wide swath (EW) data. S1 operates at C-band (5.4 GHz) in either single or dual-polarization mode. All S1 imagery is freely available and can be obtained, for example, from the Copernicus Open Access Hub. The EW data are typically provided in Single-Look Complex (SLC) or Ground Range Detected (GRD) format. The Level-1 GRD product is multi-looked and projected to ground range using an Earth ellipsoid model (Aulard-Macler, 2011). Its resulting spatial resolution depends on the number of looks. The Medium-resolution product (GRDM) is provided at a pixel spacing of 40 m × 40 m with an actual resolution of ~93 m × 87 m and an estimated number of looks of 10.7.

In this study, we use only GRDM data at dual-polarization (HH and HV). To obtain a broad, Arctic-wide definition of sea-ice classes, we use data from different years (2015–19) and from many different locations all over the Arctic (Fig. 2). However, as surface melt affects the radar backscatter signature of sea ice, we restrict our current demonstration study of the algorithm principle to data acquired under freezing conditions in winter and early spring months.

Fig. 2. Locations of all S1 images used for manual identification of ice types and selection and verification of training and validation data. All images are acquired in winter or early springtime between 2015 and 2019.

SAR data processing

During processing, we apply ESA's thermal noise correction implemented in the Sentinel Application Platform (SNAP) and calibrate the data to obtain the normalized radar cross section σ⁰. To further reduce speckle, we perform additional multi-looking with a sliding window. After testing different window sizes of 3 × 3, 5 × 5, 7 × 7 and 9 × 9 pixels, we chose 3 × 3 as the default window size for our processing. This proved to reduce speckle sufficiently, while at the same time keeping the spatial resolution at better than 200 m in each direction. Finally, we convert the backscatter intensities into dB. All S1 data presented in this study have been processed accordingly.
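
A minimal sketch of the boxcar multi-looking and dB conversion described above is given below; thermal noise removal and calibration (performed in SNAP) are assumed to have been applied already, and the numerical floor is an illustrative safeguard.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multilook_and_to_db(sigma0_linear, window=3, floor=1e-10):
    """Boxcar multi-looking (sliding-window average) of calibrated intensity, then conversion to dB."""
    smoothed = uniform_filter(sigma0_linear, size=window)
    return 10.0 * np.log10(np.maximum(smoothed, floor))
```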

Overlapping SAR and optical data

For identification of different sea-ice types as well as the initial selection of training and validation regions we use overlapping SAR and optical data. We utilize the ‘sentinelAPI’ from the Python ‘sentinelsat’ module to search for spatially overlapping S1 and optical data (Sentinel-2 (S2) and LandSat-8 (L8)) across the entire Arctic, with a difference in acquisition time of <3 h and a cloud cover of <30%. Different examples of overlapping SAR and optical scenes are shown in Figure 3.
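
A simplified sketch of such a query is given below; credentials, footprint, date range and product type are placeholders, and the pairing by acquisition time only illustrates the search criteria rather than reproducing our exact search.

```python
from datetime import timedelta
from sentinelsat import SentinelAPI

# Placeholder credentials and area of interest (WKT footprint).
api = SentinelAPI('user', 'password', 'https://scihub.copernicus.eu/dhus')
footprint = 'POLYGON((15 78, 25 78, 25 81, 15 81, 15 78))'

s1 = api.query(footprint, date=('20190301', '20190305'),
               platformname='Sentinel-1', producttype='EW_GRDM_1SDH')
s2 = api.query(footprint, date=('20190301', '20190305'),
               platformname='Sentinel-2', cloudcoverpercentage=(0, 30))

# Keep only pairs whose acquisition times differ by less than three hours.
pairs = [(id1, id2) for id1, p1 in s1.items() for id2, p2 in s2.items()
         if abs(p1['beginposition'] - p2['beginposition']) < timedelta(hours=3)]
```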

Fig. 3. Examples of overlapping SAR (right, R = HV, G = HH and B = HH) and optical (left, RGB channels) data for the selection of training data. Example ROIs for different surface types, selected with the assistance of experienced ice analysts from the Norwegian Ice Service, are indicated with different colours (OW (blue), Brash/Pancake Ice (gray-blue), YI (purple), LFYI (yellow), DFYI (green), MYI (red)).

With the assistance of expert sea-ice analysts from the Norwegian Ice Service we have manually analyzed 80 such pairs of overlapping optical and SAR images to identify different sea-ice types and select Regions Of Interest (ROIs) with training and validation data in the S1 data. All identified ice types are listed in Table 1. The identification of many of these ice types is only possible from the combination of SAR and optical imagery, with the addition of expert experience. In particular, Level First-Year Ice (LFYI), Deformed First-Year Ice (DFYI) and Multi-Year Ice (MYI) are difficult to distinguish in optical images, depending on spatial resolution, light conditions, snow coverage and ice concentration.

Table 1. List of ice types identified from overlapping SAR and optical images

However, based on differences in radar backscatter in combination with information on the location and the ice history (which the ice analysts usually take into account), reliable ROIs can be defined. Areas with example ROIs are indicated in Figure 3.

As we explain in detail in the next section, our developed algorithm relies on an accurate estimation of the linear rate of change of backscatter intensity (in logarithmic scale) with IA for each class. This estimation requires training data over a wide enough IA range, preferably over the entire swath width. Since the overlapping data with cloud-free conditions in the optical image often only cover a small range of the SAR image, we have investigated additional S1 images, searching for homogeneous areas with a particular ice type covering a wide distance in range. The second column in Table 1 indicates for which ice types we could identify such homogeneous areas over a wider distance in range. In total, we used more than 100 S1 images for our definition of sea-ice types and the selection of training and validation regions. The spatial distribution of all these images is shown in Figure 2.

For the work presented in this study, we focus only on homogeneous ice classes. Pixels covering a mixture of different ice types or ice and water are not included in analyzing the IA sensitivity of different ice types. During classification, such mixed pixels will be assigned to the ice type that is dominant in the pixel or to another ice type with similar backscatter at the given IA. This, however, is not specific to our method but a general problem for all classification algorithms.

One particularly challenging class is Open Water (OW). Its overall backscatter values, as well as the slopes of backscatter intensity with IA, depend on radar parameters (frequency, polarization and look-direction) and environmental factors (wind speed and direction). In a first attempt, we have tried to train several OW classes for different wind conditions. However, in the presented case studies, we restrict ourselves to data acquired under similar wind conditions, which reveal a constant and steep slope.

Method

The major novelty of the presented work is the development of an algorithm framework that can directly incorporate per-class variation of backscatter intensity with IA into supervised classification. This method section, therefore, starts with the detailed explanation of the design of the algorithm, followed by a description of the study design to test and validate the method and compare it to approaches used in the past.

Algorithm design

The per-class incorporation of the IA effect into supervised classification can be implemented most straightforwardly in a Bayesian classifier. We thus give a short review of Bayesian classification before explaining our modifications to the traditional algorithm.

Bayesian classification

A Bayesian classifier is a well-known probabilistic classification method. Every sample x, i.e. image pixel in our case, is assigned to the most probable class ωi (Theodoridis and Koutroumbas, 2008). For Maximum Likelihood, the decision rule of a Bayesian classifier can be expressed as:

(1) $$\underline{x} \rightarrow \omega_i \quad \text{if} \quad p(\underline{x}\,\vert\,\omega_i) > p(\underline{x}\,\vert\,\omega_k) \quad \forall\, k \neq i.$$

Here, p(x|ωi) is the multi-variate class-conditional Probability Density Function (PDF) of class ωi. The PDFs must be known in order to assign an individual sample to a particular class and are usually estimated from training data with known class labels. For an unknown shape, the PDF can be approximated by kernel density estimation (Parzen, 1962). If we know the general shape of the PDF, we can directly estimate the shape parameters from the training data. Since we work with backscatter intensities in the log-domain (i.e. in dB), we can use the common assumption of a multi-variate Gaussian distribution. The PDF for class ωi is then fully described by its mean vector μi and covariance matrix Σi:

(2) $$p_i(\underline{x}\,\vert\,\omega_i) = \frac{1}{(2\pi)^{d/2}\,\vert\Sigma_i\vert^{1/2}}\, e^{-\frac{1}{2}(\underline{x}-\underline{\mu}_i)^{T}\,\Sigma_i^{-1}\,(\underline{x}-\underline{\mu}_i)}$$

Once the mean vector and covariance matrix have been determined for each class, Eqns (1) and (2) can be combined to classify any given test sample x. We later use this traditional Bayesian classifier, with a Gaussian PDF with constant mean value and an image-wide IA correction, as one of the comparison methods for our proposed algorithm.
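
A minimal sketch of this maximum-likelihood classifier (Eqns (1) and (2)) is given below; class structure and array shapes are illustrative and do not reproduce our exact implementation.

```python
import numpy as np

class GaussianMLClassifier:
    """Bayesian maximum-likelihood classifier with a constant mean vector per class."""

    def fit(self, X, y):
        # X: (n_samples, n_features) backscatter features in dB, y: (n_samples,) class labels.
        self.classes_ = np.unique(y)
        self.means_ = [X[y == c].mean(axis=0) for c in self.classes_]
        self.covs_ = [np.atleast_2d(np.cov(X[y == c], rowvar=False)) for c in self.classes_]
        return self

    def predict(self, X):
        # Evaluate the log of the Gaussian PDF (Eqn 2) for every class and pick the maximum (Eqn 1).
        log_p = []
        for mu, cov in zip(self.means_, self.covs_):
            diff = X - mu
            inv = np.linalg.inv(cov)
            maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
            log_p.append(-0.5 * (maha + np.log(np.linalg.det(cov))))
        return self.classes_[np.argmax(np.array(log_p), axis=0)]
```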

The Gaussian IA classifier

In the case of SAR data, both the mean vector and the covariance matrix of an individual class can be affected by the distribution of the available training data over IA for that particular class. The spatial distribution of the training data in the range direction is translated, through the IA relation (Fig. 1), into a spreading of the intensity distributions. If most training data are available at near-range, the mean backscatter values will be higher compared to a case where most training data are available at far-range. If training data are available over a large range of IAs, the mean value will be somewhere in between, but the variance will be high due to the spread over the IA range. Although a global IA correction during pre-processing reduces these effects, incorrect slopes will still result in increased variances and insufficiently corrected means. Figure 4 shows an example of training data for two classes with clearly distinct IA dependency and the resulting histograms for global corrections along different slopes. Note that the distribution of an individual class becomes almost Gaussian when the class is corrected along the correct slope.

Fig. 4. HH intensity in dB of training data for OW and MYI, with per-class and average linear slopes indicated by dashed lines (top panel). Per-class histograms of HH intensity are shown after global correction to 35° along each of the three individual slopes (bottom panel).

Instead of using a pre-processing correction, we suggest treating the IA sensitivity as an ice-class property. For the Bayesian approach, this means making use of the approximately linear relationship between the mean vector and IA, and replacing the constant mean vector μi with a linearly variable mean vector μi(Θ):

(3) $$\underline{\mu}_i(\Theta) = \underline{a}_i + \underline{b}_i \cdot \Theta$$

The class-conditional PDF can then be written as a PDF with linearly variable mean:

(4) $$p_i(\underline{x}\,\vert\,\omega_i) = \frac{1}{(2\pi)^{d/2}\,\vert\Sigma_i\vert^{1/2}}\, e^{-\frac{1}{2}\left(\underline{x}-(\underline{a}_i+\underline{b}_i\Theta)\right)^{T}\,\Sigma_i^{-1}\,\left(\underline{x}-(\underline{a}_i+\underline{b}_i\Theta)\right)}$$

Instead of the mean vector μi, the algorithm retrieves the intercept ai and slope bi (Eqn 3) for each class during the learning phase. The covariance is now calculated as the mean squared deviation relative to a mean value that depends on the IA. Its magnitude is lower, since the spread that the IA sensitivity introduces under the assumption of a constant mean is reduced. Note that a residual sensitivity to IA may remain if nonlinear terms in the variation of the mean cannot be neglected. Finally, Eqns (1) and (4) can again be combined to classify any given test sample x. In the following, we refer to this classification method as the Gaussian IA (GIA) classifier.
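
A minimal sketch of the GIA classifier core (Eqns (3) and (4)) is given below, with a per-class linear regression of the mean against IA; all names are placeholders and the sketch is simplified relative to our actual implementation.

```python
import numpy as np

class GIAClassifier:
    """Gaussian IA classifier sketch: per-class linearly variable mean mu_i(theta) = a_i + b_i * theta."""

    def fit(self, X, theta, y):
        # X: (n, d) backscatter features in dB, theta: (n,) incidence angles in degrees, y: (n,) labels.
        self.classes_ = np.unique(y)
        self.intercepts_, self.slopes_, self.covs_ = [], [], []
        for c in self.classes_:
            Xc, tc = X[y == c], theta[y == c]
            # Per-dimension linear fit of intensity against IA: coefs[k] = (slope_k, intercept_k).
            coefs = np.array([np.polyfit(tc, Xc[:, k], 1) for k in range(X.shape[1])])
            b, a = coefs[:, 0], coefs[:, 1]
            # Covariance as the mean squared deviation around the IA-dependent mean (Eqn 3).
            resid = Xc - (a + np.outer(tc, b))
            self.slopes_.append(b)
            self.intercepts_.append(a)
            self.covs_.append(resid.T @ resid / len(resid))
        return self

    def predict(self, X, theta):
        # Log of Eqn (4) for every class; assign each sample to the class with the highest value.
        scores = []
        for a, b, cov in zip(self.intercepts_, self.slopes_, self.covs_):
            diff = X - (a + np.outer(theta, b))
            inv = np.linalg.inv(cov)
            maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
            scores.append(-0.5 * (maha + np.log(np.linalg.det(cov))))
        return self.classes_[np.argmax(np.array(scores), axis=0)]
```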

Study design

We have implemented the GIA classifier according to the equations given above and investigate its performance on a variety of case studies for sea-ice type classification. For comparison, we apply other well-established classification methods that use a traditional, global IA correction with a constant slope for all classes. In the following section, we summarize the implementation details of the GIA classifier and give an overview of the various case studies, validation procedure and the selected comparison methods.

Implementation

We have coded the implementation of the GIA classifier in Python in the same style as other supervised classifiers in the scikit-learn module (Pedregosa and others, 2011). During training, a classifier object clf is created, then the classification parameters (for the GIA classifier, the class-dependent slope, intercept and covariance matrix) are estimated by calling the object method clf.fit(). Afterwards, new samples can be classified by calling the object method clf.predict(). This implementation makes it very easy to switch between different classification methods and to compare the GIA classifier to other approaches by simply changing how the classifier object is created.
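
In terms of the sketch above, a hypothetical usage example reads as follows; variable names are placeholders and the exact signatures of our implementation may differ.

```python
# Hypothetical usage, mirroring the scikit-learn style fit/predict interface described above.
clf = GIAClassifier()
clf.fit(X_train, theta_train, y_train)      # estimates per-class slope, intercept and covariance
labels = clf.predict(X_test, theta_test)    # assigns each pixel to its most probable class
```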

During training in default mode, the GIA classifier retrieves the class-dependent slopes for each dimension from the available training data. However, we have added the possibility to manually prescribe slopes when initializing the classifier. This allows the user to set reasonable slopes if the training data for an individual class is not sufficient for accurate slope estimation, or to test slopes for certain ice types that have been reported in previous studies.

Case studies

To test and demonstrate the principle functionality of the GIA classifier, we start our analysis by investigating various two-class case studies. For this purpose, we select two classes out of the entire multi-class training and validation set. These simple two-class tests are most suitable to demonstrate and visualize the underlying concept of our developed method and to point out particular benefits and drawbacks of the method under well-defined conditions.

Since the HH polarization reveals a stronger IA sensitivity than the HV polarization, we first focus on testing our method using the HH channel only, which we denote the ‘1D case’. HV polarization is less sensitive to IA but provides complementary information to improve Classification Accuracy (CA). In the next step, we therefore extend the classifier to include the HV channel as well. The resulting classifier uses both HH and HV intensity and we refer to it as the ‘2D case’.

Different two-class case studies with both 1-D and 2-D instances of the GIA classifier offer the easiest way to gain insight into the newly introduced method for class-dependent IA correction during classification, and we therefore regard them as instructive for demonstrating the novel concept. However, ice type classification is typically a multi-class problem. Following the two-class case studies, we thus extend the classifier to multiple classes and finally demonstrate its potential for automated ice chart production.

Validation

We calculate CA from a broad and independent validation set, which we obtain by randomly splitting the data points from all selected ROIs into training and validation sets. This means in practice that the validation is not carried out for individual images, but for various ice conditions found in the collection of different images with our ROIs. While the training sets are used for fitting the coefficients of the classifiers to the data, the validation sets are used to independently evaluate the classifiers’ performance based on CA. Varying the randomized train-test split of the ROIs allows us to estimate standard deviations for CA based on the selected dataset.
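
A compact sketch of this validation scheme is given below; the split ratio, the number of repetitions and the placeholder arrays are illustrative choices.

```python
import numpy as np
from sklearn.model_selection import train_test_split

def mean_per_class_ca(y_true, y_pred, classes):
    """Average of the per-class classification accuracies (CA)."""
    return np.mean([np.mean(y_pred[y_true == c] == c) for c in classes])

# Repeat the randomized ROI split to estimate a standard deviation for CA.
scores = []
for seed in range(10):
    X_tr, X_te, t_tr, t_te, y_tr, y_te = train_test_split(
        X, theta, y, test_size=0.5, random_state=seed)
    clf = GIAClassifier().fit(X_tr, t_tr, y_tr)
    scores.append(mean_per_class_ca(y_te, clf.predict(X_te, t_te), clf.classes_))
print(f'mean CA: {np.mean(scores):.3f}, std: {np.std(scores):.3f}')
```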

Comparison classifiers

In order to assess improvements in CA, we compare the GIA classifier to other commonly used classification methods. In particular, we test an RF classifier, an SVM and a traditional Bayesian Classifier with Gaussian PDF (denoted BCG). All of these comparison algorithms have previously been shown to be useful for classification of sea-ice types. For all the comparison methods, we apply a global IA correction with a constant slope during pre-processing. For the global correction, we always choose the average slope of the involved classes. Furthermore, we optimize the comparison classifiers in terms of CA by performing a grid search over the number of trees and the maximum depth of the RF and by testing different kernels for the SVM.
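
Such a grid search can be set up with scikit-learn as sketched below; the parameter grids and the name of the globally IA-corrected training array are illustrative assumptions, not the values used in this study.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Illustrative parameter grids for the comparison classifiers.
rf_search = GridSearchCV(RandomForestClassifier(),
                         {'n_estimators': [50, 100, 200], 'max_depth': [5, 10, None]}, cv=3)
svm_search = GridSearchCV(SVC(), {'kernel': ['linear', 'rbf', 'poly'], 'C': [1, 10]}, cv=3)

# The comparison classifiers are trained on globally IA-corrected intensities.
rf_search.fit(X_train_corrected, y_train)
svm_search.fit(X_train_corrected, y_train)
print(rf_search.best_params_, svm_search.best_params_)
```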

Results

Two-class case studies

We present the results of the algorithm performance on three different case studies with simplified two-class problems. The individual classes in these case studies are the classes with training data from large homogeneous areas (Table 1). Note that training and validation data are collected from numerous images and not from a single individual scene. The individual two-class problems presented here are:

  (1) OW vs LFYI

  (2) OW vs MYI

  (3) LFYI vs MYI

The results for a 1-D GIA classifier applied to case study (2) using only HH intensity are illustrated in Figure 5. The classes OW and MYI have clearly distinct slopes of −0.72 dB/1° and −0.23 dB/1°, respectively. The GIA classifier can successfully handle the case for most IA ranges. Some misclassification occurs in the IA range where the backscatter coefficients of MYI and OW overlap (−7 dB to −13 dB). The slices of the histograms and class-conditional PDFs at different IAs (Fig. 5, bottom panel) illustrate the locally narrow Gaussian distributions and can furthermore indicate whether or not the data are well separable for a particular IA range. Figure 6 shows the improved results when extending the classifier to two dimensions, including HV intensity. Note that the HV intensity also shows a considerable slope for both classes, with −0.33 dB/1° for OW and −0.23 dB/1° for MYI, respectively.

Fig. 5. 1D Example of the two-class case study (2), OW vs MYI. The top panel shows HH intensity over IA, with true class labels (training) indicated on the left and predicted class labels (validation) indicated on the right, respectively. The bottom panel shows histograms and slices through the class-conditional PDFs with variable mean at three different IA locations. The IA locations are indicated with grey dashed lines in the training data.

Fig. 6. 2D Example of the two-class case study (2), OW vs MYI. HH intensity (top) and HV intensity (bottom) are shown over IA, with true class labels (training) indicated on the left and predicted class labels (validation) indicated on the right, respectively.

The corresponding per-class CA and mean per-class CA are given in Table 2. We compare 1-D (using only HH intensity) and 2-D classifiers (using intensities at HH and HV) and give results for all three case studies and all tested classifiers. The CA standard deviation estimated by bootstrapping is in the range of 0.1–0.25% for all individual CA values. For the cases including the OW class, the GIA classifier achieves the best CA, while the results are very similar for the LFYI-vs-MYI case. Note that the slopes for these two ice types are quite similar, with −0.27 dB/1° at HH and −0.26 dB/1° at HV for LFYI, and −0.23 dB/1° at both HH and HV for MYI, respectively. We observe a general improvement in CA when adding the HV intensity for all classifiers and all cases. In the 1-D examples, the classifiers using a global IA correction can favour one individual class in some cases. For case study (2), this results in a high score for one particular class (the CA for MYI) and a low score for another class (the CA for OW). Overall, the GIA classifier clearly performs best, with the highest average per-class CA.

Table 2. Classification Accuracy (CA) for different classifiers tested on three individual two-class problems

All classifiers are tested in 1D (HH intensity only) and 2D (both HH and HV intensity). Global IA correction with a constant slope has been applied before BCG, RF and SVM classification to compare with per-class IA correction within the GIA classifier. The standard deviation for all values given above is between 0.1 and 0.25%.

Average processing time for training and prediction is also presented in Table 2. Since absolute timing always depends on hardware and software components as well as the implementation of the algorithm itself, we present processing times relative to each other. All times are given normalized to the processing time of the GIA classifier. Training of the traditional BCG is more than twice as fast as training of the GIA classifier, and prediction of new sample labels is slightly faster. Both RF and SVM take one to two orders of magnitude longer for both training and prediction compared to the GIA classifier.

Three-class case studies

Based on the results for the two-class case studies presented so far, we extend our analysis to a three-class problem. Following the three different two-class case studies, the three-class example is enumerated as case study (4):

  (4) OW vs LFYI vs MYI

The results for all tested classifiers are presented in Table 3. Again, the GIA classifier achieves the highest overall CA. Individual classes can occasionally score higher with other classifiers (e.g. the CA for MYI with the BCG, Table 3), but again this comes at the expense of lower scores for the other classes. Compared to the different two-class case studies, the average per-class CA of the 2D GIA classifier for the three-class case is ~3% lower than the CA of case studies (2) and (3), and almost equal to the CA of case study (1).

Table 3. Classification Accuracy (CA) for different classifiers tested on a three-class problem OW-vs-LFYI-vs-MYI (case study (4))

All classifiers are tested in 1D (HH intensity only) and 2D (both HH and HV intensity). Global IA correction with a constant slope has been applied before BCG, RF and SVM classification to compare with per-class IA correction within the GIA classifier. The standard deviation for all individual values given above is between 0.1 and 0.25%.

Ice charts

Results for mosaic ice type maps from two randomly selected dates are presented in Figure 7. Both examples show the underlying S1 data on the left side and the classification result on the right side. The example in the top panel covers nearly the entire Arctic and is based on 72 S1 images acquired on 3 and 4 March 2019. The example in the bottom panel covers a smaller region north of Svalbard and is based on three overlapping S1 images acquired on 5 April 2018. We regard the presentation of a combination of three images as useful, as it allows for inspection of classification results across image boundaries, which are potentially strongly influenced by the IA effect. Figure 7 shows that the classification results connect consistently across image boundaries.

Fig. 7. Examples of mosaic classification results for the entire Arctic (top panel, based on 72 S1 images acquired on 3 and 4 March 2019) and a smaller region north of Svalbard (bottom panel, based on 3 S1 images acquired on 5 April 2018). S1 data are shown on the left (R = HV, G = HH and B = HH) and classification results on the right. The classified regions seamlessly overlap at image boundaries, indicating a successful per-class correction of IA effect.

Discussion

Algorithm behaviour

Overall, the proposed inclusion of the per-class IA sensitivity directly in the classification process performs very well for the test cases and improves the CA significantly. The underlying concept and the effect of the method are best visualized in a 1-D case for two classes (Fig. 5). The classes in the example shown for case study (2), OW and MYI, have significantly different slopes and the GIA classifier clearly improves the classification result. Furthermore, histograms and slices of the PDF with variable mean at different base values IA0 give an indication of how well separable the data are in a particular IA range. In the presented case, OW and MYI are well separable at HH-polarization in near range. As we move towards far range, the overlap between the PDFs increases, resulting in a decreased separability. At an IA of ~35°, the linear functions describing the variable mean value for each class intersect. The local mean values of both classes are thus identical at this range and the distributions have maximum overlap; hence the two classes have worst separability (Fig. 5, bottom panel). Since the OW class has a smaller covariance than the MYI class, the maximum probability from the OW PDF is larger than the maximum probability from the MYI PDF. Hence, when local mean values are identical, pixels with a backscatter value close to this local mean will be classified as OW, and pixels with a backscatter value far from the local mean will be classified as MYI, resulting in a narrow band of pixels classified as OW overlying the broader band of pixels classified as MYI (Fig. 5, top panel, right). This ambiguity can only be solved by adding additional information (e.g. HV intensity or texture parameters) and thus extending the classifier to more dimensions (see Table 2, 1-D and 2-D classifier results). The extension of the GIA classifier to multiple dimensions is straightforward, given that Eqn (4) describes a multi-variate Gaussian with variable mean vector. For two-class case study (2), Figure 6 illustrates the clear improvement that is achieved by adding the HV component to the GIA classifier.

Algorithm performance

The comparison of the GIA classifier to BCG, SVM and RF classifiers is summarized in Tables 2 and 3. Note again that the selected classifiers for comparison are applied after a global IA correction was carried out in the pre-processing. When the slopes of the individual classes differ (2-class case studies (1) and (2)), the GIA classifier achieves the highest CA. For classes with very similar slopes (2-class case study (3)), the methods perform almost equally well. This is expected since in such a case the per-class IA correction inherent in the GIA classifier is almost identical to the global IA correction during pre-processing. When we extend the case studies to three classes, however, we note that the presence of one surface type with a significantly different slope, i.e. OW, affects the overall classification result (Table 3). Again, the GIA classifier performs better than the comparison methods. The improvement is particularly strong in the 1D case (~8%) and still almost 2% in the 2-D case. This is due to the variation in slope values of the individual classes. The largest difference in slope occurs between OW and MYI, where it is 0.49 dB/1° in the HH channel. In the HV channel, however, it is only 0.1 dB/1°. Hence, a global correction will achieve better results for HV than for HH.

Furthermore, it is interesting to compare the GIA classifier's average per-class CA of the three-class case against the different two-class cases. For the 1-D examples, the two-class case studies (1) and (3) achieve the highest scores, with 94.08% and 97.56%, respectively. Case study (2) scores considerably lower at 80.93%, while the three-class example score is at 83.73% (Tables 2 and 3). Generally, a lower score for the three-class problem can be expected, since the classification error of a Bayesian classifier corresponds to the overlap of the class-specific PDFs, and adding more PDFs may lead to more overlap. Two-class case study (2) however scores low in 1-D due to large overlap of the distributions, so that the average per-class score is increased when adding another, better separable class.

Processing time

For the presented case studies and the given implementations of the different classifiers, both Bayesian classifiers (BCG and GIA) are clearly superior to RF and SVM in processing time for both training and prediction. Considering a possible future application in operational ice charting, the timing for prediction of labels for new samples is particularly critical. Here, the BCG classifier is slightly faster than the GIA classifier, as it does not require the linear operation to estimate the local mean value from the class-dependent slope. However, both BCG and GIA classifiers are considerably faster than the RF or the SVM. As the computation time for prediction of class labels increases approximately linearly with the number of trained classes for a Bayesian classifier, we expect this advantage to decrease slightly with the extension to more classes. Nevertheless, the GIA classifier is certainly suitable for fast processing of images and near-real-time classification, and in our investigated case studies it is clearly superior to SVM and RF in terms of computation time.

Current limitations and future work

We have so far shown how to incorporate the per-class correction of IA effect into a classifier and demonstrated that this direct incorporation can generally improve CA while operating at an operationally acceptable speed. Besides the classification of wide-swath images such as S1 EW, the GIA classification framework could also be applied to transfer training data across images acquired in quad-pol mode at different IA. In the following part of this section, we discuss the current limitations of the method and possible future development and improvement of the algorithm.

One of these limitations is the present definition and selection of the OW class. For the shown case studies, we have restricted ourselves to OW conditions with a steep and constant slope. However, in reality, OW is more challenging. For a given radar frequency and polarization, the OW slope and brightness depend on wind speed as well as the angle between radar look and wind direction. While one simple OW class may be useful in particular cases and for demonstrating the algorithm principle, it will not be sufficient for operational ice charting. Possible ways to deal with this issue may be the inclusion of texture features or the definition of several OW classes for varying environmental and acquisition conditions. However, both alternatives require thorough investigation and are beyond the scope of the present demonstration study.

Furthermore, in the presented case studies we have focused only on the ice types that we could observe over large homogeneous areas covering a wide distance in range (Table 1). Hence, the training and validation data are well defined for these cases, offering optimal conditions for the demonstration of how to incorporate the class-dependent IA effect directly into the classification process, and for the evaluation of the method in comparison to global IA correction during pre-processing. The slope values that we obtain represent a generalized, Arctic-wide class property for the particular classes as we have defined them based on the available input data. Comparison to literature values reveals slight differences between our values and the ones from previous studies, which are most likely caused by intra-class variability or slightly different class definitions. While the slopes given in this study are optimal for use over the entire Arctic (and may only have to be slightly adjusted when the input dataset is extended), a new estimation of slopes and thus re-training of the classifier may be valuable for more locally constrained studies.

In future work, we will focus on the extension of the algorithm to all identified classes and different seasons, as well as the improvement and validation of automated production of ice charts. Two examples of ice type maps and the underlying S1 data are shown in Figure 7. Rigorous validation of several such examples in terms of absolute CA is currently being carried out in collaboration with the Norwegian Ice Service. The detailed evaluation of the results is, however, beyond the scope of the present demonstration study. Visual inspection of the examples presented in Figure 7 nevertheless reveals additional capabilities and limitations of the current algorithm: the overall pan-Arctic distribution of ice types (Fig. 7, top panel) shows a reasonable pattern, with MYI dominating in the Canadian Arctic and starting to circulate in the Beaufort Gyre, and FYI (level and deformed) dominating in the Russian Arctic. The close-up ice chart located north of Svalbard (Fig. 7, bottom panel) shows the separation of MYI, deformed and level FYI and refrozen leads with young ice. The algorithm successfully finds distinct regions for each class. Note that the classification result is consistent across image boundaries. As the IA effect is clearly visible across image boundaries in the SAR imagery and no pre-processing IA correction is applied, this indicates that the per-class correction of the IA effect within the GIA classifier is successful, with classification results overlapping seamlessly. However, on visual inspection, we also find some misclassification in the ice charts, in particular in areas with low signal, where the noise in the HV channel influences results. Furthermore, we observe occasional confusion of young ice and MYI due to ambiguities in backscatter intensity. These ambiguities are not new issues, however, and have been described, for example, in Zakhvatkina and others (2017). They are not inherent to the algorithm framework presented in this study.

One common way to overcome such remaining ambiguities in the mapping of sea-ice types is the use of textural information. Texture features can, for example, be extracted from all channels in the data via the Gray-Level Co-Occurrence Matrix (GLCM). However, computation of the GLCM is time-consuming and will significantly increase the processing time of any operational workflow. Furthermore, texture features are calculated within a window around the individual pixel, leading to an effectively reduced resolution of the result. Nevertheless, including textural information in the GIA classifier may be necessary for successful separation of all identified ice types. Texture features are commonly assumed to be less sensitive to IA, although to the best of our knowledge there is no systematic study yet that provides a detailed investigation of the variation of texture features with IA. In any case, the extension of the GIA classifier to other features is straightforward, as long as their IA dependence can be described by a simple function. Even if there is no IA dependence, the GIA classifier can be applied as-is, simply estimating a slope of zero for the individual features.
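
For reference, GLCM features for a single image patch could be computed as sketched below; the quantization range, number of gray levels and selected properties are illustrative choices, and the function names follow scikit-image (version 0.19 or later).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch_db, levels=32, vmin=-35.0, vmax=0.0):
    """GLCM contrast and homogeneity for one patch of backscatter intensity in dB."""
    # Quantize the dB values into a fixed number of gray levels.
    quantized = np.clip((patch_db - vmin) / (vmax - vmin) * (levels - 1),
                        0, levels - 1).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return graycoprops(glcm, 'contrast').mean(), graycoprops(glcm, 'homogeneity').mean()
```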

Conclusion

We have introduced a supervised classification algorithm for sea-ice types that directly incorporates the class-dependent variation of backscatter intensity with IA. This is achieved by replacing the constant mean vector in a multi-variate Gaussian PDF of a Bayesian classifier with a linearly variable mean vector. The IA effect is thus no longer treated as a global image property and corrected during pre-processing, but as an ice type property. We have shown in several case studies that our proposed GIA classifier improves CA when the slopes for individual classes are significantly different. The simplicity and fast processing time of the GIA classifier allow for easy interpretation of results over the entire swath and enable processing of images in near-real-time, which is required for operational ice charting.

Although classification results are improved, some ambiguities and misclassified regions remain. In future work, we will focus on resolving these ambiguities by including further training and textural information into the GIA algorithm to further improve our automated mapping of sea-ice types.

Acknowledgements

This research was funded by CIRFA partners and the Research Council of Norway (RCN) (grant number 237906). The presented work contains primary and altered Sentinel data products (© Copernicus data). We thank the Norwegian Ice Service for their help in analyzing overlapping SAR and optical data to identify different ice types in the images. Furthermore, we also thank the anonymous reviewers for their constructive comments and feedback.

Data

The code for the GIA classifier framework, the selected training data and lists of image IDs can be obtained by contacting the first author.

References

Aulard-Macler, M (2011) Sentinel-1 Product Definition, S1-RS-MDA-52-7440. Technical report, MacDonald, Dettwiler and Associates Ltd.
Barber, DG and LeDrew, E (1991) SAR sea ice discrimination using texture statistics: a multivariate approach. Photogrammetric Engineering and Remote Sensing 57(4), 385–395.
Clausi, DA (2001) Comparison and fusion of co-occurrence, Gabor and MRF texture features for classification of SAR sea-ice imagery. Atmosphere-Ocean 39(3), 183–194. doi: 10.1080/07055900.2001.9649675.
Dierking, W (2010) Mapping of different sea ice regimes using images from Sentinel-1 and ALOS synthetic aperture radar. IEEE Transactions on Geoscience and Remote Sensing 48(3), 1045–1058. doi: 10.1109/TGRS.2009.2031806.
Dierking, W (2013) Sea ice monitoring by synthetic aperture radar. Oceanography 26(2), 100–111.
Eriksson, LE and 7 others (2010) Evaluation of new spaceborne SAR sensors for sea-ice monitoring in the Baltic Sea. Canadian Journal of Remote Sensing 36(sup1), S56–S73. doi: 10.5589/m10-020.
Geldsetzer, T and Yackel, JJ (2009) Sea ice type and open water discrimination using dual co-polarized C-band SAR. Canadian Journal of Remote Sensing 35(1), 73–84. doi: 10.5589/m08-075.
Gill, JPS, Yackel, J, Geldsetzer, T and Fuller, MC (2015) Sensitivity of C-band synthetic aperture radar polarimetric parameters to snow thickness over landfast smooth first-year sea ice. Remote Sensing of Environment 166, 34–49.
Han, H and 6 others (2016) Retrieval of melt ponds on Arctic multiyear sea ice in summer from TerraSAR-X dual-polarization data using machine learning approaches: a case study in the Chukchi Sea with mid-incidence angle data. Remote Sensing 8(1), 1–23. doi: 10.3390/rs8010057.
Hara, Y and 5 others (1995) Application of neural networks for sea ice classification in polarimetric SAR images. IEEE Transactions on Geoscience and Remote Sensing 33(3), 740–748. doi: 10.1109/36.387589.
Hollands, T and Dierking, W (2016) Dynamics of the Terra Nova Bay polynya: the potential of multi-sensor satellite observations. Remote Sensing of Environment 187, 30–48. doi: 10.1016/j.rse.2016.10.003.
Karvonen, JA (2004) Baltic Sea ice SAR segmentation and classification using modified pulse-coupled neural networks. IEEE Transactions on Geoscience and Remote Sensing 42(7), 1566–1574. doi: 10.1109/TGRS.2004.828179.
Karvonen, J (2014) A sea ice concentration estimation algorithm utilizing radiometer and SAR data. The Cryosphere 8(5), 1639–1650. doi: 10.5194/tc-8-1639-2014.
Karvonen, J (2017) Baltic Sea ice concentration estimation using Sentinel-1 SAR and AMSR2 microwave radiometer data. IEEE Transactions on Geoscience and Remote Sensing 55(5), 2871–2883. doi: 10.1109/TGRS.2017.2655567.
Kwok, R and 5 others (1991) Application of neural networks to sea ice classification using polarimetric SAR images. In Proceedings of IGARSS'91 Remote Sensing: Global Monitoring for Earth Management, I, 85–88. doi: 10.1109/IGARSS.1991.577672.
Leigh, S, Wang, Z and Clausi, DA (2014) Automated ice–water classification using dual polarization SAR satellite imagery. IEEE Transactions on Geoscience and Remote Sensing 52(9), 5529–5539. doi: 10.1109/TGRS.2013.2290231.
Liu, H, Guo, H and Zhang, L (2015) SVM-based sea ice classification using textural features and concentration from RADARSAT-2 dual-pol ScanSAR data. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 8(4), 1601–1613. doi: 10.1109/JSTARS.2014.2365215.
Lohse, J, Doulgeris, AP and Dierking, W (2019) An optimal decision-tree design strategy and its application to sea ice classification from SAR imagery. Remote Sensing 11(13), 1–15. doi: 10.3390/rs11131574.
Mahmud, MS and 5 others (2018) Incidence angle dependence of HH-polarized C- and L-band wintertime backscatter over Arctic sea ice. IEEE Transactions on Geoscience and Remote Sensing 56, 1–13. doi: 10.1109/TGRS.2018.2841343.
Mäkynen, M and Karvonen, J (2017) Incidence angle dependence of first-year sea ice backscattering coefficient in Sentinel-1 SAR imagery over the Kara Sea. IEEE Transactions on Geoscience and Remote Sensing 55(11), 6170–6181. doi: 10.1109/TGRS.2017.2721981.
Mäkynen, MP, Manninen, AT, Similä, MH, Karvonen, JA and Hallikainen, MT (2002) Incidence angle dependence of the statistical properties of C-band HH-polarization backscattering signatures of the Baltic Sea ice. IEEE Transactions on Geoscience and Remote Sensing 40(12), 2593–2605. doi: 10.1109/TGRS.2002.806991.
Moen, MAN and 6 others (2013) Comparison of feature based segmentation of full polarimetric SAR satellite sea ice images with manually drawn ice charts. The Cryosphere 7(6), 1693–1705. doi: 10.5194/tc-7-1693-2013.
Moen, MAM, Anfinsen, SN, Doulgeris, AP, Renner, AHH and Gerland, S (2015) Assessing polarimetric SAR sea-ice classifications using consecutive day images. Annals of Glaciology 56(69), 285–294. doi: 10.3189/2015AoG69A802.
Onstott, RG and Carsey, FD (1992) SAR and scatterometer signatures of sea ice. Microwave Remote Sensing of Sea Ice 68, 73–104.
Parzen, E (1962) On estimation of a probability density function and mode. The Annals of Mathematical Statistics 33(3), 1065–1076.
Pedregosa, F and 15 others (2011) Scikit-learn: machine learning in Python. Journal of Machine Learning Research 12, 2825–2830.
Ressel, R, Frost, A and Lehner, S (2015) A neural network-based classification for sea ice types on X-band SAR images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 8(7), 3672–3680. doi: 10.1109/JSTARS.2015.2436993.
Scheuchl, B, Caves, R, Cumming, I and Staples, G (2001) Automated sea ice classification using spaceborne polarimetric SAR data. In Proceedings of the IEEE 2001 International Geoscience and Remote Sensing Symposium (IGARSS 2001), 7, 3117–3119. doi: 10.1109/IGARSS.2001.978275.
Scheuchl, B, Flett, D, Caves, R and Cumming, I (2004) Potential of RADARSAT-2 data for operational sea ice monitoring. Canadian Journal of Remote Sensing 30(3), 448–461. doi: 10.5589/m04-011.
Theodoridis, S and Koutroumbas, K (2008) Pattern Recognition, 4th Edn. Orlando, FL, USA: Academic Press.
Zakhvatkina, NY, Alexandrov, VY, Johannessen, OM, Sandven, S and Frolov, IY (2013) Classification of sea ice types in ENVISAT synthetic aperture radar images. IEEE Transactions on Geoscience and Remote Sensing 51(5), 2587–2600. doi: 10.1109/TGRS.2012.2212445.
Zakhvatkina, N, Korosov, A, Muckenhuber, S, Sandven, S and Babiker, M (2017) Operational algorithm for ice–water classification on dual-polarized RADARSAT-2 images. The Cryosphere 11(1), 33–46. doi: 10.5194/tc-11-33-2017.
Zakhvatkina, N, Smirnov, V and Bychkova, I (2019) Satellite SAR data-based sea ice classification: an overview. Geosciences 9(4), 1–15. doi: 10.3390/geosciences9040152.