Article

Landsat Images Classification Algorithm (LICA) to Automatically Extract Land Cover Information in Google Earth Engine Environment

by Alessandra Capolupo *, Cristina Monterisi and Eufemia Tarantino
Department of Civil, Environmental, Land, Construction and Chemistry (DICATECh), Politecnico di Bari, via Orabona 4, Bari 70125, Italy
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(7), 1201; https://doi.org/10.3390/rs12071201
Submission received: 20 February 2020 / Revised: 6 April 2020 / Accepted: 6 April 2020 / Published: 8 April 2020
(This article belongs to the Special Issue Multitemporal Land Cover and Land Use Mapping)

Abstract

Remote sensing has been recognized as the main technique to extract land cover/land use (LC/LU) data, which are required to address many environmental issues. Therefore, over the years, many approaches have been introduced and explored to optimize the resultant classification maps. Particularly, index-based methods have highlighted their efficiency and effectiveness in detecting LC/LU in a multitemporal and multisensor analysis perspective. Nevertheless, each developed index is suitable for extracting a specific class but not for completely classifying the whole area. In this study, a new Landsat Images Classification Algorithm (LICA) is proposed to automatically detect land cover (LC) information using satellite open data provided by different Landsat missions in order to perform a multitemporal and multisensor analysis. All the steps of the proposed method were implemented within Google Earth Engine (GEE) to automatize the procedure, manage geospatial big data, and quickly extract land cover information. The algorithm was tested on the experimental site of Siponto, a historic municipality located in the Apulia Region (Southern Italy), using 12 radiometrically and atmospherically corrected satellite images collected from the Landsat archive (four images, one for each season, were selected from Landsat 5, 7, and 8, respectively). Those images were initially used to assess the performance of 82 traditional spectral indices. Since their classification accuracy and the number of identified LC categories were not satisfactory, an analysis of the different spectral signatures existing in the study area was also performed, generating a new algorithm based on the sequential application of two new indices (SwirTirRed (STRed) index and SwiRed index). The former is based on the integration of the shortwave infrared (SWIR), thermal infrared (TIR), and red bands, whereas the latter features a combination of the SWIR and red bands. The performance of LICA was preferable to that of the conventional indices both in terms of accuracy and number of extracted classes (water, dense and sparse vegetation, mining areas, and built-up areas versus only water and dense and sparse vegetation). The GEE platform allowed us to go beyond desktop system limitations, reducing acquisition and processing times for geospatial big data.


1. Introduction

Accurate maps of land cover/land use (LC/LU) distribution are essential to gather information useful in many land management and environmental monitoring tasks. Therefore, over the last 15 years [1], several products have been generated, using different approaches, to meet the growing demand for such maps. Among these, remote sensing has been an invaluable source of LC/LU information [2,3]. However, most of the satellite-derived maps covering the whole world have a coarse resolution, not suitable to describe the true heterogeneity of the Earth's surface and of urban and agricultural landscapes. For instance, the Global Land Cover product (GLC2000), produced by the European Commission’s Joint Research Center (JRC) in 2000, the Globcover product realized by the European Space Agency, and the Moderate-resolution Imaging Spectroradiometer (MODIS) Collection 5 Land Cover database have resolutions of about 1 km at the equator (larger at higher latitudes) [4,5], 300 m [6,7], and 500 m [8], respectively. The 300-m global Climate Change Initiative Land Cover (CCI-LC) maps, covering the period from 1992 to 2015, were used to assess the quality of unchanged training sample pixels in five time periods. The Global Land Cover Characterization database (GLCC), produced by a joint effort of the U.S. Geological Survey (USGS), the University of Nebraska Lincoln (UNL), and the JRC [9], has a resolution of 1 km as well. Recent research activities have shown that, due to their coarse resolution, most of the listed datasets are not reliable over urban and agricultural areas, since they show substantial disagreement with each other and with national statistics [1,10,11]. A much smaller number of high-resolution LC/LU maps, based on the available Landsat data, were generated at large scale and at various timescales as well. Nevertheless, such maps were produced for forestry purposes and, consequently, they do not report LC/LU information [12]. Similarly, Landsat data were also applied to provide contemporary data on human population distributions in Africa, Asia, and the Americas (WorldPop project) [13]. Only three Landsat-based global land cover maps are currently available: Finer Resolution Observation and Monitoring of Global Land Cover (FROM-GLC) by [14], GlobeLand30 by [15], and the Normalized Urban Areas Composite Index (NUACI) derived maps by [16]. FROM-GLC and GlobeLand30 provide LC/LU information for the years 2000 and 2010. Conversely, [16] provided LC/LU maps for the period 1990–2010 at five-year intervals. The situation is different at the continental, national, and regional scale, where Landsat and Sentinel images have been widely used in many applications [17,18,19,20,21,22,23,24].
The Landsat archive has provided a limitless well of information since 1972, freely available and accessible and, consequently, suitable for describing Earth surface features. However, as underlined by [18], the use of Landsat data poses three main challenges:
(1)
Dealing with a low number of useful images: Only satellite images with minimal cloud cover are acceptable. Thus, the amount of adequate data depends on the weather conditions of the experimental site. Consequently, areas characterized by high rainfall, such as tropical and subtropical regions, offer a lower number of adequate images;
(2)
Identifying an efficient platform suitable for large image data processing; and
(3)
Developing adequate image classification methods with satisfactory performance.
The introduction of Google Earth Engine (GEE) (https://earthengine.google.org), a cloud processing platform designed and developed over the last years by Google, offered a large number of tools to face the first two issues [25]. As emphasized by [26], GEE integrates a data catalogue, continuously updated and composed of publicly available geospatial datasets, which may be consulted by users through the application programming interface (API). Therefore, operators can handle hundreds of data sources simultaneously and assess their quality and usefulness to meet their purposes. Moreover, nearly 6000 scenes belonging to active missions are integrated into the GEE catalogue daily. Like private data, such scenes can be processed by applying a set of complex and advanced algorithms implemented in the GEE environment, exploiting its excellent computational power. Unlike desktop software, it involves many processors in running custom algorithms, speeding the process up considerably and removing the problems linked to the storage, processing, and analysis of large volumes of geospatial data [25]. For example, [12] tracked forest cover change over a period of 12 years (2000–2012) at global scale by analyzing 654,178 Landsat 7 scenes (707 terabytes) on the GEE platform. The milestone was achieved in 100 h, while a standard desktop computer would have taken nearly 1,000,000 h to meet the same needs. In addition, GEE is more flexible than the software commonly applied to process geospatial data, such as Environment for Visualizing Images (ENVI) and Earth Resources Data Analysis System (ERDAS) Imagine, since users can implement their own custom codes. Although its potentialities are enormous, GEE is still in development and, consequently, many existing algorithms have not been programmed and integrated into the platform yet [25]. Moreover, its great versatility makes it well suited to deal with the third challenge posed by Landsat data and to implement the several classification approaches introduced over the years.
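To give a concrete flavor of this processing model, the minimal sketch below (Earth Engine JavaScript API) maps a custom per-scene function over an entire Landsat collection; the collection identifier, date range, and coordinates are illustrative placeholders and do not reproduce the authors' code.

```javascript
// Illustrative sketch only: applying a custom per-scene function to a whole
// Landsat 8 surface-reflectance collection in the Earth Engine Code Editor.
var roi = ee.Geometry.Point([15.9, 41.6]);   // approximate location of the study area

var collection = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
    .filterBounds(roi)
    .filterDate('2019-01-01', '2019-12-31');

// A custom algorithm is expressed as a function and mapped over the collection;
// GEE distributes the computation over its own infrastructure.
var withNdvi = collection.map(function (image) {
  var ndvi = image.normalizedDifference(['B5', 'B4']).rename('NDVI');
  return image.addBands(ndvi);
});

print('Scenes processed:', withNdvi.size());
```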
Basically, classification algorithms are grouped into two categories: Unsupervised and supervised approaches. The former aggregate the pixels of an image into classes by analyzing the similarity of attributes, without any analyst’s contribution [16]. Such methods are commonly applied when knowledge about the land cover is scarce. By contrast, the operator’s work is a key factor for the second group: some training areas are identified to train the algorithms and to assign each pixel of the image to a specific category [22]. Although the former approaches are fully automated, they are extremely time consuming, since operator input is required to improve the accuracy of the resulting classification map. However, supervised classification is not error-free either, and the analyst has to refine the outcomes.
Among the several methodologies developed to extract LC/LU information belonging to both groups, the index-based approach [27], maximum likelihood supervised classification (ML) [28], machine learning algorithms (MLAs) [29], and the object-based image analysis (OBIA) approach [30,31,32,33] are the most popular. Yet, each of them shows some strengths and weaknesses [34]. Although the index-based approach allows us to reduce the number of components and to classify a large area in a short time, several indices must be applied to detect the different LC/LU classes, since each of them is aimed at distinguishing just one category [35]; for instance, vegetation indices are intended to identify "green areas", and so on. ML is recognized as one of the simplest algorithms to implement and to interpret [36], but its results are not satisfactory without introducing a large number of training areas, since, because of insufficient a priori information, it assumes an equal a priori probability for each land cover class [29]. Completely opposite are the MLAs, which comprise different approaches, such as artificial neural networks [37], support vector analysis (SVA) [38], and random forests (RF) [39]. Nevertheless, although these algorithms are efficient [40] and show more accurate results than the other conventional methods [29,41], MLAs are difficult to implement since, generally, a large number of parameters must be fixed [40]. Moreover, MLAs tend to over-fit data [40]. There are some exceptions, since each MLA shows peculiar traits and reveals different performances. In contrast to the other approaches included in the MLA group, SVA requires a smaller amount of training data [42] and RF does not over-fit data, due to the law of large numbers, and allows us to reduce the training dataset size with the consequent increment of overall error [43]. In contrast to the other methods, OBIA classification is based on the integration of spectral and geomorphological factors, which increases the accuracy of the resultant classification map [44]. Nevertheless, its outcomes look really promising if medium- or fine-resolution data are used as input [45].
Thus, none of the listed techniques allows us to generate optimal outcomes in all conditions. Therefore, the approach to apply should be selected considering multiple aspects, such as data type, spatial resolution, accuracy, operator skills, speed, classifier interpretability, and knowledge of ground truth. The authors of [13] showed that an index-based classification approach is efficient and effective for automatically extracting LC/LU information in a multitemporal and multisensor analysis perspective. The index-based approach involves the combination of two or more spectral bands in order to classify Earth’s features. Each cover type, indeed, shows a specific spectral signature, commonly recognized as its fingerprint, according to its ability to absorb, transmit, and reflect energy [28]. Thus, properly integrating particular wavelengths, distinctive of a specific element, allows us to detect LC/LU classes. Although several indices have been introduced in the literature, an index-based method suitable for classifying a whole study area using different Landsat satellite images is still lacking. In fact, each index is based on the integration of different spectral bands in order to address a specific need and to extract a certain LC/LU class [46,47,48,49].
The objective of this paper is to introduce a new classification algorithm to process Landsat images (Landsat Images Classification Algorithm: LICA) in the GEE environment to automatically extract LC/LU information. This method was implemented after analyzing the performance of 82 indices commonly applied in the literature to detect land cover classes, processing them in a more efficient way and increasing the accuracy of the final results. LICA consists of the computation of two new indices, SwirTirRed (STRed index) and SwiRed, introduced in this paper for the first time: The former is aimed at detecting water, mining areas, and sparse and dense vegetation, while the latter detects built-up areas. LICA reliability was tested on the pilot site of Siponto using 12 Landsat images, belonging to missions 5, 7, and 8, as input data. Those images were acquired in different seasons and years, covering a period of about 17 years, in order to demonstrate that the algorithm produces baseline information suitable for performing multitemporal, multiseason, and multisensor change detection analysis.

2. Materials and Methods

2.1. Study Area

The method was tested along the coastline of Siponto in the Apulian Region (Southern Italy), studying an area bordered by the Mediterranean relief of Gargano to the north, the marshland to the south, and the Candelaro river estuary and the Adriatic Sea to the west and east, respectively (Figure 1). The area, located about 2 km from the city center of Manfredonia, was selected as an experimental site both because of its historical relevance and the changes undergone by its landscape over the years. This choice allowed us to test the performance of the proposed algorithm and to assess its accuracy on a zone characterized, over the years, by different features, configurations, and issues, such as the erosion process.
Founded in 194 BC, Siponto became a crucial commercial and maritime hub during the Roman period, as proven by the Archaeological Park of Siponto. Its relevance gradually declined as a result of the depopulation process that followed the silting of its seaport and two devastating earthquakes in 1223 and 1255. From then on, as highlighted by [49], its territory was essentially earmarked for agricultural purposes, exploiting the dense network of irrigation ditches available in that environment. This trend was reversed only over the last few years as tourism started to develop, encouraged by the beauty of the local landscape and favorable climate conditions. These elements were not the only triggering factors of the soil erosion process suffered by this area. The construction of the new port in Margherita di Savoia in 1952 is, in fact, currently recognized as its main cause [50]. Although such problems are well known and about 80% of the shoreline conservation activities performed in the Apulia Region have addressed the investigated area, erosion issues are still not solved [50].

2.2. Landsat Image Classification Algorithm (LICA)

Classification methods allow us to generate thematic maps by assigning each pixel to its proper class. As proposed by [46], an index-based approach is efficient for quickly revealing LC/LU classes from satellite images and, therefore, in this case, it was preferred to other classification approaches. By combining the information of different spectral bands, spectral indices are able to bring out the capacity of Earth’s features to absorb, reflect, and transmit energy [51]. For this purpose, 82 consolidated indices, commonly applied in the literature, were computed to extract LC/LU information (Table 1). Twenty-six of them were selected to detect bare soil and built-up areas, while the remaining 56, called vegetation indices (VIs), were aimed at identifying vegetation. The conventional indices were tested to bring out the potential of the strongest and weakest bands in extracting land cover types by verifying their reliability in the area under investigation. While all the algorithms were easy to implement, just three of them provided accurate results, i.e., the Optimized Soil Adjusted Vegetation Index (OSAVI) [52] (Equation (1)), the Green Optimized Soil Adjusted Vegetation Index (GOSAVI) [53] (Equation (2)), and the Normalized Difference Bareness Index (version 2) (NDBaI2) [54] (Equation (3)).
OSAVI = 1.16 × (NIR − R) / (NIR + R + 0.16)    (1)
GOSAVI = (NIR − G) / (NIR + G + 0.16)    (2)
NDBaI2 = (SWIR1 − TIR1) / (SWIR1 + TIR1)    (3)
where NIR is the near-infrared band, R is the red band, G is the green band, and SWIR1 and TIR1 are the shortwave infrared and thermal infrared bands, respectively. The first two indices (OSAVI and GOSAVI) are included in the VIs group and, consequently, they are suitable for classifying dense and sparse vegetation. Conversely, NDBaI2 can correctly classify a higher number of categories: Built-up areas, mining areas, water, bare soil, and dense and sparse vegetation.
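As a minimal sketch, the three indices of Equations (1)–(3) could be computed in the GEE JavaScript API as follows. The band names refer to the Landsat 8 Collection 1 surface-reflectance product (B3 green, B4 red, B5 NIR, B6 SWIR1, B10 TIR1) and the scaling choices are assumptions about that product, not part of the authors' code.

```javascript
// Sketch: OSAVI, GOSAVI and NDBaI2 (Equations (1)-(3)) on a single Landsat 8
// Collection 1 surface-reflectance scene over the (approximate) study area.
var image = ee.Image(
  ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
    .filterBounds(ee.Geometry.Point([15.9, 41.6]))
    .filterDate('2019-03-01', '2019-03-31')
    .first());

// OSAVI and GOSAVI use the 0.16 constant on 0-1 reflectance, so the SR bands
// (stored as scaled integers) are multiplied by the 0.0001 scale factor.
var refl = image.select(['B3', 'B4', 'B5']).multiply(0.0001);
var green = refl.select('B3');
var red = refl.select('B4');
var nir = refl.select('B5');

// For NDBaI2 the SWIR1 and TIR1 bands are used as stored; depending on the
// product a consistent rescaling of the two bands may be needed (assumption).
var swir1 = image.select('B6').toFloat();
var tir1 = image.select('B10').toFloat();

var osavi = nir.subtract(red).multiply(1.16)
    .divide(nir.add(red).add(0.16)).rename('OSAVI');        // Equation (1)
var gosavi = nir.subtract(green)
    .divide(nir.add(green).add(0.16)).rename('GOSAVI');     // Equation (2)
var ndbai2 = swir1.subtract(tir1)
    .divide(swir1.add(tir1)).rename('NDBaI2');               // Equation (3)

Map.addLayer(osavi, {min: -1, max: 1}, 'OSAVI');
```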
Considering the number of LC/LU classes detected by each index and their best overall accuracy (Table 1), NDBaI2 appeared as the most reliable index and was consequently used as the starting point to develop the LICA procedure. NDBaI2 is based on the combination of SWIR1 and TIR1 (Equation (3)) and, therefore, this led us to believe that those bands should be the most essential to classify the whole study area. This consideration was also supported by the literature review, since LC/LU classes are strongly affected by TIR [48] and SWIR, usually applied to distinguish bare soil and built-up areas [55,56]. Moreover, SWIR also allows us to distinguish sparse and dense vegetation because of its dependency on the water content of leaves [50,51]. In addition, [57,58,59] emphasized the importance of the red band, since it is linked to the energy absorbed by chlorophyll. These data were also integrated with the information retrieved through the examination of the spectral signatures of each LC/LU category existing in the study area (Figure 2). The SWIR1 band showed a great difference among mining areas, water, and sparse and dense vegetation. On the contrary, TIR1 displayed different values among water, bare soil, mining, and built-up areas (Figure 2). In addition, Figure 2 shows the contribution of the red band as well in distinguishing water, bare soil, mining, and built-up areas.
Therefore, SWIR, TIR, and R were integrated to classify water, mining areas, and sparse and dense vegetation. Conversely, just SWIR and R were combined to detect built-up areas. The first index, called SwirTirRed index (STRed index), is reported in Equation (4). The second one, named SwiRed index, is described by Equation (5).
STRed index = (SWIR1 + R − TIR1) / (SWIR1 + R + TIR1)    (4)
SwiRed index = (SWIR1 − R) / (SWIR1 + R)    (5)
The workflow of Landsat Images Classification Algorithm (LICA) is reported in Figure 3.
LICA is generated by the sequential computation of the two new indices (STRed and SwiRed) on the outcome of a cloud masking procedure performed on atmospherically corrected Landsat images. Once their computation is completed, thresholds to identify each LC/LU class are applied (Table 2) and the resultant maps are merged.
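A minimal sketch of the two index computations (Equations (4) and (5)) is given below; as in the previous sketch, the band names (B4 red, B6 SWIR1, B10 TIR1) and the scene selection are assumptions about the Landsat 8 Collection 1 surface-reflectance product, and the thresholds of Table 2 are not reproduced here.

```javascript
// Sketch: STRed (Equation (4)) and SwiRed (Equation (5)) on a Landsat 8
// surface-reflectance scene; cloud masking is assumed to be applied beforehand.
var image = ee.Image(
  ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
    .filterBounds(ee.Geometry.Point([15.9, 41.6]))   // approximate study area
    .filterDate('2019-03-01', '2019-03-31')
    .first());

var red = image.select('B4').toFloat();
var swir1 = image.select('B6').toFloat();
var tir1 = image.select('B10').toFloat();

var stred = swir1.add(red).subtract(tir1)
    .divide(swir1.add(red).add(tir1)).rename('STRed');      // Equation (4)
var swired = swir1.subtract(red)
    .divide(swir1.add(red)).rename('SwiRed');               // Equation (5)

Map.addLayer(stred, {min: -1, max: 1}, 'STRed index');
Map.addLayer(swired, {min: -1, max: 1}, 'SwiRed index');
```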

2.3. Database Construction in GEE Platform

GEE (https://earthengine.google.com/) is a cloud computing environment designed and released by Google in the last few years to overcome desktop platforms’ limitations related to the storage and management of huge amounts of geospatial data [25]. The platform is characterized by a dedicated high-performance computing (HPC) infrastructure that provides an interactive development environment directly connected to the available open data, such as the Landsat and Sentinel image archives, as well as digital elevation models and vector, socio-economic, topographic, and climate layers [20]. Therefore, these data can be accessed directly in the GEE platform, both in raw and preprocessed format, minimizing acquisition and processing time. To meet the purpose of our research, 12 radiometrically and atmospherically corrected scenes, covering a period of 17 years, from 2002 to 2019, belonging to Landsat missions 5, 7, and 8 and referring to the experimental area of Siponto, were selected (Table 3). In particular, four images were collected for each mission, each of them belonging to a different season (winter, spring, summer, and fall). The collected images were provided in the Universal Transverse Mercator (UTM) projection and the World Geodetic System (WGS84) datum.
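A sketch of how such a selection could be expressed in the GEE JavaScript API is reported below; the Collection 1 surface-reflectance identifiers, the point coordinates, and the date windows are assumptions, while the cloud cover filter anticipates the 20% criterion described below.

```javascript
// Sketch: selecting atmospherically corrected Landsat surface-reflectance
// scenes over the study area with less than 20% cloud cover. One such filter
// would be built per mission and season; coordinates are approximate.
var roi = ee.Geometry.Point([15.9, 41.6]);

function candidateScenes(collectionId, start, end) {
  return ee.ImageCollection(collectionId)
      .filterBounds(roi)
      .filterDate(start, end)
      .filter(ee.Filter.lt('CLOUD_COVER', 20));
}

var landsat5 = candidateScenes('LANDSAT/LT05/C01/T1_SR', '2011-01-01', '2011-12-31');
var landsat7 = candidateScenes('LANDSAT/LE07/C01/T1_SR', '2002-01-01', '2002-12-31');
var landsat8 = candidateScenes('LANDSAT/LC08/C01/T1_SR', '2019-01-01', '2019-12-31');

print('Landsat 5 candidate scenes:', landsat5.size());
```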
As shown in Table 3, cloud cover information was also considered: Only scenes characterized by a cloud cover value lower than 20% were taken into account in the data selection phase. Where needed, clouds were subsequently masked through the adoption of proper filters, based on the exploitation of the information provided by the quality assessment (QA) band, already implemented in GEE, as suggested by [122] and [123]. In this way, the cloudy pixels were rendered transparent and, therefore, excluded from further algorithm implementation.
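A minimal sketch of such a QA-based mask, assuming the Collection 1 surface-reflectance pixel_qa band (bit 3 = cloud shadow, bit 5 = cloud), is given below; it reflects the general approach cited in [122,123] rather than the authors' exact filter.

```javascript
// Sketch: masking cloudy pixels of a Landsat SR scene with the pixel_qa band
// (Collection 1 SR: bit 3 = cloud shadow, bit 5 = cloud).
function maskClouds(image) {
  var qa = image.select('pixel_qa');
  var clear = qa.bitwiseAnd(1 << 3).eq(0)            // no cloud shadow
      .and(qa.bitwiseAnd(1 << 5).eq(0));             // no cloud
  return image.updateMask(clear);                    // cloudy pixels become transparent
}

// Applied to every selected scene, e.g. over a filtered collection:
var masked = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
    .filterBounds(ee.Geometry.Point([15.9, 41.6]))
    .filterDate('2019-03-01', '2019-03-31')
    .map(maskClouds);
```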
On the contrary, selected images were not orthorectified since the geometric accuracy provided by USGS was satisfactory. Therefore, the developed classification algorithm was directly computed on the outcome of the cloud cover masking procedure, as described in the workflow reported in Figure 3. Landsat archive analysis, cloud masking process, and all the further processing phases were performed on the cloud, exploiting GEE interactive environment.

2.4. Implementation of Classification Indices and LICA in GEE

Once the images were downloaded and preprocessed, the JavaScript application programming interface (API) implemented in GEE was used to integrate the spectral bands and compute the indices commonly used in the literature to classify satellite images. The calculated indices are described in detail in Section 2.2. The documentation for combining spectral bands is available at https://developers.google.com/earth-engine (accessed 2 September 2019). Subsequently, the proposed workflow (Figure 3) for automatically classifying Landsat images was implemented and the LICA maps were generated. Class distinctions were obtained using the LICA thresholds (Table 2).
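The final labeling step could be sketched as below: each pixel is assigned a class code by thresholding the two index images and the partial maps are merged. The numeric thresholds here are placeholders standing in for the values of Table 2, which are not reproduced in this excerpt, and the class codes are illustrative.

```javascript
// Sketch: turning the STRed and SwiRed index images into a single class map.
// Threshold values below are placeholders for those reported in Table 2.
var image = ee.Image(
  ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
    .filterBounds(ee.Geometry.Point([15.9, 41.6]))
    .filterDate('2019-03-01', '2019-03-31')
    .first());
var red = image.select('B4').toFloat();
var swir1 = image.select('B6').toFloat();
var tir1 = image.select('B10').toFloat();
var stred = swir1.add(red).subtract(tir1).divide(swir1.add(red).add(tir1));
var swired = swir1.subtract(red).divide(swir1.add(red));

// Class codes (illustrative): 1 water, 2 dense vegetation, 3 sparse vegetation,
// 4 mining areas, 5 built-up areas.
var classes = ee.Image(0)
    .where(stred.lt(-0.2), 1)
    .where(stred.gte(-0.2).and(stred.lt(0.0)), 2)
    .where(stred.gte(0.0).and(stred.lt(0.2)), 3)
    .where(stred.gte(0.2), 4)
    .where(swired.gt(0.3), 5)          // built-up areas from the SwiRed step
    .rename('class');

Map.addLayer(classes.randomVisualizer(), {}, 'LICA classes (sketch)');
```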

2.5. Strategies to Evaluate the Accuracy

A multitemporal reference dataset based on stratified random sampling points was generated to assess the accuracy of the proposed approach [124,125]. A total of 11,245 testing pixels, proportionally distributed among the classes according to their extent, were selected. Therefore, 1328 pixels were used to verify the accuracy of water, 492 pixels for built-up areas, 151 pixels for mining areas, 3165 pixels for mining areas, and 755 and 924 pixels were used to verify the accuracy of the sparse and dense vegetation categories, respectively. Subsequently, a manual interpretation was performed to label the samples according to their allocation. Samples were overlaid on the corresponding original Landsat data, manually interpreted in order to detect land cover information, and assigned to a specific class. This procedure was separately implemented on each resultant classification map.
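As an illustration, a stratified sample could be drawn directly in GEE from the classified image; the per-class counts below follow the figures quoted above, while the class codes, band name, and region polygon are assumptions carried over from the earlier sketches.

```javascript
// Sketch: stratified random sampling of the LICA class map for validation.
// 'classes' is the single-band class image from the Section 2.4 sketch; the
// rectangle is only a rough stand-in for the actual study-area polygon.
var studyArea = ee.Geometry.Rectangle([15.85, 41.55, 16.00, 41.67]);

var samples = classes.stratifiedSample({
  numPoints: 500,                          // default per-class count (placeholder)
  classBand: 'class',
  classValues: [1, 2, 3, 4, 5],            // water, dense veg., sparse veg., mining, built-up
  classPoints: [1328, 924, 755, 151, 492], // per-class counts quoted in the text
  region: studyArea,
  scale: 30,
  seed: 42,
  geometries: true                         // keep point geometries for photo-interpretation
});
print('Validation samples:', samples.size());
```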
The metrics of overall accuracy (OA), producer’s accuracy (PA), and user’s accuracy (UA) were next computed to perform a per-pixel accuracy assessment of classification procedure outcomes [29,126,127,128,129]. OA, PA, and UA showed a value between 0 and 1: The higher the values, the better the accuracy.
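The metrics themselves could then be obtained from GEE's built-in confusion-matrix utilities, assuming each validation feature carries a photo-interpreted 'reference' label and a 'classified' value sampled from the map (both property names are illustrative, not the authors' schema).

```javascript
// Sketch: per-pixel accuracy metrics from the labeled validation samples.
// 'labeled' is the sample FeatureCollection after manual interpretation; each
// feature is assumed to hold 'reference' and 'classified' properties.
var matrix = labeled.errorMatrix('reference', 'classified');
print('Confusion matrix:', matrix);
print('Overall accuracy (OA):', matrix.accuracy());
print("Producer's accuracy (PA):", matrix.producersAccuracy());
print("User's accuracy (UA):", matrix.consumersAccuracy());
```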
Finally, the performance of the introduced algorithm was compared to the one achieved by each index commonly applied in literature to verify its advantages and disadvantages.

3. Results

3.1. Classification Results

This section presents the classification outcomes obtained through the application of the indices consolidated in the literature (Figure 4, Figure 5 and Figure 6) and the proposed LIC algorithm (Figure 7 and Figure 8). The traditional indices did not show satisfactory results, except for OSAVI, GOSAVI, and NDBaI2, which are presented here. Moreover, since their performance was similar for all the Landsat missions considered, for the sake of brevity, just the outcomes generated from the processing of Landsat 8 (17 March 2019) are reported.
OSAVI algorithm distinguishes three classes (water, and dense and sparse vegetation) (Figure 4). Nevertheless, the classification was not accurate since some misclassified pixels could be pinpointed between dense and sparse vegetation, as highlighted on the right side of Figure 4. This means that it cannot correctly detect different types of vegetation, its density, or health status. This is confirmed by analyzing the accuracy of its performance, reported in the following section (see Section 3.2).
GOSAVI algorithm demonstrated a similar trend, as it could only distinguish three classes (water, and dense and sparse vegetation) as well. Like OSAVI, it presented some misclassified pixels, reported on the right side of Figure 5, yet it did not show problems in classifying dense and sparse vegetation. This improvement was due to the introduction of a green band in the OSAVI computation to register the information of leaf pigments. The observed issues were related to water detection. Its classification accuracy is reported in Section 3.2.
In contrast to OSAVI and GOSAVI, NDBaI2 allowed us to detect more classes: In addition to water and dense and sparse vegetation, mining areas and built-up areas were also distinguished (Figure 6). Despite the larger number of detected classes, its accuracy was lower and some issues were observed on the resultant map: Built-up areas were generally classified as mining areas, whereas dense vegetation was confused with sparse vegetation and water (Figure 6). This was confirmed by its confusion matrix (see Section 3.2).
As described in Section 2.2, LICA consisted of two different steps: The former intended to classify water, mining areas, and dense and sparse vegetation (Figure 7, Figure 8 and Figure 9); the latter aimed at identifying built-up areas (Figure 10, Figure 11 and Figure 12). The first phase was performed by applying the new STRed index, while in the second phase the novel SwiRed index was implemented. Thus, the resultant maps of the proposed algorithm provided information on the same number of classes retrieved by NDBaI2 (Figure 13). However, LICA showed higher accuracy than NDBaI2, as demonstrated through the confusion matrix described in the following section, since misclassified pixels were drastically reduced.

3.2. Accuracy Assessment

Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 provide the OA, UA, and PA of the resultant classification maps obtained through the computation of OSAVI, GOSAVI, and NDBaI2 on the 12 atmospherically corrected Landsat images. On the contrary, just the best OA related to the outcomes generated by the remaining 78 indices is shown in Table 1. Although the best OA value was quite high for the three indices (88.91, 89.89, and 82.59, respectively), their accuracy matrices bring out the difficulties encountered in classifying the study area; e.g., OSAVI incorrectly identified sparse vegetation pixels and, similarly, NDBaI2 could detect just 30% of the pixels included in built-up areas. Therefore, although their results appear satisfactory, they cannot be used to extract accurate information related to the land cover of the experimental area.
Table 13, Table 14 and Table 15 describe the UA, PA, and OA of the STRed index computed on the images acquired by Landsat 7, 5, and 8, respectively. The STRed index performance was satisfactory, since the OA was higher than 80.95 for all the selected images. Moreover, UA and PA showed satisfactory values for all the data as well, regardless of the sensor and period under investigation. Indeed, their values were higher than 62.93, with the exception of the UA (54.57) of the dense vegetation class extracted from the data acquired on 21 January 2002 (Landsat 7) (Table 13).
The UA, PA, and OA of the SwiRed index computed on the images acquired by Landsat 7, 5, and 8 are shown in Table 16, Table 17 and Table 18, respectively. The SwiRed index shows satisfactory outcomes as well. Indeed, the observed OA was higher than 85%, while UA and PA were on average equal to 72.56, except for the built-up areas extracted from Landsat 5 on 25 August 2011 (58.21).

4. Discussion

This paper proposed a new classification algorithm to automatically extract LC/LU information from Landsat satellite open data: the Landsat Images Classification Algorithm (LICA). Although no classification method shows optimal performance in all conditions, the index-based method is efficient and robust in detecting LC/LU classes in a short time using satellite images provided by several sources, as highlighted by [32]. Therefore, the index classification method was selected as the benchmark approach to develop the Landsat Images Classification Algorithm introduced in this paper. LICA integrates two new indices, namely STRed and SwiRed, obtained by combining ad hoc spectral bands in order to classify the whole study area. As shown in Section 2.2, the selected bands were chosen by examining the literature describing the role of each spectral band [44,45,46], the performance of 82 widely used indices (listed in Table 1), and the specific spectral signature of each class existing in the experimental area under investigation. The former index integrates the SWIR, TIR, and red bands (Equation (4)), identifying water, mining areas, and sparse and dense vegetation (Figure 7); the latter, instead, combines the SWIR and red bands (Equation (5)), distinguishing built-up areas (Figure 8). LICA was tested on 12 satellite images related to the experimental site of Siponto, a historic municipality in the Apulian Region, Southern Italy (Figure 1). One image for each season was selected from three Landsat missions (5, 7, and 8), for a total of 12 images. The 82 conventional indices were applied to the study area as well. Among them, just three traditional indices showed quite satisfactory outcomes: OSAVI (Figure 4), GOSAVI (Figure 5), and NDBaI2 (Figure 6). However, the first two indices (OSAVI and GOSAVI) could just distinguish water and sparse and dense vegetation, while the third, in addition to those, also identified built-up, bare soil, and mining areas. Their outcomes are comparable with those obtained in other research activities. OSAVI and GOSAVI belong to the vegetation indices (VIs) category and, therefore, they are aimed at identifying the vegetation class [52,53]. The VIs group is composed of many indices, which must be chosen according to the environmental features, since each of them is suitable for meeting a specific purpose [130]. Commonly, VIs combining visible and NIR bands show a better sensitivity in detecting green areas [130]. This paper confirmed these assumptions; indeed, both GOSAVI and OSAVI integrate visible and NIR bands. Moreover, GOSAVI showed a higher accuracy than OSAVI, thanks to the introduction of the green band, which is more sensitive to the presence and vitality of vegetation [130]. Conversely, NDBaI was proposed to discriminate different LC/LU categories, even if it showed some difficulties in recognizing bare rock areas and in distinguishing agricultural from urban areas in zones where the urban heat phenomenon is serious [131]. Therefore, it was partially modified and NDBaI2 was introduced to improve its performance. Here, both of them were able to classify the whole study area, even if the best OA of NDBaI2 (82.59) was higher than the NDBaI OA (67.93) (Table 1). Nevertheless, NDBaI2’s accuracy was strongly influenced by its difficulties in distinguishing built-up areas and sparse vegetation (Table 10, Table 11 and Table 12) in all the collected images.
The Automated Water Extraction Index (AWEI) was able to discriminate the different kinds of categories, as NDBaI and NDBaI2 did, but its accuracy was considerably lower than the NDBaI2 OA value. The worst performance was shown by the Misra Yellow Vegetation Index (MYVI) [93] and the Triangular Greenness Index (TGI) [112], since they were not able to discriminate any LC/LU category in the experimental site (Table 1). MYVI is based on empirical methods that do not consider atmosphere-soil-vegetation interactions. Therefore, it was particularly affected by soil brightness, encountering some difficulties in extracting land cover information [130]. Although TGI was proposed to assess vegetation zones, it was strongly affected by the scale and by chlorophyll content, showing promising results only when high-resolution images are applied as input. This resulted in its inability to identify vegetation using the medium-resolution data provided by the Landsat missions [132]. Moreover, although the Automated Water Extraction Index (shadow version) (AWEIsh) and the Ashburn Vegetation Index (AVI) had really high overall accuracies, equal to 91.46 and 99.78, respectively, they could detect only a few LC/LU classes: The former detected water and built-up areas, the latter only water.
In view of their performance, NDBaI2 was chosen as the basis for developing the new algorithm. Thus, NDBaI2 and the proposed algorithm were the only ones able to extract the maximum number of LC/LU classes with a high overall accuracy. Moreover, LICA went beyond NDBaI2’s limitations: Both STRed and SwiRed showed a higher OA than NDBaI2, solving the issues encountered by the latter in classifying built-up areas and sparse vegetation (Table 13, Table 14, Table 15, Table 16, Table 17 and Table 18). This was due to the introduction of the R band, required to improve index performance in detecting vegetated areas, since R is sensitive to the energy absorbed by chlorophyll [52]. Moreover, the SWIR and TIR1 bands were also combined to distinguish bare soil and built-up areas [53]. This resulted in optimal OA values for STRed and SwiRed, equal to 94.71% and 97.76%, respectively. Besides maximizing the number of categories to be detected and improving classification accuracy, LICA was designed to be applied to all Landsat missions, equipped with different sensors, so that multisensor, multitemporal, and multiseason analyses, which are essential in environmental monitoring and planning management, can be performed. Moreover, users can apply the whole algorithm or just one of the two proposed indices, according to their needs.
To automatize LC/LU extraction, the algorithm was implemented in the GEE environment, a platform recently designed by Google (https://earthengine.google.com/). Thanks to its parallel processing capacity, already shown in previous research activities [20], the LIC algorithm can be run in a few minutes, even if computation times increase with the amount of data to be handled. Therefore, using GEE allowed us to overcome the desktop system limitations due to the excessive processing time needed to handle geospatial big data. This paper confirmed the great potential of the GEE platform in processing geospatial big data, as already shown in previous research works [18,19,20,113].

5. Conclusions

In this study, an automated algorithm for extracting land cover information from multitemporal and multisensor open data in the GEE platform was introduced. The procedure does not need any external training dataset, whose collection is time consuming and may be affected by human errors. On the contrary, LICA is based on the integration of two novel indices (STRed and SwiRed), which allowed us to analyze land cover from Landsat images, maximizing the number of classes to be extracted and increasing classification accuracy compared to the conventional indices commonly applied in the literature. Landsat images were selected to test LICA in order to exploit the huge amount of open data available since 1972 and to ensure its reliability in multitemporal and multisensor analyses, providing information that could be used to perform land cover change analyses, which are essential to guide future planning strategies.
All computational steps were implemented in the GEE cloud computing platform, thereby avoiding the necessity of excessive desktop processing power to handle geospatial big data and automating the whole procedure. Therefore, the integration of the LIC algorithm and GEE environment allowed us to quickly extract accurate land cover information. Thus, adopting the proposed method helps to provide more contemporary information while also reducing costs, acquisition, and processing times.

Author Contributions

Conceptualization, E.T. and A.C.; methodology, A.C. and E.T.; data processing, C.M. and A.C.; validation, A.C.; writing—original draft preparation, A.C.; writing—review and editing, A.C. and E.T.; supervision, E.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive and valuable suggestions on the earlier drafts of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Waldner, F.; Fritz, S.; Gregorio, D.A.; Defourny, P. Mapping Priorities to Focus Cropland Mapping Activities: Fitness Assessment of Existing Global, Regional and National Cropland Maps. Remote Sens. 2015. [Google Scholar] [CrossRef] [Green Version]
  2. Potere, D.; Schneider, A.; Angel, S.; Civco, D.L. Mapping urban areas on a global scale: Which of the eight maps now available is more accurate? Int. J. Remote Sens. 2009, 30, 6531–6558. [Google Scholar] [CrossRef]
  3. Schneider, A.; Friedl, M.A.; Potere, D. Mapping global urban areas using MODIS 500-m data: New methods and datasets based on ‘urban ecoregions’. Remote Sens. Environ. 2010, 114, 1733–1746. [Google Scholar] [CrossRef]
  4. Fritz, S.; Bartholome, E.; Belward, A.; Hartley, A.; Stibig, H.; Eva, H.; Mayaux, P.; Bartalev, S.; Latifovic, R.; Kolmert, S. Harmonisation, Mosaicking and Production of the Global Land Cover 2000 Database; European Commission: Brussels, Belgium, 2003. [Google Scholar]
  5. Bartholomé, E.; Belward, A.S. GLC2000: A new approach to global land cover mapping from Earth observation data. Int. J. Remote Sens. 2005, 26, 1959–1977. [Google Scholar] [CrossRef]
  6. Arino, O.; Gross, D.; Ranera, F.; Bourg, L.; Leroy, M.; Bicheron, P.; Latham, J.; Gregorio, A.; Brockman, C.; Witt, R. GlobCover: ESA Service for Global Land Cover from MERIS. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 2412–2415. [Google Scholar]
  7. Bicheron, P.; Defourny, P.; Brockmann, C.; Schouten, L. Globcover: Products Description and Validation Report. 2008. Available online: https://core.ac.uk/download/pdf/11773712.pdf (accessed on 23 November 2019).
  8. Friedl, M.A.; Sulla-Menashe, D.; Tan, B.; Schneider, A.; Ramankutty, N.; Sibley, A.; Huang, X. MODIS Collection 5 global land cover: Algorithm refinements and characterization of new datasets. Remote Sens. Environ. 2010, 114, 168–182. [Google Scholar] [CrossRef]
  9. Loveland, T.R.; Reed, B.C.; Brown, J.F.; Ohlen, D.O.; Zhu, Z.; Yang, L.; Merchant, J.W. Development of a global land cover characteristics database and IGBP DISCover from 1 km AVHRR data. Int. J. Remote Sens. 2010, 21, 1303–1330. [Google Scholar] [CrossRef]
  10. Fritz, S.; See, L.; McCallum, I.; Schill, C.; Obersteiner, M.; van der Velde, M.; Boettcher, H.; Havlík, P.; Achard, F. Highlighting continued uncertainty in global land cover maps for the user community. Environ. Res. Lett. 2011, 6. [Google Scholar] [CrossRef]
  11. Ramankutty, N.; Evan, A.T.; Monfreda, C.; Foley, J.A. Farming the planet: 1. Geographic distribution of global agricultural lands in the year 2000. Glob. Biogeochem. Cycles 2008, 22. [Google Scholar] [CrossRef]
  12. Hansen, M.; Potapov, P.; Moore, R.; Hancher, M.; Turubanova, S.; Tyukavina, D.; Stehman, S.; Goetz, S.; Loveland, T.; Kommareddy, A. Observing the forest and the trees: The first high resolution global maps of forest cover change. Science 2013, 342, 850–853. [Google Scholar] [CrossRef] [Green Version]
  13. Patel, N.N.; Angiuli, E.; Gamba, P.; Gaughan, A.; Lisini, G.; Stevens, F.R.; Trianni, G. Multitemporal settlement and population mapping from Landsat using Google Earth Engine. Int. J. Appl. Earth Obs. Geoinf. 2015, 35, 199–208. [Google Scholar] [CrossRef] [Green Version]
  14. Gong, P.; Wang, J.; Yu, L.; Zhao, Y.; Zhao, Y.; Liang, L.; Niu, Z.; Huang, X.; Fu, H.; Liu, S. Finer resolution observation and monitoring of global land cover: First mapping results with Landsat TM and ETM+ data. Int. J. Remote Sens. 2013, 34, 2607–2654. [Google Scholar] [CrossRef] [Green Version]
  15. Chen, J.; Chen, J.; Liao, A.; Cao, X.; Chen, L.; Chen, X.; He, C.; Han, G.; Peng, S.; Lu, M. Global land cover mapping at 30 m resolution: A POK-based operational approach. ISPRS J. Photogramm. Remote Sens. 2015, 103, 7–27. [Google Scholar] [CrossRef] [Green Version]
  16. Liu, X.; Hu, G.; Chen, Y.; Li, X.; Xu, X.; Li, S.; Wang, S. High-resolution multi-temporal mapping of global urban land using Landsat images based on the Google Earth Engine Platform. Remote Sens. Environ. 2018, 209, 227–239. [Google Scholar] [CrossRef]
  17. Griffiths, G.H.; Lee, J. Landscape pattern and species richness; regional scale analysis from remote sensing. Int. J. Remote Sens. 2000, 21, 2685–2704. [Google Scholar] [CrossRef]
  18. Potapov, P.; Turubanova, S.; Hansen, M.C. Regional-scale boreal forest cover and change mapping using Landsat data composites for European Russia. Remote Sens. Environ. 2011, 115, 548–561. [Google Scholar] [CrossRef]
  19. Aquilino, M.; Tarantino, E.; Fratino, U. Multi-temporal land use analysis of AN ephemeral river area using an artificial neural network approach on landsat imagery. ISPRS Int. Arch. Photogramm. 2013, 1, 167–173. [Google Scholar] [CrossRef] [Green Version]
  20. Novelli, A.; Tarantino, E.; Caradonna, G.; Apollonio, C.; Balacco, G.; Piccinni, F. Improving the ANN Classification Accuracy of Landsat Data Through Spectral Indices and Linear Transformations (PCA and TCT) Aimed at LU/LC Monitoring of a River Basin. In International Conference on Computational Science and Its Applications; Springer: Cham, Switzerland, 2016; pp. 420–432. [Google Scholar]
  21. Li, W.; Dong, R.; Fu, H.; Wang, J.; Yu, L.; Gong, P. Integrating Google Earth imagery with Landsat data to improve 30-m resolution land cover mapping. Remote Sens. Environ. 2020, 237, 111563. [Google Scholar] [CrossRef]
  22. Mohammady, M.; Moradi, H.R.; Zeinivand, H.; Temme, A.J.A.M. A comparison of supervised, unsupervised and synthetic land use classification methods in the north of Iran. Int. J. Environ. Sci. Technol. 2015, 12, 1515–1526. [Google Scholar] [CrossRef] [Green Version]
  23. Andernach, M.; Wyss, D.; Kappas, M. An Evaluation of the Land Cover Classification Product Sentinel 2 Prototype Land Cover 20 m Map of Africa 2016 for Namibia. Namibian J. Environ. 2020, 4. [Google Scholar] [CrossRef]
  24. Stromann, O.; Nascetti, A.; Yousif, O.; Ban, Y. Dimensionality Reduction and Feature Selection for Object-Based Land Cover Classification based on Sentinel-1 and Sentinel-2 Time Series Using Google Earth Engine. Remote Sens. 2020, 12, 76. [Google Scholar] [CrossRef] [Green Version]
  25. Kumar, L.; Mutanga, O. Google Earth Engine Applications Since Inception: Usage, Trends, and Potential. Remote Sens. 2018, 10, 1509. [Google Scholar] [CrossRef] [Green Version]
  26. Gorelick, N.; Hancher, M.; Dixon, M.; Ilyushchenko, S.; Thau, D.; Moore, R. Google Earth Engine: Planetary-scale geospatial analysis for everyone. Remote Sens. Environ. 2017, 202, 18–27. [Google Scholar] [CrossRef]
  27. Chen, Y.; Gong, P. Clustering based on eigenspace transformation—CBEST for efficient classification. ISPRS J. Photogramm. Remote Sens. 2013, 83, 64–80. [Google Scholar] [CrossRef]
  28. Susaki, J.; Shibasaki, R. Maximum likelihood method modified in estimating a prior probability and in improving misclassification errors. Int. Arch. Photogramm. Remote Sens. 2000, 33, 1499–1504. [Google Scholar]
  29. Abdi, A.M. Land cover and land use classification performance of machine learning algorithms in a boreal landscape using Sentinel-2 data. GIScience Remote Sens. 2020, 57, 1–20. [Google Scholar] [CrossRef] [Green Version]
  30. Capolupo, A.; Kooistra, L.; Boccia, L. A novel approach for detecting agricultural terraced landscapes from historical and contemporaneous photogrammetric aerial photos. Int. J. Appl. Earth Obs. Geoinf. 2018, 73, 800–810. [Google Scholar] [CrossRef]
  31. Crocetto, N.; Tarantino, E. A class-oriented strategy for features extraction from multidate ASTER imagery. Remote Sens. 2009, 1, 1171–1189. [Google Scholar] [CrossRef] [Green Version]
  32. Tarantino, E.; Figorito, B. Mapping rural areas with widespread plastic covered vineyards using true color aerial data. Remote Sens. 2012, 4, 1913–1928. [Google Scholar] [CrossRef] [Green Version]
  33. Novelli, A.; Aguilar, M.A.; Nemmaoui, A.; Aguilar, F.J.; Tarantino, E. Performance evaluation of object based greenhouse detection from Sentinel-2 MSI and Landsat 8 OLI data: A case study from Almería. Int. J. Appl. Earth Obs. Geoinf. 2016, 52, 403–411. [Google Scholar] [CrossRef] [Green Version]
  34. Gómez, C.; White, J.C.; Wulder, M.A. Optical remotely sensed time series data for land cover classification: A review. ISPRS J. Photogramm. Remote Sens. 2016, 116, 55–72. [Google Scholar] [CrossRef] [Green Version]
  35. Pal, M.; Rasmussen, T.; Porwal, A. Optimized Lithological Mapping from Multispectral and Hyperspectral Remote Sensing Images Using Fused Multi-Classifiers. Remote Sens. 2020, 12, 177. [Google Scholar] [CrossRef] [Green Version]
  36. Chen, J.; Gong, P.; He, C.; Pu, R.; Shi, P. Land-use/land-cover change detection using improved change-vector analysis. Photogramm. Eng. Remote Sens. 2003, 69, 369–380. [Google Scholar] [CrossRef] [Green Version]
  37. Mas, J.F.; Flores, J.J. The application of artificial neural networks to the analysis of remotely sensed data. Int. J. Remote Sens. 2008, 29, 617–663. [Google Scholar] [CrossRef]
  38. Mountrakis, G.; Im, J.; Ogole, C. Support vector machines in remote sensing: A review. ISPRS J. Photogramm. Remote Sens. 2011, 66, 247–259. [Google Scholar] [CrossRef]
  39. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  40. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef] [Green Version]
  41. Srivastava, P.K.; Han, D.; Ramirez, M.R.; Islam, T. Machine learning techniques for downscaling SMOS satellite soil moisture using MODIS land surface temperature for hydrological application. Water Resour. Manag. 2013, 27, 3127–3144. [Google Scholar] [CrossRef]
  42. Shao, Y.; Lunetta, R.S. Comparison of support vector machine, neural network, and CART algorithms for the land-cover classification using limited training data points. ISPRS J. Photogramm. Remote Sens. 2012, 70, 78–87. [Google Scholar] [CrossRef]
  43. Burnett, C.; Blaschke, T. A multi-scale segmentation/object relationship modelling methodology for landscape analysis. Ecol. Model. 2003, 168, 233–249. [Google Scholar] [CrossRef]
  44. Whiteside, T.G.; Boggs, G.S.; Maier, S.W. Comparing object-based and pixel-based classifications for mapping savannas. Int. J. Appl. Earth Obs. Geoinf. 2011, 136, 884–893. [Google Scholar] [CrossRef]
  45. Homer, C.; Huang, C.; Yang, L.; Wylie, B.; Coan, M. Development of a 2001 national land cover database for the United States. Photogramm. Eng. Remote Sens. 2004, 70, 829–840. [Google Scholar] [CrossRef] [Green Version]
  46. Anchang, J.Y.; Ananga, E.O.; Pu, R. An efficient unsupervised index based approach for mapping urban vegetation from IKONOS imagery. Int. J. Appl. Earth Observ. Geoinform. 2016, 50, 211–220. [Google Scholar] [CrossRef]
  47. Tucker, C.J. Red and photographic infrared linear combinations for monitoring vegetation. Remote Sens. Environ. 1979, 8, 127–150. [Google Scholar] [CrossRef] [Green Version]
  48. Kazakis, N.; Kougias, I.; Patsialis, T. Assessment of flood hazard areas at a regional scale using an index-based approach and Analytical Hierarchy Process: Application in Rhodope–Evros region, Greece. Sci. Total Environ. 2015, 538, 555–563. [Google Scholar] [CrossRef] [PubMed]
  49. De Martini, P.M.; Burrato, P.; Pantosti, D.; Maramai, A.; Graziani, L.; Abramson, H. Identification of tsunami deposits and liquefaction features in the Gargano area (Italy): Paleoseismological implication. Ann. Geophys. 2003, 45. [Google Scholar] [CrossRef]
  50. Petrillo, A.F. Aree costiere: Attuali e future criticità. Geologi e Territorio. Periodico dell’Ordine dei Geologi della Puglia 2007, 3–4, 117–130. [Google Scholar]
  51. Yusuf, B.; He, Y. Application of hyperspectral imaging sensor to differentiate between the moisture and reflectance of healthy and infected tobacco leaves. Afr. J. Agric. Res. 2011, 6, 6267–6280. [Google Scholar]
  52. Rondeaux, G.; Steven, M.; Baret, F. Optimization of soil-adjusted vegetation indices. Remote Sens. Environ. 1996, 55, 95–107. [Google Scholar] [CrossRef]
  53. Sripada, R.P.; Heiniger, R.W.; White, J.G.; Meijer, A.D. Aerial color infrared photography for determining early in-season nitrogen requirements in corn. Agron. J. 2006, 98, 968–977. [Google Scholar] [CrossRef]
  54. Li, S.; Chen, X. A new bare-soil index for rapid mapping developing areas using Landsat 8 data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 40, 139. [Google Scholar] [CrossRef] [Green Version]
  55. Southworth, J. An assessment of Landsat TM band 6 thermal data for analysing land cover in tropical dry forest regions. Int. J. Remote Sens. 2004, 25, 689–706. [Google Scholar] [CrossRef]
  56. Rouse, J., Jr.; Haas, R.H.; Schell, J.A.; Deering, D.W. Monitoring Vegetation Systems in the Great Plains with ERTS; NASA: Washington, DC, USA, 1974.
  57. Brivio, P.; Lechi, G.; Zilioli, E. Principi e Metodi di Telerilevamento; CittaStudi: Milan, Italy, 2006; pp. 1–525. [Google Scholar]
  58. Capolupo, A.; Kooistra, L.; Berendonk, C.; Boccia, L.; Suomalainen, J. Estimating plant traits of grasslands from UAV-acquired hyperspectral images: A comparison of statistical approaches. ISPRS Int. J. Geo-Inf. 2015, 4, 2792–2820. [Google Scholar] [CrossRef]
  59. Lyon, J.G.; Yuan, D.; Lunetta, R.S.; Elvidge, C.D. A change detection experiment using vegetation indices. Photogramm. Eng. Remote Sens. 1998, 64, 143–150. [Google Scholar]
  60. Karnieli, A.; Kaufman, Y.J.; Remer, L.; Wald, A. AFRI—Aerosol free vegetation index. Remote Sens. Environ. 2001, 77, 10–21. [Google Scholar] [CrossRef]
  61. Kaufman, Y.J.; Tanre, D. Atmospherically resistant vegetation index (ARVI) for EOS-MODIS. IEEE Trans. Geosci. Remote Sens. 1992, 30, 261–270. [Google Scholar] [CrossRef]
  62. Jackson, R.D.; Slater, P.N.; Pinter, P.J. Adjusting the tasselled-cap brightness and greenness factors for atmospheric path radiance and absorption on a pixel by pixel basis. Int. J. Remote Sens. 1983, 2, 313–323. [Google Scholar] [CrossRef]
  63. Ashburn, P. The Vegetative Index Number and Crop Identification. In Proceedings of the Technical Session of the LACIE Symposium, Houston, TX, USA, 23–26 October 1978; pp. 843–856. [Google Scholar]
  64. Feyisa, G.L.; Meilby, H.; Fensholt, R.; Proud, S.R. Automated Water Extraction Index: A new technique for surface water mapping using Landsat imagery. Remote Sens. Environ. 2014, 140, 23–35. [Google Scholar] [CrossRef]
  65. Bouzekri, S.; Lasbet, A.A.; Lachehab, A. A new spectral index for extraction of built-up area using Landsat-8 data. J. Indian Soc. Remote Sens. 2015, 43, 867–873. [Google Scholar] [CrossRef]
  66. Deng, C.; Wu, C. BCI: A biophysical composition index for remote sensing of urban environments. Remote Sens. Environ. 2012, 127, 247–259. [Google Scholar] [CrossRef]
  67. Bouhennache, R.; Bouden, T.; Taleb-Ahmed, A.; Cheddad, A. A new spectral index for the extraction of built-up land features from Landsat 8 satellite imagery. Geocarto Int. 2019, 34, 1531–1551. [Google Scholar] [CrossRef]
  68. Luo, N.; Wan, T.; Hao, H.; Lu, Q. Fusing high-spatial-resolution remotely sensed imagery and OpenStreetMap data for land cover classification over urban areas. Remote Sens. 2019, 11, 88. [Google Scholar] [CrossRef] [Green Version]
  69. Kaimaris, D.; Patias, P. Identification and area measurement of the built-up area with the built-up index (BUI). Int. J. Adv. Remote Sens. GIS 2016, 5, 1844–1858. [Google Scholar] [CrossRef] [Green Version]
  70. Zhang, S.; Yang, K.; Li, M.; Ma, Y.; Sun, M. Combinational biophysical composition index (CBCI) for effective mapping biophysical composition in urban areas. IEEE Access 2018, 6, 41224–41237. [Google Scholar] [CrossRef]
  71. Gitelson, A.A.; Gritz, Y.; Merzlyak, M.N. Relationships between leaf chlorophyll content and spectral reflectance and algorithms for non-destructive chlorophyll assessment in higher plant leaves. J. Plant Physiol. 2003, 160, 271–282. [Google Scholar] [CrossRef] [PubMed]
  72. Davies, D.L.; Bouldin, D.W. A clustering separation measure. IEEE Trans. Pattern Anal. Mach. Intell. 1979, 1, 224–227. [Google Scholar] [CrossRef] [PubMed]
  73. Rasul, A.; Balzter, H.; Ibrahim, G.R.F.; Hameed, H.M.; Wheeler, J.; Adamu, B.; Najmaddin, P.M. Applying built-up and bare-soil indices from landsat 8 to cities in dry climates. Land 2018, 7, 81. [Google Scholar] [CrossRef] [Green Version]
  74. Tucker, C.J. A spectral method for determining the percentage of green herbage material in clipped samples. Remote Sens. Environ. 1980, 9, 175–181. [Google Scholar] [CrossRef]
  75. As-syakur, A.; Adnyana, I.; Arthana, I.W.; Nuarsa, I.W. Enhanced built-up and bareness index (EBBI) for mapping built-up and bare land in an urban area. Remote Sens. 2012, 4, 2957–2970. [Google Scholar] [CrossRef] [Green Version]
  76. Chen, J.; Yang, K.; Chen, S.; Yang, C.; Zhang, S.; He, L. Enhanced normalized difference index for impervious surface area estimation at the plateau basin scale. J. Appl. Remote Sens. 2019, 13. [Google Scholar] [CrossRef] [Green Version]
  77. Matsushita, B.; Yang, W.; Chen, J.; Onda, Y.; Qiu, G. Sensitivity of the enhanced vegetation index (EVI) and normalized difference vegetation index (NDVI) to topographic effects: A case study in high-density cypress forest. Sensors 2007, 7, 2636–2651. [Google Scholar] [CrossRef] [Green Version]
  78. Gitelson, A.A.; Kaufman, Y.J.; Merzlyak, M.N. Use of a green channel in remote sensing of global vegetation from EOS-MODIS. Remote Sens. Environ. 1996, 58, 289–298. [Google Scholar] [CrossRef]
  79. Zheng, Q.; Zeng, Y.; Deng, J.; Wang, K.; Jiang, R.; Ye, Z. “Ghost cities” identification using multi-source remote sensing datasets: A case study in Yangtze River Delta. Appl. Geogr. 2017, 80, 112–121. [Google Scholar] [CrossRef]
  80. Wu, W. The generalized difference vegetation index (GDVI) for dryland characterization. Remote Sens. 2014, 6, 1211–1233. [Google Scholar] [CrossRef] [Green Version]
  81. Pinty, B.; Verstraete, M.M. GEMI: A non-linear index to monitor global vegetation from satellites. Vegetatio 1992, 101, 15–20. [Google Scholar] [CrossRef]
  82. Louhaichi, M.; Borman, M.M.; Johnson, D.E. Spatially located platform and aerial photography for documentation of grazing impacts on wheat. Geocarto Int. 2001, 16, 65–70. [Google Scholar] [CrossRef]
  83. Motohka, T.; Nasahara, K.N.; Oguma, H.; Tsuchida, S. Applicability of green-red vegetation index for remote sensing of vegetation phenology. Remote Sens. 2010, 2, 2369–2387. [Google Scholar] [CrossRef] [Green Version]
  84. Jackson, R.D. Spectral indices in n-space. Remote Sens. Environ. 1983, 13, 409–421. [Google Scholar] [CrossRef]
  85. Xu, H. A new index-based built-up index (IBI) and its eco-environmental significance. Remote Sens. Technol. Appl. 2011, 22, 301–308. [Google Scholar]
  86. Crippen, R.E. Calculating the vegetation index faster. Remote Sens. Environ. 1990, 34, 71–73. [Google Scholar] [CrossRef]
  87. Haboudane, D.; Miller, J.R.; Pattey, E.; Zarco-Tejada, P.J.; Strachan, I.B. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sens. Environ. 2004, 90, 337–352. [Google Scholar] [CrossRef]
  88. Gobron, N.; Pinty, B.; Verstraete, M.; Govaerts, Y. The MERIS Global Vegetation Index (MGVI): Description and preliminary application. Int. J. Remote Sens. 1999, 20, 1917–1927. [Google Scholar] [CrossRef]
  89. Fall, A.G.U. Snow Monitoring Using Remote Sensing Data: Modification of Normalized Difference Snow Index. In Proceedings of the AGU Fall Meeting, San Francisco, CA, USA, 12–16 December 2016. [Google Scholar]
  90. Xu, H. Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. Int. J. Remote Sens. 2006, 27, 3025–3033. [Google Scholar] [CrossRef]
  91. Gong, P.; Pu, R.; Biging, G.S.; Larrieu, M.R. Estimation of forest leaf area index using vegetation indices derived from Hyperion hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2003, 40, 1355–1362. [Google Scholar] [CrossRef] [Green Version]
  92. Qi, J.; Chehbouni, A.; Huete, A.R.; Kerr, Y.H.; Sorooshian, S. A modified soil adjusted vegetation index. Remote Sens. Environ. 1994, 48, 119–126. [Google Scholar] [CrossRef]
  93. Misra, P.N.; Wheeler, S.G.; Oliver, R.E. Kauth-Thomas Brightness and Greenness Axes; NASA: Washington, DC, USA, 1997.
  94. Chen, J.M. Evaluation of vegetation indices and a modified simple ratio for boreal applications. Can. J. Remote Sens. 1996, 22, 229–242. [Google Scholar] [CrossRef]
  95. Chen, X.L.; Zhao, H.; Li, P.; Yin, Z. Remote sensing image-based analysis of the relationship between urban heat island and land use/cover changes. Remote Sens. Environ. 2006, 104, 133–146. [Google Scholar] [CrossRef]
  96. Li, H.; Wang, C.; Zhong, C.; Su, A.; Xiong, C.; Wang, J.; Liu, J. Mapping urban bare land automatically from Landsat imagery with a simple index. Remote Sens. 2017, 9, 249. [Google Scholar] [CrossRef] [Green Version]
  97. Sinha, P.; Verma, N.K. Urban built-up area extraction and change detection of adama municipal area using time-series landsat images. Int. J. Adv. Remote Sens. GIS 2016, 58, 1886–1895. [Google Scholar] [CrossRef]
  98. Vescovo, L.; Gianelle, D. Using the MIR bands in vegetation indices for the estimation of grassland biophysical parameters from satellite remote sensing in the Alps region of Trentino. Adv. Space Res. 2008, 41, 1764–1772. [Google Scholar] [CrossRef]
  99. Zha, Y.; Gao, J.; Ni, S. Use of normalized difference built-up index in automatically mapping urban areas from TM imagery. Int. J. Remote Sens. 2003, 24, 583–594. [Google Scholar] [CrossRef]
  100. Xu, H. Analysis of impervious surface and its impact on urban heat environment using the normalized difference impervious surface index (NDISI). Photogramm. Eng. Remote Sens. 2010, 76, 557–565. [Google Scholar] [CrossRef]
  101. Jin, S.; Sader, S.A. Comparison of time series tasseled cap wetness and the normalized difference moisture index in detecting forest disturbances. Remote Sens. Environ. 2005, 94, 364–372. [Google Scholar] [CrossRef]
  102. Van Deventer, A.P.; Ward, A.D.; Gowda, P.H.; Lyon, J.G. Using Thematic Mapper data to identify contrasting soil plains and tillage practices. Photogramm. Eng. Remote Sens. 1997, 63, 87–93. [Google Scholar]
  103. McFeeters, S.K. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. Int. J. Remote Sens. 1996, 17, 1425–1432. [Google Scholar] [CrossRef]
  104. Goel, N.S.; Qin, W. Influences of canopy architecture on relationships between various vegetation indices and LAI and FPAR: A computer simulation. Remote Sens. Rev. 1994, 10, 309–347. [Google Scholar] [CrossRef]
  105. Roujean, J.L.; Breon, F.M. Estimating PAR absorbed by vegetation from bidirectional reflectance measurements. Remote Sens. Environ. 1995, 51, 375–384. [Google Scholar] [CrossRef]
  106. Pearson, R.L.; Miller, L.D. Remote Mapping of Standing Crop Biomass for Estimation of the Productivity of the Shortgrass Prairie, Pawnee National Grasslands, Colorado. In Proceedings of the 8th International Symposium on Remote Sensing of the Environment II, Ann Arbor, MI, USA, 2–6 October 1972; pp. 1355–1379. [Google Scholar]
  107. Huete, A.R. A soil-adjusted vegetation index (SAVI). Remote Sens. Environ. 1988, 25, 295–309. [Google Scholar] [CrossRef]
  108. Thompson, D.R.; Wehmanen, O.A. Using Landsat digital data to detect moisture stress in corn-soybean growing regions. Photogramm. Eng. Remote Sens. 1980, 46, 1087–1093. [Google Scholar]
  109. Lymburner, L.; Beggs, P.J.; Jacobson, C.R. Estimation of canopy-average surface-specific leaf area using Landsat TM data. Photogramm. Eng. Remote Sens. 2000, 66, 183–192. [Google Scholar]
  110. Jordan, C.F. Derivation of leaf area index from quality of light on the forest floor. Ecology 1969, 50, 663–666. [Google Scholar] [CrossRef]
  111. Bandari, A.; Asalhi, H.; Teillet, P.M. Transformed Difference Vegetation Index (TDVI) for Vegetation Cover Mapping. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Toronto, ON, Canada, 24–28 June 2002; pp. 3053–3055. [Google Scholar]
  112. Broge, N.H.; Leblanc, E. Comparing prediction power and stability of broadband and hyperspectral vegetation indices for estimation of green leaf area index and canopy chlorophyll density. Remote Sens. Environ. 2000, 76, 156–172. [Google Scholar] [CrossRef]
  113. Hunt, E.R., Jr.; Doraiswamy, P.C.; McMurtrey, J.E.; Daughtry, C.S.; Perry, E.M.; Akhmedov, B. A visible band index for remote sensing leaf chlorophyll content at the canopy scale. Int. J. Appl. Earth Obs. Geoinf. 2013, 21, 103–112. [Google Scholar] [CrossRef] [Green Version]
  114. Kawamura, M. Relation between Social and Environmental Conditions in Colombo Sri Lanka and the Urban Index Estimated by Satellite Remote Sensing Data. In Proceedings of the 51st Annual Conference of the Japan Society of Civil Engineers, Nagoya, Japan, September 1996; pp. 190–191. Available online: https://ci.nii.ac.jp/naid/10003189515/ (accessed on 27 December 2019).
  115. Gitelson, A.A.; Stark, R.; Grits, U.; Rundquist, D.; Kaufman, Y.; Derry, D. Vegetation and soil lines in visible spectral space: A concept and technique for remote estimation of vegetation fraction. Int. J. Remote Sens. 2001, 23, 2537–2562. [Google Scholar] [CrossRef]
  116. Liu, F.; Liu, S.H.; Xiang, Y. Study on remote sensing monitoring of vegetation coverage in the field. Trans. Csam 2014, 4511, 250–257. [Google Scholar]
  117. Stathakis, D.; Perakis, K.; Savin, I. Efficient segmentation of urban areas by the VIBI. Int. J. Remote Sens. 2012, 33, 6361–6377. [Google Scholar] [CrossRef]
  118. Lobell, D.B.; Asner, G.P. Hyperion Studies of Crop Stress in Mexico; NASA: Washington, DC, USA, 2004.
  119. Sakamoto, T.; Gitelson, A.A.; Wardlow, B.D.; Verma, S.B.; Suyker, A.E. Estimating daily gross primary production of maize based only on MODIS WDRVI and shortwave radiation data. Remote Sens. Environ. 2011, 115, 3091–3101. [Google Scholar] [CrossRef]
  120. Fisher, A.; Flood, N.; Danaher, T. Comparing Landsat water index methods for automated water classification in eastern Australia. Remote Sens. Environ. 2016, 175, 167–182. [Google Scholar] [CrossRef]
  121. Wolf, A.F. Using WorldView-2 Vis-NIR multispectral imagery to support land mapping and feature extraction using normalized difference index ratios. SPIE Def. Secur. Sens. 2012, 8390, 83900. [Google Scholar]
  122. Kauth, R.J.; Thomas, G.S. The Tasselled Cap—A Graphic Description of the Spectraltemporal Development of Agricultural Crops as Seen by Landsat. In Symposium on Machine Processing of Remotely Sensed Data; Purdue University: West Lafayette, IN, USA, 1976; pp. 41–51. [Google Scholar]
  123. Mateo-García, G.; Gómez-Chova, L.; Amorós-López, J.; Muñoz-Marí, J.; Camps-Valls, G. Multitemporal cloud masking in the Google Earth Engine. Remote Sens. 2018, 10, 1079. [Google Scholar] [CrossRef] [Green Version]
  124. Sidhu, N.; Pebesma, E.; Câmara, G. Using Google Earth Engine to detect land cover change: Singapore as a use case. Eur. J. Remote Sens. 2018, 51, 486–500. [Google Scholar] [CrossRef]
  125. Pengra, B.; Long, J.; Dahal, D.; Stehman, S.V.; Loveland, T.R. A global reference database from very high-resolution commercial satellite data and methodology for application to Landsat derived 30 m continuous field tree cover data. Remote Sens. Environ. 2015, 165, 234–248. [Google Scholar] [CrossRef]
  126. Stehman, S.V.; Woodcock, C.E.; Sulla-Menashe, D.; Sibley, A.M.; Newell, J.D.; Friedl, M.A.; Herold, M. A global land-cover validation data set. part I: Fundamental design principles. Int. J. Remote Sens. 2012, 33, 5768–5788. [Google Scholar]
  127. Caprioli, M.; Tarantino, E. Accuracy assessment of per-field classification integrating very fine spatial resolution satellite imagery with topographic data. J. Geospat. Eng. 2001, 3, 127–134. [Google Scholar]
  128. Xue, J.; Su, B. Significant remote sensing vegetation indices: A review of developments and applications. J. Sens. 2017, 2017, 1–17. [Google Scholar] [CrossRef] [Green Version]
  129. Zhao, H.; Chen, X. Use of Normalized Difference Bareness Index in Quickly Mapping Bare Areas from TM/ETM+. Int. Geosci. Remote Sens. Symp. 2005, 3, 1666. Available online: https://www.researchgate.net/profile/Hong_Mei_Zhao/publication/4183057_Use_of_normalized_difference_bareness_index_in_quickly_mapping_bare_areas_from_TMETM/links/5a3440a0a6fdcc769fd235f5/Use-of-normalized-difference-bareness-index-in-quickly-mapping-bare-areas-from-TM-ETM.pdf (accessed on 4 January 2020).
  130. Brennan, R.; Webster, T.L. Object-oriented land cover classification of lidar-derived surfaces. Can. J. Remote Sens. 2006, 32, 162–172. [Google Scholar] [CrossRef]
  131. Chandra, P. Performance evaluation of vegetation indices using remotely sensed data. Int. J. Geomat. Geosci. 2011, 2, 231–240. [Google Scholar]
  132. Pindozzi, S.; Cervelli, E.; Capolupo, A.; Okello, C.; Boccia, L. Using historical maps to analyze two hundred years of land cover changes: Case study of Sorrento peninsula. Cartogr. Geogr. Inf. Sci. 2016, 43, 250–265. [Google Scholar] [CrossRef]
Figure 1. Study area.
Figure 2. Spectral signature of each class detected in the study area. The x and y axes report the original Landsat 7 Enhanced Thematic Mapper Plus (ETM+) bands and the surface reflectance, respectively, where 1 is band 1 (blue), 2 is band 2 (green), 3 is band 3 (red), 4 is band 4 (near infrared), 5 is band 5 (shortwave infrared 1), 6 is band 6 (thermal infrared), and 7 is band 7 (shortwave infrared 2).
Figure 3. Landsat Images Classification Algorithm (LICA) classification workflow.
Figure 4. Optimized Soil Adjusted Vegetation Index (OSAVI) classification outcome for Landsat 8 (image acquired on 17 March 2019). On the right, zoomed images with some misclassifications.
Figure 5. Green Optimized Soil Adjusted Vegetation Index (GOSAVI) classification outcome for Landsat 8 (image acquired on 17 March 2019). On the right, zoomed images with some misclassifications.
Figure 6. Normalized Difference Bareness Index 2 (NDBaI2) classification outcome for Landsat 8 (image acquired on 17 March 2019). On the right, zoomed images with some misclassifications.
Figure 7. SwirTirRed (STRed) index classification outcome for Landsat 5 (image acquired on 27 March 2011). On the right, zoomed images with some misclassifications.
Figure 8. SwirTirRed (STRed) index classification outcome for Landsat 5 (image acquired on 5 October 2011). On the right, zoomed images with some misclassifications.
Figure 9. SwirTirRed (STRed) index classification outcome for Landsat 7 (image acquired on 14 April 2003). On the right, zoomed images with some misclassifications.
Figure 10. SwiRed index classification outcome for Landsat 8 (image acquired on 17 March 2019). On the right, zoomed images with some misclassifications.
Figure 11. SwiRed index classification outcome for Landsat 5 (image acquired on 5 October 2011). On the right, zoomed images with some misclassifications.
Figure 12. SwiRed index classification outcome for Landsat 7 (image acquired on 14 April 2003). On the right, zoomed images with some misclassifications.
Figure 13. Landsat Images Classification Algorithm (LICA) classification outcome for Landsat 8 (image acquired on 17 March 2019), Landsat 7 (image acquired on 14 April 2003), and Landsat 5 (image acquired on 27 March 2011).
Table 1. Main commonly used classification indices listed in alphabetical order. Indices in bold show the best performance. The LC/LU column lists the land cover/land use classes detected by each index; the OA column reports the best overall accuracy of each index. LC/LU, land cover/land use; OA, overall accuracy; W, water; DV, dense vegetation; SV, sparse vegetation; MA, mining areas; BS, bare soil; BUA, built-up area; *, water mask is required; -, no classes were detected.
Spectral Index | Citation | LC/LU | OA (%)
Aerosol Free Vegetation Index version 1.6 (AFRI1.6) | [60] | DV, SV | 72.24
Aerosol Free Vegetation Index version 2.1 (AFRI2.1) | [60] | DV, SV | 86.02
Atmospherically resistant vegetation index (ARVI) | [61] | W, DV, SV, BUA, BS | 59.97
Adjusted Soil Brightness Index (ASBI) * | [62] | DV, SV | 66.70
Ashburn Vegetation Index (AVI) | [63] | W | 99.78
Automated Water Extraction Index (AWEI) | [64] | W, DV, SV, BUA, MA, BS | 68.04
Automated Water Extraction Index (shadow version) (AWEIsh) | [64] | W, BUA | 91.46
Build-area extraction index (BAEI) * | [65] | DV, SV, BUA | 63.60
Biophysical Composition Index (BCI) | [66] | W, DV, SV | 68.23
Built-up Land Features Extraction Index (BLFEI) | [67] | W, DV, SV, BUA, BS | 72.03
Bare Soil Index (BSI) * | [68] | DV, SV | 73.62
Built-up land (BUI) | [69] | W, DV, SV | 69.81
Combinational Biophysical Composition Index (CBCI) | [70] | DV, SV | 67.22
Green Chlorophyll Index (CI) | [71] | W, DV, SV | 68.40
Davies-Bouldin index (DBI) | [72] | W, DV, SV, BUA, BS | 70.59
Dry Bare-Soil Index (DBSI) * | [73] | DV, SV | 68.47
Simple Difference Indices (DVI) | [74] | W, DV, SV | 69.85
Enhanced Built-up and Bareness Index (EBBI) | [75] | W, DV, SV | 64.93
Enhanced Normalized Difference Impervious Surfaces Index (ENDISI) | [76] | DV, SV, BUA, MA | 67.55
Enhanced Vegetation Index (EVI) | [77] | W, DV, SV, BUA, BS | 58.59
Green Atmospherically Resistant Vegetation Index (GARI) | [78] | W, DV, SV, BUA, BS | 69.78
“Ghost cities” Index (GCI) | [79] | W, DV, SV, BUA, BS | 71.26
Green Difference Vegetation Index (GDVI) | [80] | W, DV, SV | 70.59
Global Environment Monitoring Index (GEMI) | [81] | W, DV, SV | 67.74
Green leaf index (GLI) | [82] | DV, SV | 66.70
Green Normalized Difference Vegetation Index (GNDVI) | [78] | W, DV, SV, BUA, BS | 72.48
Green Optimized Soil Adjusted Vegetation Index (GOSAVI) | [53] | W, DV, SV | 89.89
Green-Red Vegetation Index (GRVI) | [83] | W, DV, SV, BUA, BS | 71.26
Green Soil Adjusted Vegetation Index (GSAVI) | [53] | W, DV, SV, BUA, BS | 73.91
Green Vegetation Index (GVI) * | [84] | DV, SV, BUA | 57.30
Built-up Index (IBI) | [85] | DV, SV | 74.75
Infrared Percentage Vegetation Index (IPVI) | [86] | W, DV, SV, BUA, BS | 69.10
Modified Bare Soil Index (MBSI) | [70] | W, DV, SV | 73.22
Modified Chlorophyll Absorption Ratio Index 1 (MCARI1) | [87] | DV, SV | 64.28
Modified Chlorophyll Absorption Ratio Index (MCARI2) | [87] | W, DV, SV, BUA, BS | 82.24
MERIS Global Vegetation Index (MGVI) | [88] | W, DV, SV | 76.88
Modification of Normalized Difference Snow Index (MNDSI) | [89] | W, MA | 76.55
Modification of normalized difference water index (MNDWI) | [90] | W, BUA | 74.62
Modified Nonlinear Vegetation Index (MNLI) | [91] | W, DV, SV | 77.40
Modified Soil Adjusted Vegetation Index 2 (MSAVI2) | [92] | W, DV, SV, BUA, BS | 83.30
Misra Soil Brightness Index (MSBI) | [93] | W, DV, SV, BUA, BS | 78.56
Modified Simple Ratio (MSR) | [94] | W, DV, SV, BUA, BS | 67.03
Misra Yellow Vegetation Index (MYVI) | [93] | - | -
New Built-up Index (NBI) * | [95] | DV, SV, BUA, MA, BS | 71.46
Normalized Difference Bare Land Index (NBLI) * | [96] | DV, SV, BUA, MA, BS | 75.51
New Built-up Index (NBUI) | [97] | W, DV, SV | 76.39
Normalized Canopy Index (NCI) | [98] | W, BUA | 78.34
Normalized Difference Bareness Index (NDBaI) | [54] | W, DV, SV, BUA, MA, BS | 67.93
Normalized Difference Bareness Index (version 2) (NDBaI2) | [54] | W, DV, SV, BUA, MA, BS | 82.59
Normalized Difference Built-up Index (NDBI) | [99] | DV, SV | 71.14
Normalized Difference Impervious Surface Index (NDISI) | [100] | W, MA | 97.60
Normalized Difference Moisture Index (NDMI) * | [101] | DV, SV | 73.47
Normalized Difference Tillage Index (NDTI) * | [102] | DV, SV | 71.57
Normalized Difference Vegetation Index (NDVI) | [56] | W, DV, SV, BUA, BS | 73.24
Normalized Difference Water Index (NDWI) | [103] | W, DV, SV, BUA, BS | 73.54
Non-Linear Index (NLI) | [104] | W, DV, SV | 76.63
Optimized Soil Adjusted Vegetation Index (OSAVI) | [52] | W, DV, SV | 88.84
Renormalized Difference Vegetation Index (RDVI) | [105] | W, DV, SV | 77.34
Ratio Vegetation Index (RVI) | [106] | W, DV, SV, BUA, BS | 72.30
Soil-Adjusted Vegetation Index (SAVI) | [107] | W, DV, SV | 72.04
Soil Brightness Index (SBI) | [108] | W, BUA, MA | 80.27
Specific Leaf Area Vegetation Index (SLAVI) | [109] | W, DV, SV | 83.56
Simple Ratio (SR) | [110] | W, DV, SV | 68.93
Transformed difference vegetation index (TDVI) | [111] | W | 99.81
Triangular Greenness Index (TGI) | [112] | - | -
Triangular Vegetation Index (TVI) | [113] | W, DV, SV | 74.15
Urban Index (UI) | [114] | BUA | 76.66
Visible Atmospherically Resistant Index (VARI) | [115] | W, DV, SV | 68.34
Visible-Band Difference Vegetation Index (VDVI) | [116] | DV, SV | 66.70
Vegetation Index of Biotic Integrity (VIBI) | [117] | DV, SV | 66.57
Wide Dynamic Range Vegetation Index (WDRVI) | [118] | W, DV, SV | 78.87
Water index 2015 (WI2015) | [119] | W | 99.81
Worldview Improved Vegetative Index (WV-VI) | [120] | W, DV, SV, BUA, BS | 75.47
Yellow Stuff Index (YVI) * | [121] | DV, SV | 66.70
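All of the indices in Table 1 are simple band combinations of the corrected Landsat reflectances, which can be computed directly in the GEE environment. As a hedged illustration (this is not the authors' code), the sketch below derives two of the listed indices, NDVI [56] and MNDWI [90], whose normalized-difference band ratios are standard; the collection identifier, band names, date window, and the point used as area of interest are assumptions referring to the public Landsat 8 Collection 2 Level-2 catalogue rather than to the exact assets processed in the study.

```python
import ee

ee.Initialize()  # assumes prior ee.Authenticate()

# Hypothetical point near the study area, used only to pick a scene.
aoi = ee.Geometry.Point([15.9, 41.6])

# One low-cloud Landsat 8 surface-reflectance image over the area of interest.
image = ee.Image(
    ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
    .filterBounds(aoi)
    .filterDate('2019-03-01', '2019-03-31')
    .sort('CLOUD_COVER')
    .first())

# NDVI [56]: (NIR - red) / (NIR + red); bands SR_B5 and SR_B4 on Landsat 8.
ndvi = image.normalizedDifference(['SR_B5', 'SR_B4']).rename('NDVI')

# MNDWI [90]: (green - SWIR1) / (green + SWIR1); bands SR_B3 and SR_B6.
mndwi = image.normalizedDifference(['SR_B3', 'SR_B6']).rename('MNDWI')

# A single threshold turns an index layer into a tentative class mask,
# e.g. open water where MNDWI > 0 (threshold chosen only for illustration).
water_mask = mndwi.gt(0)
```

The same pattern (a normalized difference over a band pair, followed by a simple threshold) applies to most of the two-band indices listed in Table 1.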
Table 2. Range values used by LICA to extract the different land cover classes.
Land Cover Class | Range value (SwiRed)
Built-up areas | 0 < value < 0.22
Land Cover Class | Range value (STRed)
Water | value < −0.5
Dense vegetation | −0.07 < value < −0.05
Sparse vegetation | 0.00 < value < 0.07
Mining areas | value > 0.45
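To make the thresholding step concrete, the following minimal sketch applies the Table 2 ranges with the GEE Python API. It assumes that stred and swired are single-band ee.Image layers already computed from the STRed and SwiRed band combinations (the index formulas are defined earlier in the paper and are not reproduced here); the integer class codes and the order in which the two masks are combined are illustrative choices, not the authors' implementation.

```python
import ee

ee.Initialize()  # assumes the user has already authenticated

def lica_classify(stred, swired):
    """Map the Table 2 ranges onto integer class codes (0 = not classified)."""
    classified = ee.Image(0).rename('LICA')
    classified = classified.where(stred.lt(-0.5), 1)                        # water
    classified = classified.where(stred.gt(-0.07).And(stred.lt(-0.05)), 2)  # dense vegetation
    classified = classified.where(stred.gt(0.00).And(stred.lt(0.07)), 3)    # sparse vegetation
    classified = classified.where(stred.gt(0.45), 4)                        # mining areas
    classified = classified.where(swired.gt(0).And(swired.lt(0.22)), 5)     # built-up areas
    return classified
```

Here the SwiRed built-up mask is applied last and therefore takes precedence where the ranges overlap; the actual sequence of the two indices follows the LICA workflow of Figure 3.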
Table 3. Selected Landsat data description. ETM+, Enhanced Thematic Mapper Plus; TM, Thematic Mapper; OLI-TIRS, Operational Land Imager-Thermal Infrared Sensor.
ID | Landsat Satellite Mission | Sensor | Landsat Images | Acquisition Date | Average Cloud Cover (%)
1 | Landsat 7 | ETM+ | LE07_L1TP_188031_20020121_20170213 | 21 January 2002 | 4
2 | Landsat 7 | ETM+ | LE07_L1TP_188031_20020801_20170213 | 01 August 2002 | 6
3 | Landsat 7 | ETM+ | LE07_L1TP_189031_20021027_20170128 | 27 October 2002 | 1
4 | Landsat 7 | ETM+ | LE07_L1TP_188031_20030414_20170126 | 14 April 2003 | 4
1 | Landsat 5 | TM | LT05_L1TP_188031_20110207_20161010 | 07 February 2011 | 1
2 | Landsat 5 | TM | LT05_L1TP_188031_20110327_20161209 | 27 March 2011 | 16
3 | Landsat 5 | TM | LT05_L1TP_189031_20110825_20161008 | 25 August 2011 | 0
4 | Landsat 5 | TM | LT05_L1TP_188031_20111005_20161005 | 05 October 2011 | 1
1 | Landsat 8 | OLI-TIRS | LC08_L1TP_188031_20171208_20171223 | 08 December 2017 | 1.69
2 | Landsat 8 | OLI-TIRS | LC08_L1TP_189031_20180812_20180815 | 12 August 2018 | 8.1
3 | Landsat 8 | OLI-TIRS | LC08_L1TP_188031_20180922_20180928 | 22 September 2018 | 2.41
4 | Landsat 8 | OLI-TIRS | LC08_L1TP_188031_20190925_201911017 | 17 March 2019 | 19.46
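The Table 3 scenes can be retrieved programmatically within GEE. The sketch below shows one hedged way to do this with the GEE Python API, filtering a Landsat surface-reflectance collection by WRS-2 path/row, acquisition window, and cloud cover; the collection identifier refers to the current Collection 2 Level-2 catalogue and is an assumption, since the paper drew on the radiometrically and atmospherically corrected archive available at the time of the analysis.

```python
import ee

ee.Initialize()  # assumes prior authentication

def get_scene(collection_id, date_start, date_end, path=188, row=31, max_cloud=20):
    """Return the least cloudy scene of a WRS-2 path/row inside a date window."""
    col = (ee.ImageCollection(collection_id)
           .filter(ee.Filter.eq('WRS_PATH', path))
           .filter(ee.Filter.eq('WRS_ROW', row))
           .filterDate(date_start, date_end)
           .filter(ee.Filter.lt('CLOUD_COVER', max_cloud))
           .sort('CLOUD_COVER'))
    return ee.Image(col.first())

# Example: the Landsat 7 ETM+ scene of 14 April 2003 listed in Table 3.
l7_spring = get_scene('LANDSAT/LE07/C02/T1_L2', '2003-04-14', '2003-04-15')
```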
Table 4. OA, PA, and UA obtained through the application of Optimized Soil Adjusted Vegetation Index (OSAVI) on the data acquired by Landsat 7 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L7—21 January 2002 | L7—14 April 2003 | L7—01 August 2002 | L7—27 October 2002
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 95.79 | 100.00 | 95.04 | 100.00 | 90.11 | 98.99 | 95.60 | 99.24
Dense Vegetation | 79.57 | 87.40 | 72.38 | 86.21 | 70.97 | 98.05 | 95.56 | 45.84
Sparse Vegetation | 73.89 | 89.78 | 90.55 | 81.91 | 25.14 | 33.96 | 49.30 | 67.03
Not classified | 100.00 | 78.38 | 100.00 | 92.28 | 91.13 | 62.43 | 79.47 | 98.17
Mining Areas | / | / | / | / | / | / | / | /
Bare Soil | / | / | / | / | / | / | / | /
Built-up areas | / | / | / | / | / | / | / | /
OA (%) | 87.83 | 88.84 | 74.02 | 78.23
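Tables 4–18 report producer's accuracy (PA), user's accuracy (UA), and overall accuracy (OA), which all derive from the confusion matrix of each classification: PA is the fraction of reference pixels of a class that were labelled correctly, UA is the fraction of pixels assigned to a class that truly belong to it, and OA is the fraction of all validation pixels classified correctly. The short worked example below, with hypothetical counts rather than the study's data, shows the arithmetic behind these figures.

```python
import numpy as np

# Rows = classified label, columns = reference label (hypothetical counts).
classes = ['Water', 'Dense Veg.', 'Sparse Veg.', 'Not classified']
cm = np.array([
    [95,  0,  2,  3],
    [ 1, 80, 10,  9],
    [ 0, 12, 74, 14],
    [ 4,  8, 14, 74],
])

oa = np.trace(cm) / cm.sum()       # overall accuracy: correct / total
pa = np.diag(cm) / cm.sum(axis=0)  # producer's accuracy: correct / reference total per class
ua = np.diag(cm) / cm.sum(axis=1)  # user's accuracy: correct / classified total per class

for name, p, u in zip(classes, pa, ua):
    print(f'{name:15s} PA = {100 * p:6.2f}%  UA = {100 * u:6.2f}%')
print(f'OA = {100 * oa:.2f}%')
```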
Table 5. OA, PA, and UA obtained through the application of Optimized Soil Adjusted Vegetation Index (OSAVI) on the data acquired by Landsat 5 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L5—07 February 2011 | L5—27 March 2011 | L5—25 August 2011 | L5—05 October 2011
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 95.05 | 93.35 | 89.73 | 100.00 | 90.30 | 99.05 | 66.30 | 96.02
Dense Vegetation | 64.16 | 53.27 | 66.49 | 78.44 | 48.54 | 93.55 | 80.54 | 69.00
Sparse Vegetation | 35.36 | 58.67 | 76.25 | 81.06 | 43.05 | 51.90 | 67.92 | 80.36
Not classified | 92.21 | 70.75 | 100.00 | 71.88 | 81.90 | 40.89 | 93.83 | 72.77
Mining Areas | / | / | / | / | / | / | / | /
Bare Soil | / | / | / | / | / | / | / | /
Built-up areas | / | / | / | / | / | / | / | /
OA (%) | 68.71 | 84.84 | 72.44 | 77.98
Table 6. OA, PA, and UA obtained through the application of Optimized Soil Adjusted Vegetation Index (OSAVI) on the data acquired by Landsat 8 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L8—08 December 2017 | L8—12 August 2018 | L8—22 September 2018 | L8—17 March 2019
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 99.13 | 100.00 | 98.19 | 99.85 | 89.01 | 99.53 | 97.30 | 100.00
Dense Vegetation | 76.93 | 75.39 | 59.09 | 91.95 | 86.49 | 96.60 | 80.85 | 73.47
Sparse Vegetation | 22.91 | 52.00 | 27.34 | 51.27 | 81.74 | 82.82 | 55.04 | 71.57
Not classified | 99.46 | 70.60 | 99.81 | 65.33 | 93.98 | 83.79 | 100.00 | 79.54
Mining Areas | / | / | / | / | / | / | / | /
Bare Soil | / | / | / | / | / | / | / | /
Built-up areas | / | / | / | / | / | / | / | /
OA (%) | 81.41 | 81.00 | 88.91 | 83.56
Table 7. OA, PA, and UA obtained through the application of Green Optimized Soil Adjusted Vegetation Index (GOSAVI) on the data acquired by Landsat 7 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L7—21 January 2002 | L7—14 April 2003 | L7—01 August 2002 | L7—27 October 2002
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 96.34 | 100.00 | 96.70 | 100.00 | 86.26 | 100.00 | 96.15 | 97.22
Dense Vegetation | 60.57 | 88.48 | 70.22 | 91.18 | 70.97 | 93.12 | 86.69 | 44.79
Sparse Vegetation | 65.78 | 83.83 | 94.67 | 76.05 | 18.23 | 39.05 | 52.09 | 71.39
Not classified | 100.00 | 70.57 | 89.16 | 95.52 | 97.21 | 59.50 | 83.09 | 97.50
Mining Areas | / | / | / | / | / | / | / | /
Bare Soil | / | / | / | / | / | / | / | /
Built-up areas | / | / | / | / | / | / | / | /
OA (%) | 82.86 | 87.74 | 73.57 | 79.12
Table 8. OA, PA, and UA obtained through the application of Green Optimized Soil Adjusted Vegetation Index (GOSAVI) on the data acquired by Landsat 5 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L5—07 February 2011 | L5—27 March 2011 | L5—25 August 2011 | L5—05 October 2011
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 94.32 | 100.00 | 95.73 | 100.00 | 93.56 | 58.46 | 74.91 | 97.61
Dense Vegetation | 60.04 | 43.51 | 63.98 | 88.15 | 52.30 | 89.29 | 81.71 | 54.40
Sparse Vegetation | 25.03 | 35.39 | 89.31 | 81.74 | 43.05 | 40.25 | 59.62 | 85.87
Not classified | 59.51 | 52.17 | 99.58 | 100.00 | 90.26 | 43.90 | 95.41 | 77.60
Mining Areas | / | / | / | / | / | / | / | /
Bare Soil | / | / | / | / | / | / | / | /
Built-up areas | / | / | / | / | / | / | / | /
OA (%) | 55.22 | 89.89 | 76.37 | 78.82
Table 9. OA, PA, and UA obtained through the application of Green Optimized Soil Adjusted Vegetation Index (GOSAVI) on the data acquired by Landsat 8 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L8—08 December 2017 | L8—12 August 2018 | L8—22 September 2018 | L8—17 March 2019
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 99.49 | 100.00 | 98.19 | 100.00 | 90.84 | 98.41 | 99.81 | 100.00
Dense Vegetation | 75.62 | 75.29 | 59.72 | 89.02 | 84.80 | 89.64 | 92.88 | 63.74
Sparse Vegetation | 40.50 | 24.23 | 25.20 | 43.56 | 75.00 | 87.34 | 17.94 | 61.86
Not classified | 99.46 | 66.95 | 100.00 | 66.31 | 97.95 | 84.62 | 100.00 | 99.74
Mining Areas | / | / | / | / | / | / | / | /
Bare Soil | / | / | / | / | / | / | / | /
Built-up areas | / | / | / | / | / | / | / | /
OA (%) | 79.32 | 80.89 | 89.15 | 80.35
Table 10. OA, PA, and UA obtained through the application of Normalized Difference Bareness Index (version 2) (NDBaI2) on the data acquired by Landsat 7 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L7—21 January 2002 | L7—14 April 2003 | L7—01 August 2002 | L7—27 October 2002
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 95.24 | 100.00 | 97.07 | 100.00 | 87.55 | 100.00 | 95.24 | 100.00
Dense Vegetation | 69.89 | 54.32 | 81.48 | 95.14 | 41.94 | 50.61 | 89.11 | 79.78
Sparse Vegetation | 19.62 | 51.75 | 52.36 | 75.66 | 69.34 | 60.19 | 26.84 | 27.27
Not classified | / | / | / | / | / | / | / | /
Mining Areas | 30.96 | 92.55 | 77.58 | 94.78 | 90.20 | 95.04 | 64.77 | 95.25
Bare Soil | 94.35 | 63.88 | 87.17 | 62.54 | 91.33 | 79.71 | 69.31 | 64.17
Built-up areas | 27.18 | 31.98 | 41.04 | 37.04 | 31.79 | 48.44 | 46.15 | 46.88
OA (%) | 66.89 | 76.10 | 76.23 | 66.61
Table 11. OA, PA, and UA obtained through the application of Normalized Difference Bareness Index (version 2) (NDBaI2) on the data acquired by Landsat 5 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L5—07 February 2011 | L5—27 March 2011 | L5—25 August 2011 | L5—05 October 2011
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 96.89 | 100.00 | 90.59 | 100.00 | 82.49 | 100.00 | 84.25 | 100.00
Dense Vegetation | 86.56 | 53.91 | 74.73 | 73.42 | 62.97 | 45.47 | 53.70 | 54.55
Sparse Vegetation | 54.76 | 84.42 | 38.24 | 65.05 | 78.63 | 78.63 | 48.11 | 57.41
Not classified | / | / | / | / | / | / | / | /
Mining Areas | 56.86 | 99.32 | 86.27 | 98.65 | 84.71 | 99.08 | 82.56 | 97.89
Bare Soil | 93.81 | 89.17 | 92.13 | 59.38 | 92.50 | 83.88 | 79.33 | 58.56
Built-up areas | 28.21 | 26.19 | 16.92 | 31.43 | 42.82 | 38.66 | 34.87 | 45.26
OA (%) | 78.95 | 75.07 | 80.28 | 61.55
Table 12. OA, PA, and UA obtained through the application of Normalized Difference Bareness Index (version 2) (NDBaI2) on the data acquired by Landsat 8 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L8—08 December 2017 | L8—12 August 2018 | L8—22 September 2018 | L8—17 March 2019
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 98.55 | 100.00 | 100.00 | 99.93 | 93.77 | 99.03 | 96.72 | 100.00
Dense Vegetation | 91.53 | 57.10 | 32.45 | 70.41 | 39.86 | 77.12 | 98.89 | 80.93
Sparse Vegetation | 27.48 | 50.30 | 69.59 | 62.02 | 36.96 | 48.60 | 37.59 | 71.83
Not classified | / | / | / | / | / | / | / | /
Mining Areas | 41.96 | 100.00 | 81.18 | 99.04 | 74.02 | 99.52 | 79.92 | 100.00
Bare Soil | 93.72 | 87.34 | 93.36 | 80.67 | 88.88 | 32.16 | 30.14 | 39.25
Built-up areas | 29.74 | 39.27 | 26.41 | 32.54 | 20.00 | 38.31 | 70.19 | 23.10
OA (%) | 80.20 | 82.59 | 66.31 | 72.04
Table 13. OA, PA, and UA obtained computing STRed on the images acquired by Landsat 7 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L7—21 January 2002 | L7—14 April 2003 | L7—01 August 2002 | L7—27 October 2002
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 95.79 | 97.39 | 98.80 | 100.00 | 98.35 | 100.00 | 99.15 | 96.88
Sparse Vegetation | 85.75 | 69.37 | 72.56 | 92.81 | 62.98 | 74.51 | 92.95 | 93.78
Dense Vegetation | 76.76 | 54.57 | 98.66 | 89.84 | 75.65 | 69.52 | 84.08 | 76.41
Mining areas | 81.57 | 100.00 | 75.11 | 99.41 | 97.65 | 88.25 | 46.44 | 97.18
Not classified | 85.67 | 95.78 | 99.73 | 80.62 | 89.66 | 72.90 | 61.93 | 83.63
OA (%) | 86.49 | 94.30 | 80.95 | 87.73
Table 14. OA, PA, and UA obtained computing STRed on the images acquired by Landsat 5 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L5—07 February 2011 | L5—27 March 2011 | L5—25 August 2011 | L5—05 October 2011
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 99.27 | 98.91 | 98.72 | 100.00 | 94.86 | 99.62 | 97.38 | 100.00
Sparse Vegetation | 81.40 | 90.78 | 91.45 | 95.65 | 89.69 | 94.63 | 68.64 | 55.10
Dense Vegetation | 80.11 | 80.54 | 98.31 | 91.26 | 84.94 | 78.99 | 96.04 | 82.20
Mining areas | 69.80 | 99.44 | 98.67 | 99.11 | 87.45 | 100.00 | 94.22 | 99.53
Not classified | 99.32 | 80.87 | 99.31 | 97.17 | 99.47 | 89.61 | 86.83 | 78.42
OA (%) | 87.88 | 97.76 | 93.20 | 85.83
Table 15. OA, PA, and UA obtained computing STRed on the images acquired by Landsat 8 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L8—08 December 2017 | L8—12 August 2018 | L8—22 September 2018 | L8—17 March 2019
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Water | 98.63 | 97.92 | 97.68 | 100.00 | 98.65 | 100.00 | 99.43 | 100.00
Sparse Vegetation | 94.21 | 97.93 | 89.59 | 87.13 | 69.29 | 72.73 | 93.53 | 98.64
Dense Vegetation | 86.28 | 92.92 | 73.01 | 74.72 | 73.74 | 86.13 | 99.17 | 93.93
Mining areas | 84.51 | 96.53 | 82.75 | 98.60 | 61.56 | 100.00 | 98.22 | 97.79
Not classified | 99.55 | 56.08 | 99.53 | 79.52 | 100.00 | 53.04 | 98.89 | 99.19
OA (%) | 93.33 | 87.93 | 85.08 | 98.71
Table 16. OA, PA, and UA obtained computing SwiRed index on the images acquired by Landsat 7 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L7—21 January 2002 | L7—14 April 2003 | L7—01 August 2002 | L7—27 October 2002
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Built-up areas | 72.82 | 82.75 | 90.45 | 73.77 | 74.10 | 74.10 | 80.00 | 83.88
Not classified | 98.27 | 96.05 | 93.88 | 89.07 | 97.88 | 97.32 | 97.55 | 97.20
OA (%) | 91.31 | 89.28 | 92.77 | 94.30
Table 17. OA, PA, and UA obtained computing SwiRed index on the images acquired by Landsat 5 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L5—07 February 2011 | L5—27 March 2011 | L5—25 August 2011 | L5—05 October 2011
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Built-up areas | 77.44 | 71.79 | 76.11 | 72.56 | 58.21 | 87.04 | 86.11 | 79.81
Not classified | 93.90 | 93.97 | 97.43 | 90.50 | 98.07 | 93.09 | 97.37 | 97.43
OA (%) | 91.03 | 89.80 | 94.00 | 95.56
Table 18. OA, PA, and UA obtained computing SwiRed index on the images acquired by Landsat 8 mission. UA, user’s accuracy; PA, producer’s accuracy; OA, overall accuracy.
L8—08 December 2017 | L8—12 August 2018 | L8—22 September 2018 | L8—17 March 2019
Land Cover Class | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%) | PA (%) | UA (%)
Built-up areas | 76.92 | 84.73 | 72.56 | 77.70 | 85.00 | 72.26 | 74.78 | 80.51
Not classified | 97.69 | 98.07 | 78.81 | 96.01 | 97.49 | 93.73 | 97.73 | 92.74
OA (%) | 94.71 | 85.31 | 91.40 | 91.20
