Article

State of Science Assessment of Remote Sensing of Great Lakes Coastal Wetlands: Responding to an Operational Requirement

1 Environment and Climate Change Canada, National Wildlife Research Centre, Ottawa, ON L7S 1A1, Canada
2 Kim Geomatics Corporation, Box 1125, Manotick, ON K4M 1A9, Canada
* Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(18), 3024; https://doi.org/10.3390/rs12183024
Submission received: 29 July 2020 / Revised: 11 September 2020 / Accepted: 13 September 2020 / Published: 16 September 2020
(This article belongs to the Special Issue Remote Sensing for Wetland Inventory, Mapping and Change Analysis)

Abstract

The purpose of this research was to develop a state of science synthesis of remote sensing technologies that could be used to track changes in Great Lakes coastal vegetation for the Great Lakes-St. Lawrence River Adaptive Management (GLAM) Committee. The mapping requirements included a minimum mapping unit (MMU) of either 2 × 2 m or 4 × 4 m, a digital elevation model (DEM) accuracy in x and y of 2 m, a “z” value or vertical accuracy of 1–5 cm, and an accuracy of 90% for the classes of interest. To determine the appropriate remote sensing sensors, we conducted an extensive literature review. The required high degree of accuracy resulted in the elimination of many of the remote sensing sensors used in other wetland mapping applications, including synthetic aperture radar (SAR) and optical imagery with a resolution >1 m. Our research showed that the remote sensing sensors that could at least partially detect the different types of wetland vegetation in this study were the following: (1) advanced airborne “coastal” Light Detection and Ranging (LiDAR) with either a multispectral or a hyperspectral sensor, (2) colour-infrared aerial photography (airplane) with (optimum) 8 cm resolution, (3) colour-infrared unmanned aerial vehicle (UAV) photography with vertical accuracy determination rated at 10 cm, (4) colour-infrared UAV photography with high vertical accuracy determination rated at 3–5 cm, (5) airborne hyperspectral imagery, and (6) very high-resolution optical satellite data with better than 1 m resolution.

Graphical Abstract

1. Problem and Objectives

Great Lakes coastal wetlands play an important role in the Great Lakes ecosystem by protecting shorelines, trapping sediments, maintaining water quality, and providing habitat for several species of plants and animals. In 2018, the Great Lakes-St. Lawrence River Adaptive Management (GLAM) Committee wanted to develop an appropriate strategy for tracking changes in Great Lakes coastal wetland vegetation and to determine how those changes could be related to changes in Great Lakes water level management strategies. The meadow marsh response, in particular, was seen as important because meadow marsh is especially sensitive to water level changes. Wetland vegetation within the meadow marsh class is intolerant of long periods of flooding, but periodic flooding is necessary to stop aggressive emergent plants from taking over. This wetland class also occurs in a narrow hydrologic range as compared with other wetland classes [1]. Because the type of wetland vegetation that has developed in the Great Lakes is linked to historical high and low water levels, specific elevation ranges and water levels are indicators of the type of wetland vegetation that can be present [1]. Thus, elevation and water level measurements are also considered to be valuable for the GLAM. In the past, the GLAM used predictive models to determine how vegetation changes were impacted by changing water levels. The GLAM is now focusing on validating the results from these predictive models under observed water level conditions. Traditionally, the GLAM has used targeted site-scale field sampling to measure responses in wetland vegetation to water level changes. This method provides very detailed information; however, it is expensive in terms of economic and human resources, and it is difficult to scale to other coastal wetlands.
During a coastal wetlands expert meeting in 2017, the GLAM recommended investigating remote sensing options to potentially: (1) increase the spatial coverage of the monitoring, (2) be used for wetland change detection, and (3) be included in a long-term strategy for tracking wetland response to water level changes. Therefore, to form the basis for future planning, the GLAM Committee needs to develop a state of science synthesis of remote sensing options for its specific needs, as no such synthesis currently exists. This paper describes both the development of this synthesis and the results.
The objective was to develop a state of science assessment of remote sensing of Great Lakes coastal wetlands. The assessment was to provide a comprehensive summary of available scientific methods for wetland monitoring using various means of remote sensing. The scope included a detailed assessment of potential options to improve topographic and bathymetric elevation estimates, define wetland extent, and differentiate vegetation communities within wetlands.

2. Materials and Methods

2.1. Defining Wetland Parameters of Interest

A lesson learned from past scans and evaluations of remote sensing capabilities is that, to determine whether remote sensing can meet specific mapping objectives, one must clearly understand who needs the information (the stakeholders), what information is required and how it will be used, what the minimum mapping unit (MMU) is, and what accuracy is required to identify and map the attribute of interest. We met with members of the GLAM Committee and other Great Lakes wetland experts and, through those discussions, determined that the focus would be on assessing the potential of remote sensing to identify the optimum approach, or approaches, to monitor and separate the following six vegetation classes: (1) transition to uplands, (2) meadow marsh, (3) Typha (cattail), (4) miscellaneous mixed emergent (non-persistent emergent), (5) mixed emergent, and (6) floating or submerged aquatic vegetation. A description of each class can be found in Table 1. Meadow marsh, and changes in its area and borders, was identified as the key variable to monitor because the meadow marsh class is especially vulnerable to water level changes and consists of a large variety of vegetation species [1]. Changes in boundaries of two meters or more should be identifiable, and the MMU would be either 2 × 2 m or 4 × 4 m. The required horizontal (x and y) accuracy of a digital elevation model (DEM) was 2 m. Although the optimal vertical accuracy, also known as the elevation or “z” value, was 1 cm, one stakeholder suggested that 5 cm would be acceptable. In this context, vertical accuracy referred to bare earth elevation.
The thematic, or classification, accuracy required was assumed to be 90%, i.e., 90% of all decisions made by the classification algorithm or interpretation method for each MMU should be correct and 90% of the surface area should be put into the correct class.
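The 90% thematic-accuracy target above can be made concrete with a small sketch: overall accuracy is the proportion of correctly classified MMUs, i.e., the diagonal of a confusion matrix divided by its total. The matrix values below are hypothetical, chosen purely for illustration.

```python
import numpy as np

def overall_accuracy(confusion: np.ndarray) -> float:
    """Fraction of all classified units falling on the matrix diagonal."""
    return np.trace(confusion) / confusion.sum()

# Hypothetical 3-class confusion matrix (rows = reference, cols = predicted);
# counts are MMUs, e.g. 2 x 2 m cells.
cm = np.array([
    [90,  5,  5],
    [ 4, 92,  4],
    [ 3,  6, 91],
])
acc = overall_accuracy(cm)
print(f"overall accuracy: {acc:.1%}")  # 91.0%, just meeting the 90% target
```

In practice, per-class (producer's and user's) accuracies matter as much as the overall figure, since a rare class such as meadow marsh can be badly mapped while the overall accuracy stays high.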

2.2. Determine What Literature to Review

A key question is, “What literature should be reviewed?” The contributors to [11] listed some 1400 citations. Demonstrating that wetlands are a worldwide concern, Guo et al. [12] cross-referenced wetland and remote sensing using the Science Citation Index Extended (SCIE) database from Web of Science. They identified 5719 papers for their review. For this study, both [11,12] were reviewed, as were a number of papers provided by the stakeholders. This base of some 25 papers and books led to others related to wetlands and remote sensing in the Great Lakes. In the end, over 1000 papers were scanned, 125 papers were thoroughly examined, and the results were summarized.

2.3. Evaluate Tools and Processes: Caveats, Users’ Needs, Constraints, and Benefits of Remote Sensing

A review or state of science assessment should always begin by defining what is and is not part of the review. This was not a comprehensive review of all research and development in wetlands remote sensing. This study targeted work on wetlands that was related to, or seemed in some way to meet, the information needs specific to the stakeholders associated with the Great Lakes in general and Lake Ontario and the St. Lawrence in particular. Therefore, the focus was on North American wetlands work in general, and the Great Lakes Basin in particular. It largely ignored tidal wetlands, salt-water wetlands, tropical wetlands, prairie pothole wetlands, wetlands in the far north, and those in Mediterranean climates.
Although most of the papers reviewed noted many benefits associated with using remote sensing for wetland monitoring, virtually all of them began with a commentary on the difficulties involved. We chose to begin this paper with a discussion of some of the constraints to provide a realistic setting for this review before identifying the benefits of using remote sensing. The lists of benefits and constraints identified here are meant to be an indication of the issues associated with using remote sensing in complex wetland environments; however, the lists are by no means complete.
Remote sensing is not a “silver bullet” that solves all data problems; however, when used properly and in the right circumstances, it can solve many of the problems of those who need information about the surface of the Earth. For this project, only research by those who seemed mindful of the constraints they faced was reviewed. Some of the many constraints or issues associated with the use of remote sensing (RS) data and technologies that must be kept in mind are the following:
  • The hammer and nail syndrome: if the only tool you have is a hammer, every problem looks like a nail. Researchers with access to only one RS data type tend to use that data type whether or not it is the best one for the purpose. This sort of bias, forcing results from a particular RS data type regardless of its suitability, was considered when reviewing the literature.
  • Calculating and evaluating accuracies can be problematic due to a lack of high quality validation data and highly variable approaches to assessing accuracy from one study to the next.
  • The desire to obtain good results sometimes led researchers to overgeneralize the data classes to the point of rendering them useless.
  • Remote sensing almost always requires well planned and well executed fieldwork for training interpreters or for “training” computer image analysis systems, as well as for verification of results.
  • The term “ground truth” is often used to refer to data collection in the field. The “awful truth about ground truth” is that sometimes the “true” information is collected by the remote sensor, not by those on the ground, especially if the ground data are lacking, largely incomplete, located imprecisely, collected by inexperienced individuals, or not collected at the same time as the remote sensing data.
  • Accuracies can vary greatly depending upon the interpreter’s skill and understanding of what is being interpreted. For example, wetland experts, not urban planners, should be interpreting imagery over wetlands.
  • Success in small research and development test sites does not always translate into fully operational applications over much larger areas.
  • Costs for operational programs are often difficult to estimate without soliciting information through a formal “Request for Information” or “Request for Quote”. Although, in some cases, RS data can be freely available, purchasing the appropriate RS data can often be quite expensive, especially if doing so over large areas or at high resolution.
  • Access to large amounts of RS data, advanced algorithms, and powerful processing systems does not guarantee useful results.
  • Not all users have access to the same quality and quantity of data types due to government policies, funding, etc., which can lead to problems getting appropriate data at the right time if wetlands are considered to be less important than other applications.
  • Cloud cover can prevent the acquisition of optical RS data.
  • It is often difficult to acquire multisensor RS data on the same (or even similar) dates.
  • It is often difficult to coordinate remote sensing data acquisition with ground data collection.
  • Some vegetation types cannot be distinguished among surrounding but different vegetation types when looked at from above, although on the ground they are quite obviously different.
  • Some RS data available for research purposes, today or in the past, may not be available for operational use in the future.
  • The fact that many researchers are assessing the same approach does not necessarily mean that the approach is a valid one. There is sometimes a herd instinct when it comes to assessing new approaches to image analysis, i.e., one researcher attempts a new approach, obtains interesting results, and many others soon follow.
As there are many constraints on the effective use of remote sensing, there are also many potential (but not always realized) benefits of using the technology. The use of remote sensing can lead to the following:
  • Reduction of fieldwork, and in theory a reduction in costs;
  • Mapping of larger areas, conducted faster and at lower cost;
  • Effective and standardized monitoring of change over time;
  • Quantitative measures of past and current conditions;
  • The creation of data to test models; and
  • A better understanding of the local environment or geography of an area.

3. Literature and Technology Review

3.1. Leading to the Recommended Remote Sensing Tools

Introduction: Sensors and Platforms and Processing Approaches

The basic GLAM requirements, which combined the identification of meadow marsh and changes in its extent, several other wetland types, and a minimum mapping unit of either 4 square meters or 16 square meters, quickly eliminated most of the remote sensing tools examined. The elimination usually occurred even without the additional 90% accuracy requirement. The rest of this review explains both the limitations and usefulness of the tools reviewed in the context of the GLAM provisions, and also discusses how remote sensing can be used more generally to study wetlands in the Great Lakes Basin. The sensors and processing approaches included in this review are listed in Table 2.

3.2. Sensors and Platforms

3.2.1. Introduction

The sensors and platforms discussed below are those that we deemed to be potentially useful for addressing the GLAM requirements. In this paper, each sensor and platform is discussed separately, which mirrors one of the problems in the field: all too often, researchers (and reviewers) tend to focus on just one of the available sensors or platforms. In fact, recent research, including some reviewed here, has found that combinations of two or more data types and approaches can lead to a better solution [9,13,14,15]. Indeed, this multisensor approach has appeared in some recent commercial offerings of direct relevance to this study; Teledyne Optech, for example, combined their airborne Light Detection and Ranging (LiDAR) sensor with a Compact Airborne Spectrographic Imager (CASI) hyperspectral sensor.

3.2.2. Synthetic Aperture Radar

Synthetic aperture radar (SAR) is one of the sensors often mentioned and recommended for wetland mapping [16]. It is important to note that SAR systems, as well as SAR data processing and interpretation can be complex. To effectively set up a SAR monitoring system, one must understand the physics of the entire process and how the SAR signal interacts with the features being studied. Long before SAR data from satellites were routinely available, airborne SAR was studied for wetlands and related work in Asia, Africa, and Canada, in the early-mid 1990s [17]. In the first major manual on imaging radar applications, work on wetlands was well represented [18].
In addition to the complexities of SAR data, there are other limiting factors associated with its application for meadow marsh monitoring and other wetland classes of interest to the GLAM. The authors of [19] explained that, because of the inherent “speckle” in SAR data, one needs larger areas for training for the analysis, a factor that further reduces the resolution and the effective MMU that is possible with SAR data.
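The trade-off described in [19], averaging over larger areas to suppress speckle at the cost of resolution, can be illustrated with a minimal multilooking sketch. Single-look SAR intensity over a homogeneous target is approximately exponentially distributed (coefficient of variation near 1); block-averaging n × n pixels cuts the coefficient of variation roughly by a factor of n while coarsening the effective resolution by the same factor. The simulation below is illustrative only and does not model any specific sensor.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated single-look intensity over a uniform target: fully developed
# speckle makes intensity exponentially distributed (CV ~ 1).
single_look = rng.exponential(scale=1.0, size=(512, 512))

def multilook(img: np.ndarray, n: int) -> np.ndarray:
    """Average n x n pixel blocks: reduces speckle, coarsens resolution."""
    h, w = img.shape
    cropped = img[: h - h % n, : w - w % n]
    return cropped.reshape(h // n, n, -1, n).mean(axis=(1, 3))

for n in (1, 2, 4):
    looks = multilook(single_look, n)
    cv = looks.std() / looks.mean()  # CV shrinks roughly as 1/n for n x n looks
    print(f"{n}x{n} looks: CV = {cv:.2f}")
```

This is why classifiers trained on SAR need larger homogeneous training areas: the per-pixel radiometry is noisy until enough looks are averaged, which directly inflates the effective MMU.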
A useful review of wetland remote sensing by [19] summarized both optical and SAR work (including 200 references) and the value of polarized SAR data in particular. They noted that SAR data were useful in that the data could penetrate cloud cover. This is a distinct advantage over optical data, all other factors being equal. However, they also pointed out that SAR data often required more time-consuming preprocessing of the data and this required more advanced understanding of SAR processing and the physics of the interactions between the sensor and the feature of interest. In wetlands, these relationships are complex, as has been noted by other authors whose work involved SAR data and wetlands, including [13,16,20,21,22]. Thus, although SAR can improve the mapping accuracy of some wetland classes, without the required equipment and personnel, SAR cannot be effectively utilized.
Radar comes in several wavelengths, with significant differences among them for wetland applications. X-band (2.4–3.75 cm), C-band (3.75–7.5 cm), and L-band (15–30 cm) SAR data are most commonly used for wetland applications, with X-band having the shortest wavelength and L-band the longest. Here, most of the SAR data referred to have been C-band data, which have been shown to be useful for mapping surface water and deciduous swamps during the leaf-off season [23,24]. Flooded vegetation was mapped by [23], whereas bulrush (Scirpus americanus, Scirpus lacustris, Scirpus fluviatilis, Eleocharis spp.), cattail (Typha), wild rice (Zizania), reed canary grass (Phalaris arundinacea), and swamps were mapped by [25]. Mapping these classes is important for many wetland ecologists, government departments, and private industry, and C-band satellites such as Radarsat-2, the Radarsat Constellation Mission (RCM), or Sentinel-1 would be recommended for this purpose. Nevertheless, C-band would not be the recommended approach for the GLAM, which needs to map more detailed classes, such as meadow marsh and mixed emergent, at a smaller MMU than can be achieved with currently available C-band SAR data.
L-band radar data, such as those provided using the Japanese ALOS PALSAR [26], can penetrate vegetation canopy better than X- or C-band data because of a longer wavelength. Bourgeau-Chavez et al. [27] used ALOS PALSAR data for classifying wetland ecosystem types which included emergent wetland, forested wetland, bog and fen, and wetland monocultures (Typha, Phragmites, and Schoenoplectus). Some of these same classes were identified by the GLAM as being important. They achieved an overall accuracy of 94% for the entire coastal Great Lakes basin, and a range between 86% and 96% for the individual Great Lakes. Of particular interest was the fact that multi-date L-HH PALSAR was found to be useful for mapping invasive Phragmites australis, which was taller than native species [28]. Considering the high accuracies given by [27], it is important to point out that the minimum mapping unit in their work was 0.2 hectares, or as much as 125 to 500 times larger than the area of the MMU being investigated for the GLAM. Similar high accuracies were noted in the use of multi-date PALSAR and Landsat data by [13] in Ethiopia. In the latter case, topographic data were found to be useful for improving the results, a factor noted in the GLAM requirement to include topographic information. Although the PALSAR data yield excellent results, they can be improved through the use of a learning classification system such as random forest [13,15]. These tools are further discussed under processing approaches.
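As a rough illustration of the random forest approach mentioned above [13,15], the sketch below trains scikit-learn's RandomForestClassifier on synthetic per-pixel feature vectors standing in for a stacked PALSAR backscatter plus Landsat reflectance dataset. The class names, feature values, and clean separability are all invented for the example; real wetland classes are far less cleanly separable than this.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical 5-feature stack per pixel, e.g. [L-HH, L-HV, red, NIR, SWIR],
# rescaled to comparable units; three invented classes for illustration.
centers = np.array([
    [0.0, 0.0, 0.0, 0.0, 0.0],   # "marsh" (assumed)
    [2.0, 2.0, 2.0, 2.0, 2.0],   # "swamp" (assumed)
    [4.0, 4.0, 4.0, 4.0, 4.0],   # "upland" (assumed)
])
labels = rng.integers(0, 3, size=1500)
X = centers[labels] + rng.normal(0.0, 0.5, size=(1500, 5))

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {rf.score(X_te, y_te):.2f}")
```

In operational work, the feature stack would come from co-registered multi-date SAR and optical rasters, and held-out accuracy would be assessed against independent field plots rather than a random split of the same pixels.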
There are three main issues with L-band PALSAR data when considering using this data for the requirements outlined by the GLAM Committee or by other wetland monitoring programs, which are the following: (1) the system is not a continuing program; (2) the data are expensive ($2100 for one scene); and (3) the products derived from interpretation of these data, at a nominal 10 m resolution, do not meet the stated minimum mapping unit for this assessment. Regarding the first issue, there are early indications [29] that Japan plans to launch a follow-on L-band system in the fiscal year 2020. Regarding the second issue, the data costs can be reduced by 40% for a volume order. Nevertheless, L-band SAR imagery is an excellent tool for applications with a larger MMU and more general wetland classes.
Another important area of wetland remote sensing is monitoring water level changes. As previously mentioned, meadow marsh is sensitive to changes in water level; thus, the GLAM could benefit from a SAR method to measure small changes in water level. If SAR data could detect small changes in water level over time, this could be used as an indicator of suitable habitat for meadow marsh. To detect changes in water level using SAR data, the interferometric synthetic aperture radar (InSAR) technique is used, which requires coherence, a measure of mechanical stability for features on the ground [30]. With the launch of several SAR constellations with more frequent exact revisit times, coherence should be maintainable throughout the growing season in marshes and swamps, which would result in more accurate monitoring of water levels. Research by Brisco et al. [31] demonstrated that water level changes of approximately 4 cm (or possibly better) could be detected using Radarsat-2 C-band data in wetlands where there was suitable coherence, which was often observed in marsh and swamp. Although [31] reported promising results, more recent InSAR research by Chen et al. [32], which examined coherence at Long Point, Ontario, Canada, showed poorer results using Radarsat-2 and Sentinel-1: coherence was only maintained during certain periods of the year, and the results suggested that InSAR measurements of water level were often underestimated. However, with consistent data from the recently launched RCM, which has more frequent exact revisit times than Radarsat-2 and Sentinel-1, one should be able to obtain much more consistent coherence and measure water level changes more accurately. The key with C-band appears to be a more frequent revisit time, but more research is needed to verify this.
L-band is preferred over C-band for InSAR applications because its longer wavelength results in greater penetration of the vegetation canopy, and therefore coherence is maintained over longer revisit periods. The use of SAR data over a season could offer a solution for monitoring seasonal flooding of wetlands and the response of general wetland classes to controlled water levels in the Great Lakes. If RCM or high-resolution L-band data could be acquired and water levels accurately measured, these measurements could possibly be used as a surrogate for meadow marsh monitoring by the GLAM.
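The InSAR water-level measurements discussed above rest on a simple phase-to-height conversion: for double-bounce scattering in flooded vegetation, a vertical water-level change Δh maps to an interferometric phase change Δφ roughly as Δh = −λΔφ/(4π cos θ), where λ is the radar wavelength and θ the incidence angle. The sketch below applies this relation with assumed Radarsat-2 C-band values (λ ≈ 5.55 cm, θ = 35°); the sign convention and numbers are illustrative only.

```python
import math

def water_level_change_cm(dphi_rad: float, wavelength_cm: float,
                          incidence_deg: float) -> float:
    """Vertical water-level change implied by interferometric phase under
    double-bounce scattering: dh = -(lambda / (4*pi)) * dphi / cos(theta)."""
    return (-(wavelength_cm / (4.0 * math.pi)) * dphi_rad
            / math.cos(math.radians(incidence_deg)))

# Assumed C-band geometry: wavelength ~5.55 cm, incidence angle 35 degrees.
dh = water_level_change_cm(dphi_rad=-math.pi / 2,
                           wavelength_cm=5.55,
                           incidence_deg=35.0)
print(f"water level change: {dh:.1f} cm")  # roughly 0.8 cm
```

The small phase-to-height factor is why C-band can, in principle, resolve centimeter-scale level changes; the practical limit is keeping coherence, not the phase sensitivity itself.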
To date, research on the use of SAR has provided two somewhat different answers to the question at hand, i.e., “Are SAR data alone useful for monitoring changes in meadow marsh and the other five classes?” Research on the application of C-band SAR, of the sort that would be provided by the RCM, suggests that there could be some applications associated with some elements of wetland monitoring, including delimiting water boundaries. However, at this stage, C-band and X-band SAR would appear to be of limited use to the GLAM. It is important to keep in mind that C- and X-band SAR data are often seen as complementary to optical data for shoreline mapping, among other wetland applications [33]. The work by Bourgeau-Chavez et al. [27] also saw a complementary role for L-band SAR and Landsat data to provide a repeatable process, albeit at 20–30 m resolution. The complementary role for C-band SAR is one that should be further researched, although it appears that the potential for L-band use in conjunction with other remote sensing data could lead to more immediate results, assuming that L-band data are available in the future.
L-band SAR appears to have some potential for monitoring meadow marsh, albeit with a maximum resolution of 7–10 m (depending on polarization and looks) [26], which does not meet the resolution or minimum mapping unit required for the application being investigated here. Given its already demonstrated value in wetland monitoring [27], it is suggested that the GLAM or other stakeholders encourage the governments involved to seek access to data from the follow-on system when it comes online in 2020, and that these tools be considered part of a package to monitor the Great Lakes basin wetlands as a whole. In addition, two other L-band SAR satellites with nominal resolutions similar to PALSAR are planned for launch in the future. The data from these satellites are expected to be freely available and should therefore be considered for a Great Lakes basin-wide wetland monitoring system.

3.2.3. Optical Satellite Data

Typically, optical satellite imagery includes data from sensors that image in a panchromatic band and/or the blue, green, red, near-infrared, and occasionally the short-wave infrared bands, such as those on Landsat-8 [34]. In turn, these are usually classified by the size of the pixel for which reflectance values are recorded. Here, we arbitrarily considered low to medium resolution imagery to be 10–30 m, medium resolution imagery 3–9 m, and high-resolution imagery better than 3 m. The rationale for these resolution groupings was the specific MMU being investigated, i.e., 2 × 2 m and 4 × 4 m.
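The resolution groupings above follow directly from the MMU arithmetic: a class decision for a 2 × 2 m or 4 × 4 m cell requires that at least one, and preferably several, pixels fall within the cell. The back-of-the-envelope sketch below (illustrative only) shows why 10–30 m data cannot serve this MMU while sub-meter data can.

```python
def pixels_per_mmu(mmu_side_m: float, pixel_size_m: float) -> float:
    """Number of pixels covering one square MMU cell (area ratio)."""
    return (mmu_side_m / pixel_size_m) ** 2

# GLAM MMUs of 2 x 2 m and 4 x 4 m against the resolution groupings used here
for pixel in (30.0, 10.0, 5.0, 3.0, 0.5):
    print(f"{pixel:>5.1f} m pixels: "
          f"{pixels_per_mmu(2, pixel):8.3f} px per 2x2 m MMU, "
          f"{pixels_per_mmu(4, pixel):8.3f} px per 4x4 m MMU")
```

At 30 m, a single pixel covers over 200 of the 2 × 2 m cells, so every MMU is a mixed pixel by construction; at 0.5 m, each 2 × 2 m cell contains 16 pixels, enough to support a per-cell class decision.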

3.2.4. Low-Medium Resolution Data: 10–30 m

It is assumed that 10–30 m data, such as Landsat TM, are not useful on their own for monitoring change in the types of wetland vegetation of interest here, i.e., the six classes with an MMU of 4 m² to 16 m². At best, such imagery would only be useful for monitoring gross trends or when used in concert with other data.
A number of authors have concluded that Landsat data were often best combined with SAR data for monitoring wetlands. However, even in the best cases, the research reviewed did not achieve accuracies at the detail required for the GLAM assessment. That said, there have been studies demonstrating the usefulness of combining Landsat with other relatively coarse data, or pan-sharpened optical data (in which a single panchromatic band is used to increase the spatial resolution of a multispectral image), merged with SAR data to carry out monitoring. For example, White et al. [21] used Landsat-8 data for segmentation combined with SAR data, and Franklin et al. [35] combined Landsat-8 and Radarsat-2 data to classify the Hudson Bay Lowlands Ecoregion, where less detail is required as compared with the Great Lakes.
Multitemporal imagery and/or the fusion of optical and SAR data have been used with some success for wetland mapping. For example, [27] mapped wetland classes as well as surrounding land uses for the entire Great Lakes using Landsat TM and PALSAR data from spring, summer, and fall. On the basis of a comparison of PALSAR and Landsat, they concluded that the Landsat TM bands were better able to differentiate upland cover types. Gallant et al. [36] also used Landsat data with SAR data to help reduce confusion between wetland features and upland grasslands. This research suggests that SAR data contribute important monitoring information, in addition to optical data, to help determine the status of wetlands over time. The authors concluded that they could reliably identify, over several years, wetland vegetation stands as small as 30 m in width. In another example, Grenier et al. [37] used Landsat-7 and Radarsat-1 data to map five wetland classes (bog, fen, swamp, marsh, and shallow water) with an MMU of 1 hectare. This is a far larger MMU than the one of concern here, and five times that of [27]. Although the remote sensing data used in these examples were too coarse to map the detailed wetland classes of interest to the GLAM, this research showed that multitemporal and multisensor remote sensing improved the accuracy of wetland classification and should be considered by operational programs regardless of their MMU.
Several authors have summarized the problems associated with using Landsat TM data for wetland monitoring. The authors of [36] noted that wetlands have proven to be difficult to map remotely while achieving both high accuracy and consistency due to water being very dynamic and overstory vegetation covering wetlands during certain periods of the growing season. In addition, the size of many wetlands was smaller than could be detected by most civilian satellite sensors, thus in some cases wetland pixels were classified as uplands. Amani et al. [20] noted that when they considered optical data, even when used with radar, the classification of wetlands using remote sensing was difficult because some wetland types were similar both spectrally and texturally. Klemas [38] concluded that Landsat TM was acceptable for monitoring changes in large watersheds, but not freshwater wetlands because they were small, patchy, and spectrally impure. Medium-resolution sensors were not able to discern patchy wetlands because there were lots of mixed pixels, which decreased mapping accuracy. Thus, if the objective is to map upstream, freshwater wetlands, high spatial resolution and potentially hyperspectral imagery is required. Our literature review clearly indicated that low-medium resolution optical imagery cannot be used by the GLAM Committee to map the six detailed wetland classes that they identified.
Although low-medium resolution optical data cannot map detailed wetland classes, there is one potential application for the GLAM to utilize low to medium resolution satellite data. Providing background to the idea was the approach by [13] who used wet and dry season Landsat data and derived vegetation and wetness indices. In all, 19 vegetation, soil, and water indices were derived from the Landsat-5 TM surface reflectance data. With these indices and other data, they mapped eight wetland types and three terrestrial/upland classes. It may be that somewhat more general data, when used either singly or with another dataset, could be converted to a specific index or set of indices that would indicate the presence of the type of change seen as important by the GLAM, albeit at a coarser MMU.
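The index-based approach of [13] boils down to simple band arithmetic. As one minimal example, two of the most common indices, NDVI and McFeeters' NDWI, are computed below from hypothetical Landsat surface-reflectance values; the cover-type reflectances are invented for illustration and are not taken from the cited study.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: high over dense vegetation."""
    return (nir - red) / (nir + red)

def ndwi(green, nir):
    """Normalized difference water index (McFeeters): high over open water."""
    return (green - nir) / (green + nir)

# Hypothetical surface reflectance for three cover types (assumed values):
#                 water  marsh  upland
green = np.array([0.09,  0.06,  0.05])
red   = np.array([0.07,  0.05,  0.06])
nir   = np.array([0.03,  0.35,  0.30])

print("NDVI:", np.round(ndvi(nir, red), 2))   # negative over water, high over vegetation
print("NDWI:", np.round(ndwi(green, nir), 2)) # positive over water, negative over vegetation
```

A time series of such indices, even at a coarse MMU, could flag wetting or drying trends of the kind the GLAM considers important, without attempting a full class-by-class map.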

3.2.5. Medium Resolution Data: 3–9 m

The data that fall into the category of medium resolution are exemplified by SPOT satellite data [39] with a resolution in the order of 5 m. SPOT data have been used extensively in the study of wetlands, primarily for mapping. In their review, Ozesmi et al. [40] listed SPOT data along with Landsat as one of the most used systems in wetland mapping. It is not surprising that SPOT data, with higher spatial resolution as compared with Landsat, have been used in operational wetland mapping, for example, by Ducks Unlimited, as well as others [41]. A broad range of wetland classes have been derived from SPOT data, either from one image date during the height of the growing season or by using multiple dates. However, even with their broad use in wetland mapping, given the spatial resolution of this type of data, the application of SPOT imagery to the GLAM requirements, as outlined in the introduction, is regarded to be limited. However, the higher resolution data in this group, such as imagery from Planet’s Dove constellation [42] with resolution of 3 m, could warrant an evaluation for monitoring the changes of the six wetland classes of interest to the GLAM.

3.2.6. High-Resolution Data: <3 m

Imagery in this class ranges from 2.5 m panchromatic ALOS (now decommissioned) to 0.82 m IKONOS imagery, down to WorldView-4 at 0.31 m. One of the issues found with using high-resolution imagery is its greater sensitivity to within-class spectral variance; separating land cover types with spectrally mixed pixels is therefore more challenging than with medium-resolution imagery [38]. As with the medium resolution data discussed in the previous subsection, a review of the literature on wetland applications showed only a few relevant examples of the data’s use. Work with IKONOS data has been cited in [43]. The IKONOS data were used with object-based image analysis (OBIA), but the results reported (76% accuracy) suggest that the data would not be useful for the application assessed here. IKONOS data were also used in an assessment reported in [44], in which 0.82 m panchromatic and four 3.2 m multispectral bands (blue, green, red, and near infrared) were used to map seven wetland classes. Although the point of the exercise was to assess the contribution of LiDAR-derived terrain derivatives to improving mapping accuracy, the result with IKONOS data alone had an overall accuracy of 71.8% and was only marginally better with the terrain information. Given the low accuracy and larger MMU, these results also call into question the use of IKONOS data for the GLAM purposes. The sample ALOS optical imagery available to the authors was inconclusive as to whether the data could be useful for the GLAM; however, there could be some potential for monitoring wetland changes in applications that accept a larger MMU.
WorldView data (or similar data of 0.5 m resolution or better) appear to be worth assessing for the GLAM’s six wetland classes, or at least as indicators of change. One study compared WorldView-2 and Landsat-8 in a coastal saltmarsh using a maximum likelihood classifier (MLC), a support vector machine (SVM), and an artificial neural network (ANN). The wetland classes were Phragmites australis, Sporobolus virginicus, Ficinia nodosa, and Schoenoplectus sp., mangrove, and two tree species (Avicennia and Casuarina sp.). The smallest vegetation patches were approximately the size of the spatial resolution of WorldView-2, and others were larger than the spatial resolution of Landsat-8. The overall mapping accuracy results for the WorldView-2 imagery were 92.12% (SVM), 90.82% (ANN), and 90.55% (MLC), as compared with 82.04% (SVM), 77.31% (ANN), and 75.23% (MLC) for Landsat-8 [45]. These results suggest that it would be possible to achieve the MMU and mapping accuracy required by the GLAM with WorldView-2 data; this needs to be tested with the six GLAM vegetation classes. However, the cost of WorldView-2 data may make this option impractical for a continuous monitoring program.

3.2.7. Airborne Light Detection and Ranging (LiDAR) Data

Airborne LiDAR data are products of active sensors that produce an output much more complex to process (and understand) than the optical satellite data in the previous section. Similar to SAR, LiDARs rely on sending a pulse (of light instead of microwave) and measuring the return. However, since the pulses sent are in the green band (for bathymetric applications), near-infrared band (for topographic applications), or blue-green band for coastal areas in the newer sensors, LiDARs cannot penetrate cloud, but can be used at night. Furthermore, there are many complexities that are introduced by the type of vegetation, the presence of water, the pulse frequency, and other factors. These factors must be understood and kept in mind when using or planning to use LiDAR data, as has been well described in the literature [46,47,48,49,50,51]. As with SAR, there are scientists who tend to specialize in the processing and understanding of LiDAR data.
One of the factors of interest to the GLAM is vertical accuracy, because elevation can be used as a surrogate for determining the extent of the six wetland classes. A point cloud generated from LiDAR data can be used to measure vegetation height, and therefore help to classify the different wetland vegetation types. LiDAR has long been seen as a tool that can yield much more accurate vertical data than the digital elevation models (DEMs) provided by governments. The information available from suppliers suggests that vertical measurements acquired by LiDAR can now be mapped with accuracies in the order of 3–10 cm, assuming certain parameters for the flight, type of vegetation, presence of water, and processing [52,53]. The rapid advance of LiDAR technology can be seen in the vertical accuracies of up to 3 cm cited in 2015–2017, as compared with the 10–15 cm cited in a 2011 literature review [54].
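As a rough illustration of how a point cloud can yield vegetation height, the sketch below grids a toy cloud and takes the max-minus-min return spread per cell as a crude height proxy. Operational workflows use classified ground returns and TIN interpolation, so this is illustrative only; the coordinates and cell size are invented:

```python
import numpy as np

def canopy_height_grid(x, y, z, cell=2.0):
    """Crude per-cell vegetation height from a LiDAR point cloud.

    Ground elevation is approximated by the lowest return in each cell and
    canopy height by the max-minus-min spread. Real processing chains use
    classified ground points and surface interpolation instead.
    """
    ix = ((x - x.min()) / cell).astype(int)
    iy = ((y - y.min()) / cell).astype(int)
    extremes = {}
    for i, j, elev in zip(ix, iy, z):
        lo, hi = extremes.get((i, j), (elev, elev))
        extremes[(i, j)] = (min(lo, elev), max(hi, elev))
    return {k: hi - lo for k, (lo, hi) in extremes.items()}

# Toy cloud: one 2 m cell with ground returns (~100 m) and ~1.5 m vegetation
x = np.array([0.5, 0.8, 1.1, 1.4])
y = np.array([0.5, 0.6, 0.7, 0.9])
z = np.array([100.00, 100.02, 101.40, 101.52])
print(canopy_height_grid(x, y, z))  # ~1.52 m in the single occupied cell
```

Height layers of this kind, at centimetre-level vertical accuracy, are what would let elevation serve as a surrogate for wetland class extent.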
Research has demonstrated that vertical accuracy information provided by LiDAR, combined with other information such as aerial photographic interpretation has led to improved mapping of wetlands [48]. Bare earth LiDAR-derived DEMs have been proven to be useful in better understanding variations in treed and non-treed wetlands including, one might infer, meadow marsh environments. Accuracy was improved by 8% over aerial photography alone. However, the use of LiDAR data alone proved to be insufficient, at that time, to meet the accuracy requirements for the GLAM [48]. Improvements associated with the use of LiDAR in mapping in a low canopy environment on the Belgian coast were also demonstrated by [55]. Accuracy, in the fourteen-class map, went from 55% to 71% when fused with multispectral data from a digital camera. These accuracies did not meet the standards set for this assessment. Nevertheless, other uses for LiDAR that have been cited included differentiating invasive species such as Phragmites australis from low marsh plants [48], which could still be valuable information for the GLAM Committee, just not at the MMU or mapping accuracy they desire.
Although early LiDAR systems were limited in their ability to contribute to wetland mapping except in a complementary fashion, more recent developments have produced multispectral LiDARs such as Teledyne Optech’s Titan system, which provides data from shallow water to the land as an integrated coastal mapping system [54]. Morsy [56] developed an automated approach for using such data to accurately identify the shoreline, something that was not easily done with previous LiDAR systems. He also provided an excellent review of the LiDAR literature on shoreline discrimination and the problems inherent in the use of this technology. Other relatively recent advances have seen LiDARs combined with hyperspectral scanners and aerial cameras in one integrated data collection system.
The integration of LiDAR with a hyperspectral sensor or an aerial camera in one package has led to a whole new potential approach to both wetland mapping and understanding. This was noted by [57], who provided a useful table summarizing common airborne LiDAR data types and derivatives that have proven useful for wetland-related applications; for each entry, the table described the data format or derivative, the wetland-related application, and a supporting reference. They also noted that a more complex analysis of the LiDAR returns could yield further information. However, LiDARs fully integrated with hyperspectral or aerial camera systems have been reported to cost more [58]. In 2015, Madden et al. [50] noted that LiDARs could be flown on unmanned aerial vehicle (UAV) platforms. The use of small, robust LiDARs has been accelerated by the need for laser-based guidance systems in autonomous vehicles. As of 2018, there were 12 LiDAR sensors made for use on UAVs [59]. Most of these systems are so new that the stated accuracies do not appear to have been verified for the application being assessed by the GLAM.
The authors of [57] concluded that there were still several limiting conditions and uncertainties with LiDAR system settings and methods for measuring vegetation characteristics [46], and that these details were often not available in the literature or provided by the data supplier. Additionally, wetlands can be challenging for LiDAR because of its limited ability to penetrate dense vegetation, and the presence of water may hinder the sensor’s capacity to register returns. LiDAR has an obvious and well researched role to play in wetland monitoring by introducing more precise vertical information, which in turn could help to classify the six wetland classes identified by the GLAM. However, the future is somewhat less clear given the projected use of UAV-based LiDARs for vertical accuracy and of one-stop integrated laser/camera or laser/hyperspectral systems. It is worth noting that Reif [60] was of the opinion that, although the accuracy could not be guaranteed, a LiDAR/hyperspectral dataset should meet the GLAM requirements.

3.2.8. Airborne Hyperspectral

Hyperspectral imaging, sometimes referred to as imaging spectroscopy, generates very fine spectral data (bands roughly 10 nm wide) in up to several hundred spectral bands. The interest in hyperspectral data comes from the detailed spectral information one can obtain, which can help to identify subtle differences among the vegetation types found in complex wetland environments such as the one of interest to the GLAM. Although aircraft have been the primary platform for obtaining hyperspectral data, one satellite system operated for many years, and there are long-term plans for others. The U.S. Hyperion satellite hyperspectral system, which was shut down in 2017 after 17 years of operation, had 220 unique spectral channels with 30 m spatial resolution and a 7.5 km swath. Several authors cited by Guo et al. [12] reported that Hyperion achieved good results consistent with the spatial resolution of the system, which was significantly less detailed than what the GLAM called for. However, Hyperion’s wetland mapping results were better than those obtained with QuickBird. This may be yet another indication that better spectral data can compensate for poorer spatial resolution, and that future hyperspectral satellite or aircraft missions could meet the needs of the GLAM.
Several hyperspectral airborne sensors are available. Itres Research Limited of Canada has been producing its CASI systems for over thirty years, and the CASI system has been selected by Teledyne Optech for integration into its LiDAR system. A considerable amount of work has been done on coastal wetlands with these systems, including much of the UK coast, large areas of British Columbia, and sites in the Great Lakes and elsewhere. Other hyperspectral systems include the U.S. AVIRIS research system, the Australian HyMap system, and Finland’s AISA.
Klemas [61] noted that, when LiDAR, hyperspectral, and radar data with narrow-band vegetation indices were used in combination, some wetland species could be separated, and researchers were able to improve estimates of biochemical and biophysical wetland vegetation parameters, for example, water content, biomass, and leaf area index. Airborne LiDARs have also been applied with hyperspectral imagers to map wetlands, beaches, coral reefs, and submerged aquatic vegetation. Guo et al. [12] cited several authors who successfully mapped invasive species of different types and changes in vegetation, with accuracies that ranged from poor to 90%. In addition, Kalacska et al. [62] used CASI data to examine foliar chlorophyll and nitrogen content in 19 species in the Mer Bleue wetland near Ottawa.
The research to date, together with our examination of the available hyperspectral imagery, suggests that with fine enough spatial resolution, airborne hyperspectral data would be able to monitor the six vegetation classes and the changes of interest to the GLAM. One interesting observation made by [38] was that aerial hyperspectral image analysis was too complex for standard National Estuarine Research Reserve System (NERRS) staff and too costly for large NERRS sites or complete watersheds; this should be taken into consideration by the GLAM or others planning to develop a detailed wetland monitoring program. It is not clear what vertical accuracy could be obtained with hyperspectral sensors.

3.2.9. Aerial Photography (by Airplane)

Relatively low altitude orthorectified colour-infrared (CIR) or false colour (bands displayed as near-infrared, red, and green to better visualize vegetation) aerial photography with a resolution of 0.3 m has been widely used to characterize most of the wetland classes of interest to the GLAM, and recent changes in them [63,64]. However, since colour-infrared imagery cannot penetrate water, sub-aquatic vegetation cannot be identified or mapped. The optimal time of year appears to be mid-summer. Stereo imagery and elevation information add to the ease of extracting the required information, and field work should coincide with image acquisition. Other researchers working in the Great Lakes Basin have effectively used visual interpretation of aerial photography [14].
Historical changes in meadow marsh have been monitored over a period of decades [7] using a variety of aerial photography, including panchromatic imagery from the 1950s, as well as more recent digital aerial photography. Scales have ranged from 1:4800 to 1:40,000 [65]. Ground data sampling has been done along randomly selected transects, and a topographic cross-section has been surveyed using a laser transit. In [63], the 16 wetlands included in the Wilcox et al. [7] study were re-evaluated using aerial photointerpretation, which could serve as a new baseline for subsequent studies. Howard et al. [64] also developed an approach for using aerial photography to map meadow grass changes in the Great Lakes basin. Both approaches appear to be consistent with how photographic interpretation should be carried out, as described in some detail in the Manual of Photographic Interpretation (various chapters on the process of interpretation, especially [66]).
Clearly, colour-infrared airborne imagery appears to meet the GLAM requirements except for submerged aquatic vegetation, as it was this imagery that was used to establish the baseline referred to in [63]. The precision of vertical accuracy information that is obtainable depends on the flying height, type of camera, and overlap of the imagery obtained.

3.2.10. Aerial Photography (by UAV or Drone)

Ten years ago, imagery from an unmanned aerial vehicle (UAV) would not have been considered for the application being assessed here. As recently as five years ago, the technology could not have produced the vertical accuracy demanded, or the quality of colour-infrared imagery. Today, things have changed.
According to product specifications, the more advanced UAVs such as Trimble’s UX5 HP [67,68] or the eBee real-time kinematic (RTK) [69] can produce colour-infrared imagery and vertical information with an accuracy of a few cm, and can be preprogrammed to cover a specific area, even one with an odd shape. They can cover up to 80 hectares with 2 cm pixels in one flight of 50 min duration. According to the product sheets, nine of the sixteen test sites identified by [64] could each be covered in one such flight. Three more test sites would require two flights, two would require three flights, and the two largest would require five flights. A supplier of UAV mapping services reported that the more advanced systems offered vertical accuracies of a few cm, but tended to be slower and covered less area per flight, and that the 10 cm data from the less accurate system were sufficient for their engineering purposes.
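The flight counts above follow from simple arithmetic against the quoted ~80 ha covered per 50 min flight. A minimal sketch (the site areas are hypothetical values, chosen only to illustrate counts of one, two, three, and five flights):

```python
import math

COVERAGE_HA_PER_FLIGHT = 80  # per the UX5 HP / eBee RTK product sheets cited

def flights_needed(area_ha, per_flight=COVERAGE_HA_PER_FLIGHT):
    """Number of ~50 min flights needed to cover a site of area_ha hectares."""
    return math.ceil(area_ha / per_flight)

# Hypothetical site areas (ha) for illustration
for area in (60, 150, 210, 380):
    print(area, "ha ->", flights_needed(area), "flight(s)")
```

A site of 60 ha fits in one flight; anything over 320 ha needs five.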
An added advantage to the use of UAVs is that fieldwork can easily be conducted at precisely the same time as the flights, the UAV operator and field worker can be the same person, and, in the worst case, can travel to the site in the same vehicle.
Aerial photography by an RTK compatible drone with a colour-infrared camera appears to be capable of delivering the imagery with the positional detail necessary to track changes in marsh meadow wetlands (with the proviso that submerged aquatic vegetation could not be mapped), as well as to provide sufficiently accurate elevation details. Thus, we recommend UAV imagery with RTK as a practical, affordable method to map and monitor five of the GLAM wetland classes.

3.3. Processing Approaches

Obtaining the imagery to be used for extracting the required information is the first, albeit important, step. As mentioned elsewhere, field data collection is virtually always required and must be coordinated with the remote sensing data acquisition. The level of detail and type of field data required vary with the processing approach and data types used. An interesting summary of the tools that have been used in remote sensing of wetlands was given in [70] (Figure 3, p. 6400), charting classification approaches by general method type and publication year. In recent years, the number of papers using maximum likelihood and supervised classification has diminished, whereas machine learning and object-based image analysis have increased.
The chosen processing approach dictates the preprocessing, the merging of datasets into a common spatial reference, and the supporting hardware, software, and skill sets required of those extracting the information. The remainder of this section briefly discusses the pros and cons of the various approaches in general terms. Much more detail than is given here would be needed to plan and apply any of them.

3.3.1. Visual Interpretation

The interpretation process depends on having appropriately trained people, in this case presumably wetland specialists. Given that the requirement is to monitor changes against an existing baseline, it may be possible to prepare a set of dichotomous keys or guides that a less experienced individual could apply. Visual interpretation by experts tends to identify subtle changes in more detail, and with less error, than machine-based methods. Simply stated, machine-based methods cannot yet take in all of the factors that the human brain does when interpreting an image [71]. In addition to colour, the human interpreter uses the shape of the feature, context (what is around or near the feature), pattern (man-made or natural), and texture (smooth or rough). However, visual interpretation is often criticized for taking more time over large areas and for being less rigorous and less repeatable. Given the relatively small number of sample plots assumed to be the targets of the GLAM assessment, and the past success of the method, visual interpretation of aerial photographs is recommended for mapping and monitoring the extent of the six wetland classes.

3.3.2. Image Processing Pixel Classifiers

Image processing pixel classifiers have been in routine use since the early to mid-1970s. Although many papers have been published using these tools for wetland mapping [12], the reported results have not typically been at the level required here. The two types of pixel classifiers usually referred to are supervised and unsupervised image analysis. Both are based on the assumption that different objects (in this case wetland vegetation) have unique spectral “signatures.” Using this approach, there are various methods to identify changes from one date to another in imagery of the same type. The assumption of uniqueness is especially problematic given the highly variable nature of wetlands as compared with, for example, agricultural crops. Success depends on several factors, including the representativeness of the training samples used for supervised classifiers and the analyst’s experience in assembling the results of unsupervised approaches. One factor to keep in mind is that errors are cumulative: if the first classification is 90% accurate and the second is also 90% accurate, the expected combined accuracy when the two are overlaid is not 90%, but rather closer to 81%.
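The multiplication behind that 81% figure can be made explicit. A small sketch, assuming the classification errors are independent so that per-map accuracies multiply:

```python
def combined_accuracy(*accuracies):
    """Expected accuracy of an overlay of independently classified maps.

    Assumes errors are independent, so the per-map accuracies multiply.
    """
    result = 1.0
    for a in accuracies:
        result *= a
    return result

print(combined_accuracy(0.90, 0.90))        # -> 0.81
print(combined_accuracy(0.90, 0.90, 0.90))  # -> 0.729, for a three-date overlay
```

This is why a change product built from two 90% maps cannot itself be trusted at 90%.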
With higher resolution data, the appearance of objects in the imagery becomes more complex and there is less homogeneity. With the increased resolution, more information is also available, such as patterns, shapes, and textures. Neither of these pixel-based approaches makes use of the important attributes that make visual interpretation so powerful. Furthermore, these methods cannot be used to bring together imagery of different types, such as LiDAR and hyperspectral or SAR and high-resolution satellite data. Therefore, it is doubtful whether these methods could play any role in monitoring the GLAM wetland classes, although they may play a future role in mapping large areas.

3.3.3. Object-Based Image Analysis (OBIA)

Object-based classification is a relatively new approach to image analysis which has been developed for high spatial resolution images. Applications to wetlands started appearing in the literature with some regularity in the early to mid-2000s [70]. OBIA can integrate multisource remote sensing data or remote sensing and GIS data. The principle of object-based classification is to group objects (groups of pixels or data points tied to a specific location) that have similar features, such as a similar pixel shape, colour, pattern, and texture, and classify them based on the object features [12]. In other words, it uses some of the same interpretation parameters that are so valuable in visual image interpretation.
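The group-then-classify principle can be sketched in a few lines. Here, connected-component labelling stands in for a real multiresolution segmentation, and the single-band image, threshold, and class names are all hypothetical, chosen only to show objects being classified by object-level features rather than pixel by pixel:

```python
import numpy as np
from scipy import ndimage

# Toy single-band image: two bright "objects" on a dark background
img = np.zeros((8, 8))
img[1:3, 1:4] = 0.9   # object 1
img[5:7, 5:8] = 0.4   # object 2

# Step 1: segment into objects (connected components of above-background
# pixels; operational OBIA uses multiresolution segmentation instead)
mask = img > 0.1
labels, n = ndimage.label(mask)

# Step 2: classify each object from an object-level feature (mean brightness);
# the class names are purely illustrative
for obj_id in range(1, n + 1):
    mean_val = img[labels == obj_id].mean()
    cls = "emergent" if mean_val > 0.6 else "meadow"
    print(f"object {obj_id}: mean={mean_val:.2f} -> {cls}")
```

Because decisions are made per object, attributes such as shape, texture, and neighbourhood can be added to the feature set, which is exactly what pixel classifiers cannot do.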
Some researchers have used OBIA to bring together data from different sensors, something that is not possible with pixel classifiers. Grenier et al. [37] merged Landsat ETM and Radarsat-1 data, while Tiner [11] used elevation (Canadian Digital Surface Model), SAR (Radarsat-2), and optical data (RapidEye and Landsat-8). Mahdavi et al. [72], citing other authors, contended that the results of object-based classification are more meaningful to ecologists than those of pixel-based classification. They also suggested that OBIA could help process large amounts of multisensor data and could be combined with supplementary datasets in a straightforward manner. Guo et al. [12] cited a number of uses of OBIA, as well as some modifications to the approach. Dronova [70] reviewed 73 papers on OBIA applied to wetlands published up to 31 December 2014. The process of arriving at wetland classes using OBIA has been detailed by several authors cited previously, including [20] and [43]. Thus, it is clear that OBIA has a role to play in wetland mapping.
Although OBIA results have often been found to be more accurate than those of pixel classifiers, by as much as 31% in one paper cited by Dronova [70], the author maintained that a number of concerns remained. Like many who apply remote sensing to wetlands, Dronova [70] noted the complexity of wetland surfaces: improvements in applications, delineation, and classification accuracy were hindered by wetland surface complexity and dynamics and by shortcomings in OBIA implementation. Dronova concluded that OBIA was useful for alleviating local spatial heterogeneity, i.e., as a smart filter that removes noise and works with fine scales of data, and that it could map isolated wetlands. Nevertheless, despite OBIA’s ability to reduce “salt-and-pepper” speckle, a number of wetland-specific challenges to remote sensing-based landscape inference remained important concerns. Spectral similarity among diverse classes, caused by the homogenizing effects of moisture or dead vegetation signals, can reduce classification accuracy and the effectiveness of class discrimination. Further details are not included here, because the process is not expected to be used in mapping meadow marsh change or the other GLAM wetland classes at which this assessment is directed. Nonetheless, OBIA has a role to play in wetland mapping of certain classes at a coarser MMU than that required by the GLAM.

3.3.4. Machine Learning Analysis Multisensor Systems

Most traditional image analysis approaches assume that the data follow a normal distribution. Learning systems such as random forest make no such assumption; they can work with non-parametric data and data of many types. The random forest algorithm builds on the decision tree, constructing an ensemble of trees and combining their outputs. Random forest and other learning systems have been used by a number of authors examining the complexity of wetlands with multiple datasets, not all of which are image data [9,13,14,15]. A variety of open-source random forest classifiers are available, including imageRF, which does not require a platform or licence and uses general file formats; it can be used as an add-on in the freely available EnMAP-Box or in the commercial software IDL/ENVI [73]. A key point when using such classifiers is to ensure that the proper approach has been followed so that the importance values are both stable and meaningful; otherwise classifications will be inconsistent and results unreliable. Specifically, it is necessary to run random forests more than once and to examine the variability of the importance values, as described in [74].
Mahdianpari et al. [19] summarized the benefits of the random forest algorithm as follows:
  • It is less affected by outliers and noisy datasets;
  • It can deal with high-dimensional, multisource datasets;
  • It achieves higher classification accuracy than other well-known classifiers, such as support vector machines (SVM) and maximum likelihood;
  • It assesses the variable importance of input features; and
  • It is an easy-to-handle classifier, since the user need determine only two input parameters: the number of trees and the number of split variables.
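The stability check described above, running the forest several times and examining the spread of the variable importance values, might look like the following sketch, using scikit-learn's RandomForestClassifier on synthetic, separable data (the two classes and their feature distributions are invented for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "multisource" features: two well-separated wetland-like classes
n = 200
X = np.vstack([rng.normal(0.2, 0.05, (n, 3)),    # class 0, e.g., meadow marsh
               rng.normal(0.6, 0.05, (n, 3))])   # class 1, e.g., cattail
y = np.repeat([0, 1], n)

# Run the forest several times and examine importance stability before
# trusting any variable ranking, as recommended in [74]
importances = []
for seed in range(5):
    rf = RandomForestClassifier(n_estimators=100, random_state=seed)
    rf.fit(X, y)
    importances.append(rf.feature_importances_)

spread = np.ptp(importances, axis=0)  # per-feature range across the runs
print("importance spread across runs:", np.round(spread, 3))
```

A small spread across runs suggests the importance ranking is stable enough to interpret; a large spread means more trees or more data are needed before drawing conclusions.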
It would certainly appear that for mapping large areas of complex wetland environments, the learning systems approach in general, and random forests in particular, offer some potential. Using both the multitemporal, multisensor approach with random forest could potentially map most of the classes of interest to the GLAM with high accuracy, but most likely not at the required MMU. This wetland mapping and monitoring approach would be highly recommended for stakeholders who are interested in more general classes with more flexibility in the accuracy and MMU.
One of the issues in this assessment is the potential future need to map subtle changes in meadow marsh across a larger area than the sample test sites. Although it is clear that the way to do this is with aerial photography, the cost would be enormous. The question posed elsewhere in this report was, “Is there a surrogate using, for example, high-resolution satellite data to track changes in meadow marshes over a wider area?” Learning systems can be supervised: a supervised learning algorithm analyzes the training data and produces an inferred function, which can then be used to classify new examples. Could a supervised learning system be developed to identify a surrogate for meadow marsh change using, for example, some combination of high-resolution satellite data and other available products or datasets? This question is certainly worth investigating in future research.

4. Key Findings: Systems and Sensors to Monitor the GLAM Wetland Classes

Anyone considering a wetland monitoring program using remote sensing must consider the following: (1) the classes that are required, (2) the accuracy that is required, (3) the MMU, (4) the practicality of conducting the program, (5) the ease with which it can be repeated, (6) the availability of the remote sensing data, (7) the availability of the skills and expertise to interpret and process the data, and (8) the costs. With these factors in mind, Table 3 summarizes how the reviewed technologies meet the requirements specified by the GLAM. The scan was organized based on the sensors, platforms, and processing approaches reviewed, which are listed in Column 1. The second column indicates the expected success with which the main classes could be mapped with the tool listed at the minimum mapping unit specified. The third column identifies the success with which meadow marsh changes could be identified.
Success expected for each minimum mapping unit (MMU) is estimated in the following three general ranges: high accuracy (estimated over 90%), limited (some useful information about the factor may be available), and nil (meaning that the tool is unlikely to deliver the required accuracy given the MMU). In some cases, two of the tools must be used together to achieve success, and where this is the case, it is indicated.
The “z value” column indicates the vertical accuracy that can be determined; if the achievable accuracy is worse than 10–15 cm, then nil is given as the response. In some cases, the vertical accuracy depends on the time of year, the sophistication of the system, the flying height, and other factors. The potential surrogate column indicates whether the sensor, platform, or processing approach would be able to deliver a surrogate for the specified GLAM classes or changes in them. In most cases, a “yes?” indicates that the authors believe such a surrogate could be determined.
Cost is a very general estimate. Commercial costs are difficult to obtain, although some have been determined for the technologies that look most promising. With respect to costs, it should also be noted that the Government of Canada has supply arrangements that can be used to calculate expected costs. The literature column indicates whether there is literature that supports the prognosis for success. The difficulties and comments columns offer additional relevant information.
Our literature review has shown that a multitemporal, multisensor machine learning approach could map more general wetland classes, and potentially the GLAM classes with the exception of submerged aquatic vegetation, just not at the accuracy and MMU required by the GLAM. However, a more general monitoring approach could be applied on a more frequent basis to map long term coarse trends, coupled with a less frequent, more detailed approach, for example, every five years. SAR data, either C-band or preferably L-band, coupled with medium to high-resolution optical data and classified using random forests, would be the recommended approach for the more general wetland monitoring. However, for the GLAM classes, high-resolution optical imagery (~0.5 m) would be required, which could make this option unaffordable. We would also recommend investigating InSAR and LiDAR to identify potential surrogates for meadow marsh and the other wetland classes. By measuring water level changes and having detailed ranges of elevation, the GLAM would be able to determine the suitable habitat for these classes and monitor change over time based on changes in water level and elevation.
To map the GLAM wetland classes at the desired accuracy and MMU, we recommend airborne hyperspectral imagery, preferably with LiDAR, CIR airborne imagery, or a drone with RTK. However, none of these approaches would be able to map submerged aquatic vegetation. To classify the data, visual interpretation or machine learning are the suggested techniques. Airborne hyperspectral and CIR airborne imagery are both expensive, and may therefore be eliminated as possibilities. When the areas of interest are small, the use of a drone could be practical, affordable, and repeatable. Advanced airborne “coastal” LiDAR with either a multispectral or hyperspectral sensor was the only approach able to map submerged aquatic vegetation as well as the other five GLAM wetland classes. Unfortunately, the high cost of acquiring and analyzing these data would likely make it unaffordable as a wetland monitoring tool.

5. Lessons Learned

A number of lessons learned came out of this study that could be useful for others conducting such an assessment:
  • It is important to focus attention on what truly matters in a review such as this. There is an amazing amount of research published on wetlands remote sensing, over 5500 papers by one reviewer’s account. Jumping into such a sea of information without a clear target would have been disastrous.
  • This review began with a series of discussions to define the data needs to be addressed. Such discussions are time consuming, but they lead to a better understanding of the problem and, consequently, a much clearer assessment.
  • The number of excellent and yet practical researchers associated with wetlands remote sensing research who are working together in the Great Lakes Basin is a valuable cross-border resource that could be better exploited.
  • Data sharing, enabled by recent technology developments, has become important in other areas and other applications.
  • Data repositories and collaboration can save money and broaden the use and usefulness of data.
  • The book edited by Tiner [11] is a valuable and accessible resource, although there is scant material on high-resolution satellite data and little mention of thermal data.
  • The use of higher resolution optical data to “sharpen” lower resolution data (even from other sensor types) can lead to deriving better information from remotely sensed data.
  • Multitemporal Landsat and SPOT data can be used to map land use in areas surrounding wetlands.
  • Major considerations in determining what information remote sensing can and cannot provide about wetlands are the chosen MMU and the classification system used.
  • Research suggests that better spectral resolution can lead to what is effectively better spatial resolution.
  • The user community should be aware that because of speckle, spatial resolution of SAR data does not equate to spatial resolution of optical data. Some suggest that the effective spatial resolution could be one-third of the stated spatial resolution.
  • Radar data can provide vertical measurements that may be very useful in wetland studies, and these are far more precise than those unfamiliar with the data might realize.
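The speckle point above can be illustrated numerically. The following is a simulation sketch (not project data), assuming fully developed speckle: single-look SAR intensity over a uniform target is exponentially distributed, and averaging N independent looks reduces the speckle coefficient of variation by roughly 1/sqrt(N), which is the radiometric reason the usable (effective) resolution of SAR imagery is coarser than its nominal pixel size.

```python
# Simulated illustration of SAR speckle: multilooking (here 3 x 3 = 9 looks)
# smooths radiometry at the cost of spatial resolution. The coefficient of
# variation (std/mean) drops by roughly 1/sqrt(N) with N looks.
import numpy as np

rng = np.random.default_rng(0)

# Single-look intensity over a uniform target: exponential, so std/mean ~ 1.
single_look = rng.exponential(scale=1.0, size=100_000)
cv_single = single_look.std() / single_look.mean()

# Nine-look average: same scene, smoother radiometry, coarser resolution.
nine_look = rng.exponential(scale=1.0, size=(100_000, 9)).mean(axis=1)
cv_nine = nine_look.std() / nine_look.mean()

print(f"single-look CV ~ {cv_single:.2f}, nine-look CV ~ {cv_nine:.2f}")
```

The simulation yields a coefficient of variation near 1.0 for single-look data and near one-third for nine-look data, consistent with the rule-of-thumb reduction of 1/sqrt(9).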

6. Conclusions

Our results rest on a number of suppositions and assumptions: that the research results could be translated into operational applications, that the remote sensing data were available, that high-quality ground data were collected at the same time as the remote sensing imagery was acquired, and that the analysis and interpretation would be performed by wetland experts. We made these assumptions and suppositions only for research in which the results and explanations (and the experience of the authors) provided strong support; our goal was to err on the side of caution. We further assumed that the results must lead to classification of the six GLAM wetland classes, an accuracy of 90%, a minimum mapping unit of 2 × 2 m or 4 × 4 m, a horizontal accuracy of 2 m, and a vertical accuracy or “z” value of 1–5 cm, in test sites in the Great Lakes Basin.
Most of the remote sensing sensors and platforms evaluated could not meet the 90% accuracy or MMU required by the GLAM. Nevertheless, many can monitor changes in wetlands at a broader scale (more general wetland classes, lower mapping accuracy, and a larger MMU). We recommend a multitemporal, multisensor machine learning approach to monitor changes in wetland classes on a frequent basis and to capture more general wetland trends over a longer time period. Additionally, we suggest investigating the use of InSAR to monitor water level changes, and LiDAR or a drone with RTK to measure elevation with high vertical accuracy; both could serve as surrogates to identify areas where the GLAM vegetation classes exist and to monitor for potential changes.
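The 90% accuracy target discussed above is conventionally verified with a confusion matrix built from independent reference data. The following is a minimal sketch; the confusion matrix for the six GLAM classes is an entirely hypothetical example, not a result from this study.

```python
# Minimal sketch of testing a wetland map against the GLAM 90% overall
# accuracy target. The confusion matrix below is a hypothetical example
# (rows = reference labels, columns = mapped class), not project data.
import numpy as np

classes = ["floating/submerged aquatic", "mixed emergent",
           "miscellaneous mixed emergent", "typha",
           "meadow marsh", "transition to uplands"]

cm = np.array([
    [50,  2,  1,  0,  0,  0],
    [ 3, 45,  4,  1,  0,  0],
    [ 1,  5, 44,  2,  1,  0],
    [ 0,  1,  2, 49,  1,  0],
    [ 0,  0,  1,  2, 48,  2],
    [ 0,  0,  0,  0,  3, 50],
])

overall_accuracy = np.trace(cm) / cm.sum()    # correctly mapped / total
producers = np.diag(cm) / cm.sum(axis=1)      # per-class (omission) accuracy
meets_glam_target = overall_accuracy >= 0.90

print(f"overall accuracy = {overall_accuracy:.1%}")
for name, pa in zip(classes, producers):
    print(f"  {name}: producer's accuracy {pa:.1%}")
```

Note that in this synthetic example the overall accuracy falls just short of 90% even though most classes look strong individually, which illustrates why the GLAM target is demanding.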
The imagery types and resolutions that appear to at least partially meet the GLAM’s stated mapping accuracy and MMU, with a focus on monitoring changes in floating or submerged aquatic vegetation, mixed emergent, miscellaneous mixed emergent, typha, meadow marsh, and transition to uplands, are the following, with the caveats noted:
  • Advanced airborne “coastal” LiDAR with either a multispectral or hyperspectral sensor which would provide seamless data from uplands into the water, including submerged aquatic vegetation.
  • Colour-infrared aerial photography (airplane) with (optimum) 8 cm resolution. The “z” or vertical accuracy obtainable was not determined and CIR cannot be used to map submerged aquatic vegetation.
  • Colour-infrared UAV photography with vertical accuracy determination rated at 10 cm at a cost of $35,000 to $40,000 for 16 test sites. CIR cannot be used to map submerged aquatic vegetation.
  • Colour-infrared UAV photography with high vertical accuracy determination rated at 3–5 cm but at a considerably higher cost than item 3 in this list. CIR cannot be used to map submerged aquatic vegetation.
  • Airborne hyperspectral imagery which provides limited to no vertical accuracy.
  • Very high-resolution optical satellite data (better than 1 m resolution) could provide information about meadow marsh but does not provide vertical accuracy.
Of the six approaches that could potentially meet the GLAM requirements, five are expected to have a high cost, potentially rendering them unaffordable for a wetland monitoring program. The use of colour-infrared UAV imagery with RTK is the most cost-effective solution; however, it cannot map submerged aquatic vegetation. Advanced airborne “coastal” LiDAR is the only remote sensing platform able to map all six GLAM vegetation classes. It is possible that, by combining frequent wetland monitoring at a coarser scale, using changes in water level and elevation as surrogates, with more detailed mapping and monitoring using one of the six approaches above, the GLAM wetland classes could be adequately monitored.

Author Contributions

Funding for this project was acquired by J.D. and J.P.; L.W. and R.A.R. conducted the personal communications with wetland experts and the literature review, and wrote the first draft of the paper. All authors contributed to the design of this research and the final version of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the International Joint Commission.

Acknowledgments

The authors would like to thank Janette Anderson, Frank Ahern, Laura Bourgeau-Chavez, Brian Brisco, Colin Brooks, David Fay, Mike Hall, Adam Hogg, Tim Howard, Brian Huberty, Edric Keighan, Brandon Krumwiede, David Laflamme, Brigitte Leblon, Johanna Linnartz, Koreen Millard, Molly Reif, Bahram Salehi, Mike Shantz, Ridha Touzi, and Doug Wilcox for their time and the valuable information they provided on the historic and current status of wetlands from a biological and remote sensing mapping perspective.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wetland Meadow Marsh Community—Surface Area, Supply-Based (Lake Ontario & Thousand Islands). Available online: http://www.losl.org/twg/pi/pi_meadowmarsh-e.html (accessed on 24 July 2020).
  2. Coastal Wetlands. Available online: http://www.bpba.ca/bpcsp/uploads/CH3Coast140518.pdf (accessed on 24 July 2020).
  3. Chadde, S. A Great Lakes Wetland Flora: A Complete, Illustrated Guide to the Aquatic and Wetland Plants of the Upper Midwest, 4th ed.; Pocket Flora Press: Calumet, MI, USA, 2012; pp. 1–631. [Google Scholar]
  4. Great Lakes Marsh. Available online: https://mnfi.anr.msu.edu/communities/description/10671/Great-Lakes-Marsh (accessed on 24 July 2020).
  5. Grabas, G.P.; Rokitnicki-Wojcik, D. Characterizing daily water-level fluctuation intensity and water quality relationships with plant communities in Lake Ontario coastal wetlands. J. Great Lakes Res. 2015, 41, 136–144. [Google Scholar] [CrossRef]
  6. Frieswyk, C.B.; Zedler, J.B. Vegetation change in great lakes coastal wetlands: Deviation from the historical cycle. J. Great Lakes Res. 2007, 33, 366–380. [Google Scholar] [CrossRef]
  7. Wilcox, D.A.; Kowalski, K.P.; Hoare, H.L.; Carlson, M.L.; Morgan, H.N. Cattail Invasion of Sedge/Grass Meadows in Lake Ontario: Photointerpretation Analysis of Sixteen Wetlands over Five Decades. J. Great Lakes Res. 2008, 34, 301–323. [Google Scholar] [CrossRef]
  8. Lishawa, S.L.; Albert, D.A.; Tuchman, N.C. Water level decline promotes Typha × glauca establishment and vegetation change in Great Lakes coastal wetlands. Wetlands 2010, 30, 1085–1096. [Google Scholar] [CrossRef]
  9. Mixed Emergent Marsh. Available online: https://www.dnr.state.mn.us/rys/pg/mixedmarsh.html (accessed on 24 July 2020).
  10. Shantz, M.; (Environment and Climate Change Canada, Burlington, ON, Canada). Personal communication, 2018.
  11. Tiner, R.W. Remote Sensing of Wetlands: Applications and Advances; CRC Press: Boca Raton, FL, USA, 2015; pp. 19–42. [Google Scholar]
  12. Guo, M.; Li, J.; Sheng, C.; Xu, J.; Wu, L. A Review of Wetland Remote Sensing. Sensors 2017, 17, 777. [Google Scholar] [CrossRef] [Green Version]
  13. Dubeau, P.; King, D.J.; Unbushe, D.G.; Rebelo, L.-M. Mapping the Dabus Wetlands, Ethiopia, Using Random Forest Classification of Landsat, PALSAR and Topographic Data. Remote Sens. 2017, 9, 1056. [Google Scholar] [CrossRef] [Green Version]
  14. Hogg, A.; Beckerson, P.; Strobl, S. Developing Mapping and Evaluation Methods for Wetland Conservation in Central Ontario; Ontario Ministry of Natural Resources: Peterborough, ON, Canada, 2002. [Google Scholar]
  15. LaRoque, A.; Leblon, B.; Woodward, R.; Mordini, M.; Bourgeau-Chavez, L.; Landon, A.; French, N.; McCarty, J.; Huntington, T.; Camill, P. Use of RADARSAT-2 and ALOS-PALSAR SAR Images for Wetland Mapping in New Brunswick. In Proceedings of the 2014 IEEE International Symposium on Geoscience and Remote Sensing, Quebec City, QC, Canada, 13–18 July 2014. [Google Scholar]
  16. White, L.; Brisco, B.; Dabboor, M.; Schmitt, A.; Pratt, A. A Collection of SAR Methodologies for Monitoring Wetlands. Remote Sens. 2015, 7, 7615–7645. [Google Scholar] [CrossRef] [Green Version]
  17. Campbell, F.H.A.; Ryerson, R.A.; Brown, R.J. GlobeSAR: A Canadian Radar Remote Sensing Program. Geocarto Int. 1995, 10, 3–11. [Google Scholar] [CrossRef]
  18. Henderson, F.M.; Lewis, A.J. Principles and Applications of Imaging Radar—Manual of Remote Sensing; Wiley: Hoboken, NJ, USA, 1998. [Google Scholar]
  19. Mahdianpari, M.; Salehi, B.; Mohammadimanesh, F.; Motagh, M. Random forest wetland classification using ALOS-2 L-band, RADARSAT-2 C-band, and TerraSAR-X imagery. ISPRS J. Photogramm. Remote Sens. 2017, 130. [Google Scholar] [CrossRef]
  20. Amani, M.; Salehi, B.; Mahdavi, S.; Granger, J.; Brisco, B. Wetland classification in Newfoundland and Labrador using multi-source SAR and optical data integration. GISci. Remote Sens. 2017, 54, 779–796. [Google Scholar] [CrossRef]
  21. White, L.; Millard, K.; Banks, S.; Richardson, M.; Pasher, J.; Duffe, J. Moving to the RADARSAT Constellation Mission: Comparing Synthesized Compact Polarimetry and Dual Polarimetry Data with Fully Polarimetric RADARSAT-2 Data for Image Classification of Peatlands. Remote Sens. 2017, 9, 573. [Google Scholar] [CrossRef] [Green Version]
  22. Brisco, B.; Kapfer, M.; Hirose, T.; Tedford, B.; Liu, J. Evaluation of C-band polarization diversity and polarimetry for wetland mapping. Can. J. Remote Sens. 2011, 37, 82–92. [Google Scholar] [CrossRef]
  23. White, L.; Brisco, B.; Pregitzer, M.; Tedford, B.; Boychuk, L. RADARSAT-2 beam mode selection for surface water and flooded vegetation mapping. Can. J. Remote Sens. 2014, 40, 135–151. [Google Scholar]
  24. Behnamian, A.; Banks, S.; White, L.; Brisco, B.; Millard, K.; Pasher, J.; Chen, Z.; Duffe, J.; Bourgeau-Chavez, L.; Battaglia, M. Semi-Automated Surface Water Detection with Synthetic Aperture Radar Data: A Wetland Case Study. Remote Sens. 2017, 9, 1209. [Google Scholar] [CrossRef] [Green Version]
  25. Gosselin, G.; Touzi, R.; Cavayas, F. Polarimetric Radarsat-2 wetland classification using the Touzi decomposition: Case of the Lac Saint-Pierre Ramsar wetland. Can. J. Remote Sens. 2014, 39, 491–506. [Google Scholar] [CrossRef]
  26. About ALOS/PALSAR. Available online: http://www.eorc.jaxa.jp/ALOS/en/about/palsar.htm (accessed on 26 February 2018).
  27. Bourgeau-Chavez, L.; Endres, S.; Battaglia, M.; Miller, M.E.; Banda, E.; Laubach, Z.; Higman, P.; Chow-Fraser, P.; Marcaccio, J. Development of a Bi-National Great Lakes Coastal Wetland and Land Use Map Using Three-Season PALSAR and Landsat Imagery. Remote Sens. 2015, 7, 8655–8682. [Google Scholar] [CrossRef] [Green Version]
  28. Bourgeau-Chavez, L.L.; Riordan, K.; Powell, R.B.; Miller, N.; Nowels, M. Improving Wetland Characterization with Multi-Sensor, Multi-Temporal SAR and Optical/Infrared Data Fusion. In Advances in Geoscience and Remote Sensing; Jedlovec, G., Ed.; InTech: Vukovar, Croatia, 2009; pp. 679–708. [Google Scholar]
  29. Motohka, T.; Kankaku, Y.; Suzuki, S. Advanced Land Observing Satellite-2 (ALOS-2) and its follow-on L-band SAR mission. In Proceedings of the 2017 IEEE Radar Conference (RadarConf), Seattle, WA, USA, 8–12 May 2017; pp. 953–956. [Google Scholar]
  30. Ferretti, A.; Monti Guarnieri, A.; Prati, C.; Rocca, F.; Massonnet, D. InSAR Principles: Guidelines for SAR Interferometry Processing and Interpretation; TM-19; ESA Publications: Noordwijk, The Netherlands, 2007. [Google Scholar]
  31. Brisco, B.; Murnaghan, K.; Wdowinski, S.; Hong, S.H. Evaluation of RADARSAT-2 acquisition modes for wetland monitoring applications. Can. J. Remote Sens. 2015, 41, 431–439. [Google Scholar] [CrossRef]
  32. Chen, Z.; White, L.; Banks, S.; Behnamian, A.; Montpetit, B.; Pasher, J.; Duffe, J.; Bernard, D. Characterizing marsh wetlands in the Great Lakes Basin with C-band InSAR observations. Remote Sens. Environ. 2020, 242, 111750. [Google Scholar] [CrossRef]
  33. Banks, S.; Millard, K.; Behnamian, A.; White, L.; Ullmann, T.; Charbonneau, F.; Chen, Z.; Wang, H.; Pasher, J.; Duffe, J. Contributions of Actual and Simulated Satellite SAR Data for Substrate Type Differentiation and Shoreline Mapping in the Canadian Arctic. Remote Sens. 2017, 9, 1206. [Google Scholar] [CrossRef] [Green Version]
  34. Landsat Science Web Page. Available online: https://landsat.gsfc.nasa.gov/landsat-8/landsat-8-bands/ (accessed on 26 February 2018).
  35. Franklin, S.E.; Skeries, E.M.; Stefanuk, M.A.; Ahmed, O.S. Wetland classification using Radarsat-2 SAR quad-polarization and Landsat-8 OLI spectral response data: A case study in the Hudson Bay Lowlands Ecoregion. Int. J. Remote Sens. 2018, 39, 1615–1627. [Google Scholar] [CrossRef]
  36. Gallant, A.L.; Kaya, S.G.; White, L.; Brisco, B.; Roth, M.F.; Sadinski, W.; Rover, J. Detecting Emergence, Growth, and Senescence of Wetland Vegetation with Polarimetric Synthetic Aperture Radar (SAR) Data. Water 2014, 6, 694–722. [Google Scholar] [CrossRef] [Green Version]
  37. Grenier, M.; Demers, A.-M.; Labrecque, S.; Benoit, M.; Fournier, R.A.; Drolet, B. An object-based method to map wetland using RADARSAT-1 and Landsat ETM images: Test case on two sites in Quebec, Canada. Can. J. Remote Sens. 2007, 33, S28–S45. [Google Scholar] [CrossRef]
  38. Klemas, V. Remote sensing techniques for studying coastal ecosystems: An overview. J. Coast. Res. 2011, 27, 2–17. [Google Scholar]
  39. Spot-6 Satellite Sensor. Available online: https://www.satimagingcorp.com/satellite-sensors/spot-6/ (accessed on 26 February 2018).
  40. Ozesmi, S.; Bauer, M. Satellite remote sensing of wetlands. Wetl. Ecol. Manag. 2002, 10, 381–402. [Google Scholar] [CrossRef]
  41. Alberta Merged Wetland Inventory. Available online: https://maps.alberta.ca/genesis/rest/services/Alberta_Merged_Wetland_Inventory/Latest/MapServer/ (accessed on 26 February 2018).
  42. Dove Constellation Sensor. Available online: https://www.satimagingcorp.com/satellite-sensors/other-satellite-sensors/dove-3m/ (accessed on 26 February 2018).
  43. Knight, J.F.; Corcoran, J.M.; Rampi, L.P.; Pelletier, K.C. Theory and applications of object-based image analysis and emerging methods in wetland mapping. In Remote Sensing of Wetlands: Applications and Advances; CRC Press: Boca Raton, FL, USA, 2015; pp. 175–194. [Google Scholar]
  44. Difebo, A.; Richardson, M.; Price, J. Fusion of Mulispectral Imagery and LiDAR Digital Terrain Derivatives for Ecosystem Mapping and Morphological Characterization of a Northern Peatland Complex. In Remote Sensing of Wetlands: Applications and Advances; CRC Press: Boca Raton, FL, USA, 2015; pp. 399–412. [Google Scholar]
  45. Rasel, S.M.; Chang, H.C.; Diti, I.J.; Ralph, T.; Saintilan, N. Comparative Analysis of Worldview-2 and Landsat 8 for Coastal Saltmarsh Mapping Accuracy Assessment, Proceedings of SPIE 9864 Sensing for Agriculture and Food Quality Safety VIII, Baltimore, MD, USA, 26 May 2016; Kim, M.S., Chao, K., Chin, B.A., Eds.; SPIE: Bellingham, WA, USA, 2016. [Google Scholar]
  46. Hopkinson, C.; Chasmer, L.; Sass, G.; Creed, I.; Sitar, M.; Kalbfleisch, W.; Treitz, P. Vegetation class dependent errors in lidar ground elevation and canopy height estimates in a boreal wetland environment. Can. J. Remote Sens. 2005, 31, 191–206. [Google Scholar] [CrossRef]
  47. Hopkinson, C. The influence of lidar acquisition settings on canopy penetration and laser pulse return characteristics. In Proceedings of the 2006 IEEE International Symposium on Geoscience and Remote Sensing, Denver, CO, USA, 31 July–4 August 2006; pp. 2420–2423. [Google Scholar]
  48. Hogg, A.R.; Holland, J. An evaluation of DEMs derived from LiDAR and photogrammetry for wetland mapping. For. Chron. 2008, 84, 840–849. [Google Scholar] [CrossRef] [Green Version]
  49. Lang, M.W.; Bourgeau-Chavez, L.L.; Tiner, R.W.; Klemas, V.V. Advances in remotely sensed data and techniques for wetland mapping and monitoring. In Remote Sensing of Wetlands: Applications and Advances; Tiner, R.W., Lang, M.W., Klemas, V.V., Eds.; CRC Press: Boca Raton, FL, USA, 2015; pp. 79–118. [Google Scholar]
  50. Madden, M.; Jordan, T.; Bernardes, S.; Cotten, D.L.; O’hare, N.K.; Pasqua, A. Unmanned Aerial Systems and Structure from Motion Revolutionize Wetlands Mapping. In Remote Sensing of Wetlands: Applications and Advances; CRC Press: Boca Raton, FL, USA, 2015; pp. 195–222. [Google Scholar]
  51. Natural Resources Canada. Public Safety Canada. Federal Airborne LiDAR Data Acquisition Guideline Version 1.0; Natural Resources Canada: Ottawa, ON, Canada, 2017. [Google Scholar] [CrossRef]
  52. Optech Titan Brochure. Available online: http://www.teledyneoptech.com/wp-content/uploads/Titan-Specsheet-150515-WEB.pdf (accessed on 28 February 2018).
  53. Optech Eclipse Brochure. Available online: http://www.teledyneoptech.com/wp-content/uploads/2017-06-28_Optech_Eclipse-Brochure_web.pdf (accessed on 28 February 2018).
  54. Klemas, V. Remote sensing of wetlands: Case studies comparing practical techniques. J. Coast. Res. 2011, 27, 418–427. [Google Scholar]
  55. Kempeneers, P.; Deronde, B.; Provoost, S.; Houthuys, R. Synergy of airborne digital camera and lidar data to map coastal dune vegetation. J. Coast. Res 2009, 53, 73–82. [Google Scholar] [CrossRef]
  56. Morsy, S. Land/Water Discrimination and Land Cover Classification Using Multispectral Airborne LiDAR Data. Ph.D. Thesis, Ryerson University, Toronto, ON, Canada, January 2017. [Google Scholar]
  57. Richardson, M.; Millard, K. Geomorphic and biophysical characterization of wetland ecosystems with airborne LiDAR: Concepts, methods and a case-study. In High Spatial Resolution Remote Sensing: Data, Techniques, and Applications; Yuhong, H., Ed.; CRC Press, Taylor & Francis Group: Boca Raton, FL, USA, 2018; pp. 1–39. [Google Scholar]
  58. Rapinel, S.; Hubert-Moy, L.; Clément, B. Combined use of LiDAR data and multispectral earth observation imagery for wetland habitat mapping. Int. J. Appl. Earth Obs. Geoinf. 2015, 37, 56–64. [Google Scholar] [CrossRef]
  59. Corrigan, F. 12 Top Lidar Sensors for UAVs and So Many Great Uses. Available online: https://leddartech.com/10-top-lidar-sensors-uavs-many-great-uses/ (accessed on 28 February 2018).
  60. Reif, M.; (Joint Airborne Lidar Bathymetry Technical Center of Expertise, Kiln, MS, USA). Personal communication, 2018.
  61. Klemas, V. Airborne remote sensing of coastal features and processes: An overview. J. Coast. Res. 2013, 29, 239–255. [Google Scholar] [CrossRef]
  62. Kalacska, M.; Lalonde, M.; Moore, T.R. Estimation of foliar chlorophyll and nitrogen content in an ombrotrophic bog from hyperspectral data: Scaling from leaf to image. Remote Sens. Environ. 2015, 169, 270–279. [Google Scholar] [CrossRef]
  63. Wilcox, D.A.; Bateman, J. Updated photointerpretation evaluation of water-level-regulation influences on Lake Ontario and Upper St. Lawrence River wetland plant communities. (Unpublished).
  64. Howard, T.G.; Feldmann, A.L.; Spencer, E.; Ring, R.R.; Perkins, K.A.; Corser, J. Wetland Monitoring for Lake Ontario Adaptive Management; Prepared for United States Environmental Protection Agency Assistance ID No. GL-00E00842-0. Project was funded by the Great Lakes Restoration Initiative; New York Natural Heritage Program: Albany, NY, USA, 2016. [Google Scholar]
  65. Hudon, C.; Wilcox, D.; Ingram, J. Modeling Wetland Plant Community Response to Assess Water-Level Regulation Scenarios in the Lake Ontario—St. Lawrence River Basin. Environ. Monit. Assess. 2006, 113, 303–328. [Google Scholar] [CrossRef] [PubMed]
  66. Tiner, R.W. Wetlands. Chapter 13. In Manual of Photographic Interpretation, 2nd ed.; Philipson, W., Ed.; American Society of Photogrammetry and Remote Sensing: Bethesda, MD, USA, 2007; pp. 475–494. [Google Scholar]
  67. Product Specification UX5. Available online: http://trl.trimble.com/docushare/dsweb/Get/Document-700668/022503-1205D_Trimble_UX5_DS_MarketSmart_0515_LR.pdf (accessed on 1 March 2018).
  68. Product Comparison UX5 and UX5 HP. Available online: http://trl.trimble.com/docushare/dsweb/Get/Document-778204/022503-1351A_UX5_&_UX5_HP_Product_Comparison_0516_LR_web.pdf (accessed on 1 March 2018).
  69. eBee Plus Product Web Site. Available online: https://www.sensefly.com/drone/ebee-plus-survey-drone/ (accessed on 1 March 2018).
  70. Dronova, I. Object-Based Image Analysis in Wetland Research: A Review. Remote Sens. 2015, 7, 6380–6413. [Google Scholar] [CrossRef] [Green Version]
  71. Haack, B.; Ryerson, R.A. Training for Remote Sensing Image Interpretation. Photogramm. Eng. Remote Sens. 2017, 83, 795–806. [Google Scholar]
  72. Mahdavi, S.; Salehi, B.; Granger, J.; Amani, M.; Brisco, B.; Huang, W. Remote sensing for wetland classification: A comprehensive review. GISci. Remote Sens. 2018, 55, 623–658. [Google Scholar] [CrossRef]
  73. Waske, B.; van der Linden, S.; Oldenburg, C.; Jakimow, B.; Rabe, A.; Hostert, P. ImageRF—A user-oriented implementation for remote sensing image analysis with random forests. Environ. Model. Softw. 2012, 35, 192–193. [Google Scholar] [CrossRef]
  74. Behnamian, A.; Millard, K.; Banks, S.N.; White, L.; Richardson, M.; Pasher, J. A Systematic Approach for Variable Selection with Random Forests: Achieving Stable Variable Importance Values. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1988–1992. [Google Scholar] [CrossRef] [Green Version]
Table 1. A description of the wetland vegetation classes included in this study.
| Wetland Vegetation Class | Description |
| --- | --- |
| Transition to uplands | Consists of shrub swamp and/or swamp forest areas with periodic standing water; woody species that can withstand a range of flooding regimes are most common [2]. |
| Meadow marsh | Consists of sedges, grasses, and forbs that are inundated with water for more than a few years and withstand some flooding [3]. Generally has shallow, organic soils, but in some years is flooded for the entire growing season. During dry seasons, seedlings and shrubs can start to grow [4]. |
| Typha (cattail) | A common species found in marshes. Dead stems from a previous growing season can be observed [5]. Typha latifolia, also referred to as common or broadleaf cattail, is native to North America. T. angustifolia, or narrow leaf cattail, is an invasive species commonly found in the Great Lakes. It is of concern because it reduces plant diversity [6,7] and changes the community structure [8]. |
| Miscellaneous mixed emergent | A large variety of different marsh species not dominated by one species [9]. Usually permanently flooded with shallow water for the entire growing season, but can be dry during years when lake levels are low [10]. |
| Mixed emergent | Defined as cattail-invaded sedge-grass meadow marsh [11]. Usually permanently flooded with shallow water for the entire growing season, but can be dry during years when lake levels are low [10]. |
| Floating or submerged aquatic | Rooted vascular vegetation that is either floating or submerged [5]. |
Table 2. A list of the remote sensing sensors and processing approaches assessed in this review.
| Sensors | Processing Approaches |
| --- | --- |
| Synthetic aperture radar (SAR) | Visual interpretation |
| Optical satellite data of three types (low-medium resolution imagery at 10–30 m, medium to high-resolution imagery at 3–5 m, and high-resolution imagery at better than 3 m) | Supervised and unsupervised image analysis |
| Airborne Light Detection and Ranging (LiDAR) | Object-based analysis |
| Airborne hyperspectral (space borne hyperspectral data are not routinely available) | Machine learning analysis multisensor systems |
| Aerial photography (by airplane) | |
| Aerial photography (by unmanned aerial vehicle (UAV) or drone) | |
Table 3. Summary of the systems and sensors to meet the requirements of the Great Lakes-St. Lawrence River Adaptive Management (GLAM) Committee. Nil indicates that the system or sensor would be unlikely to deliver the required accuracy given the minimum mapping unit (MMU). The Z value refers to vertical accuracy.
| Sensors, Platforms, and Processing Approaches | GLAM Wetland Classes, 2 × 2 m MMU | GLAM Wetland Classes, 4 × 4 m MMU | Meadow Marsh Changes, 2 × 2 m MMU | Meadow Marsh Changes, 4 × 4 m MMU | Z Value in cm | Potential Surrogates | Cost of Data and Analysis | Literature Available to Support Claim | Difficulties and Comments |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sensors and Platforms | | | | | | | | | |
| Synthetic aperture radar | Nil | Nil | Nil | Nil | 3–10 | Yes?, L-Band | High/High | Yes | Resolution insufficient for MMU |
| Optical satellite 10–30 m resolution | Nil | Nil | Nil | Nil | Nil | No | Free/Low | Yes | Resolution insufficient for MMU |
| Optical satellite 3–9 m resolution | Nil | Nil | Nil | Nil | Nil | Yes? | Free/Low | Yes | Indices and special processing may lead to a surrogate |
| Optical satellite resolution under 1 m, multispectral | Limited | Yes? | Limited | Yes? | Nil | Yes? | Free to high | Limited | |
| Airborne LiDAR + multispectral or hyperspectral | Yes | Yes | Yes | Yes | 5–10 | NA | High/High | Yes | Flying height and time of year will determine success in “z” |
| Airborne hyperspectral | Yes | Yes | Yes | Yes | Nil | NA | High/High | Limited | Data is not easy to process |
| Aerial photography (by airplane) 8–12 cm resolution, colour IR | Yes | Yes | Yes | Yes | ? | NA | High/Medium | Yes | Well-tested approach |
| Aerial photography (by UAV) 3–10 cm | Yes | Yes | Yes | Yes | 5–10 | NA | Medium? | Limited | z < 10 cm requires special drone |
| Processing Approaches | | | | | | | | | |
| Visual interpretation (airborne images) | Yes | Yes | Yes | Yes | Nil | NA | Medium/Low | Yes | Well-tested approach |
| Supervised and unsupervised image analysis with medium to high-resolution data | Limited | Limited | Limited | Limited | Nil | No | Medium/Low | Yes | Not recommended for fine detail |
| Object-based image analysis (OBIA) for use with high-resolution satellite data | Limited | Yes? | Limited | Yes? | Nil | Yes | Medium/Low | Limited | This tool could lead to a surrogate for wider application |
| Machine learning analysis multisensor systems | Limited | Limited | Limited | Limited | Nil | ? | ? | Limited | |

Share and Cite

MDPI and ACS Style

White, L.; Ryerson, R.A.; Pasher, J.; Duffe, J. State of Science Assessment of Remote Sensing of Great Lakes Coastal Wetlands: Responding to an Operational Requirement. Remote Sens. 2020, 12, 3024. https://doi.org/10.3390/rs12183024
