Deep Lake Explorer: A web application for crowdsourcing the classification of benthic underwater video from the Laurentian Great Lakes

https://doi.org/10.1016/j.jglr.2020.07.009

Abstract

Underwater video is increasingly used to study aspects of the Great Lakes benthos including the abundance of round goby and dreissenid mussels. The introduction of these species has resulted in major ecological shifts in the Great Lakes, but the abundance and impacts of these species have heretofore been underassessed due to limitations of monitoring methods. Underwater video (UVID) can “sample” hard bottom sites where grab samplers cannot. Efficient use of UVID data requires affordable and accurate classification and analysis tools. Deep Lake Explorer (DLE) is a web application developed to support crowdsourced classification of UVID collected in the Great Lakes. Volunteers (i.e., the crowd) used DLE to classify 199 videos collected in the Niagara River, Lake Huron, and Lake Ontario for the presence of round gobies, dreissenid mussels, or aquatic vegetation, and for dominant substrate type. We compared DLE classification results to expert classification of the same videos to evaluate accuracy. DLE had the lowest agreement with expert classification for hard substrate (77%), and highest agreement for vegetation presence (90%), with intermediate agreement for round goby and mussel presence (89% and 79%, respectively). Video quality in the application, video processing, abundance of species of interest, volunteer experience, and task complexity may have affected accuracy. We provide recommendations for future crowdsourcing projects like DLE, which can increase timeliness and decrease costs for classification but may come with tradeoffs in accuracy and completeness.

Introduction

The application of underwater videography (UVID) to complement traditional methods for assessing the benthic conditions in the Great Lakes ecosystem has increased in the past decade (Karatayev et al., 2018). UVID can provide data in areas with rocky bottom where grab sampling is difficult or impossible (Lietz et al., 2015, Karatayev et al., 2018). Video is especially useful in Great Lakes connecting channels like the Niagara River, which can have swift currents and extensive areas of rocky substrate (Mehler et al., 2018). Benthic UVID can supply information about native and invasive demersal fishes and invertebrates, substrate, habitat patchiness, plant communities, anthropogenic features, and species interactions and behavior that is generally not obtainable from grab samples.

In the Great Lakes, UVID has been primarily used to assess the status of the invasive round goby (Neogobius melanostomus), and zebra (Dreissena polymorpha) and quagga mussels (Dreissena rostriformis bugensis). Invasions by Dreissena and round gobies have profoundly affected Great Lakes benthic ecosystems. Dreissena alter physical habitat for other benthic invertebrates and increase sedimentation and mineralization. Their filter feeding affects nutrient cycling, increases water clarity, and provides a trophic link between the pelagic and benthic components of Great Lakes ecosystems (e.g., Mayer et al., 2014). The Dreissena invasion has been associated with shifts in phytoplankton density and community composition, declines in the native amphipod Diporeia spp., and declines in native unionid and sphaeriid bivalves, oligochaetes, and chironomids (Burlakova et al., 2014, Karatayev et al., 2015).

Round gobies can outcompete native demersal fish species for habitat and prey and may consume native species’ eggs and fry (Kornis et al., 2012). Gobies consume Dreissena (Lederer et al., 2008) and are themselves consumed by piscivorous fish (Madenjian et al., 2011). There is evidence of the transfer of contaminants filtered from water by Dreissena from gobies to higher trophic levels (Johnson et al., 2005). Unfortunately, both round goby and Dreissena are often not reliably sampled using conventional methods (e.g., trawling and PONAR grab sampler, respectively). Researchers have used UVID to quantify Dreissena coverage, density, and biomass (Ozersky et al., 2011, Karatayev et al., 2018), and as an early detection monitoring tool for invasive species (Trebitz et al., 2019).

Underwater video is an affordable and efficient means of sampling and assessing benthic species and habitats across the Great Lakes. Cameras and associated gear are relatively inexpensive (e.g., compared to divers) and require little maintenance, and video can be collected in under five minutes by lowering a camera and lights mounted on a carriage to the bottom. UVID was tested as an assessment tool in the Great Lakes nearshore as part of the EPA’s National Coastal Condition Assessment in 2010 (Lietz et al., 2015). Video paired with grab sampling improved detection of Dreissena over grab sampling alone. Even so, as many as 45% of the videos collected had poor or marginal image quality. Fortunately, improvements in cameras, gear, and sampling methods have increased the quality of UVID collected in lake-wide benthic assessments since 2010.

Although video quality, defined broadly as features of video footage that affect interpretation (e.g., resolution, brightness/darkness, contrast, color accuracy, focus, depth of field, content), has improved over early efforts, video classification remains challenging (see Videos 1, 2, 3, 4 and 5 in Electronic Supplementary Material (ESM) Appendix S1 for examples). Consistent and accurate video interpretation is difficult because of variation in water clarity, ambient lighting, bottom relief, amount of current, and movement of organisms (Salman et al., 2016). As with most sampling methods, detecting organisms or objects at low abundance is difficult (e.g., recent Dreissena invasions, Trebitz et al., 2019). Where invaders are still rare, there is a low probability of a video being collected at occupied sites, and a high probability that the invaders will be overlooked, especially in visually complex habitats. To address these challenges and capitalize on the efficiency and affordability of UVID as an assessment tool, affordable and accurate UVID classification tools are essential.

Crowdsourcing video classification has potential to cost-effectively address these challenges by engaging the public (e.g., Swanson et al., 2016, Happel et al., 2020). Crowdsourcing allows multiple volunteers to classify each video, resulting in replicate classifications and consensus estimates (e.g., Good and Su, 2013, Swanson et al., 2016). Ideally, crowdsourcing can increase timeliness of reporting, reduce classification costs, and facilitate accurate analysis of large datasets.

We developed a web application, the Deep Lake Explorer (DLE), to explore the utility of crowdsourcing for classification of UVID collected in the Great Lakes. In this paper, we describe the development and implementation of DLE, evaluate its accuracy, and discuss its limitations and potential as a tool for UVID classification.

Section snippets

Video collection

Underwater video was collected in 2017–2018 in the Niagara River, Lake Huron, and Lake Ontario. We collected UVID in the Niagara River (Fig. 1) as part of a National Coastal Condition Assessment (NCCA) pilot study of Great Lakes connecting channels. In Lake Huron (2017) and in Lake Ontario (2018, Fig. 1), we collected UVID as part of the Cooperative Science and Monitoring Initiative (CSMI; ECCC and USEPA, 2018), a program of lakewide assessments in the Great Lakes coordinated by USEPA. Video survey and

Participation in Deep Lake Explorer

Within two weeks of launching, 531 volunteers had classified all 746 UVID clips uploaded to DLE. Ten volunteers classified more than 100 clips each; 61 volunteers classified 21–100 clips each, and 460 volunteers classified 20 or fewer clips each. Six percent of volunteers posted in the forum, leaving 192 comments or questions. Comments included observations about video content (61%), questions about the project and the workflow (13%), troubleshooting questions or comments (7%), suggestions

Selecting appropriate volunteer agreement thresholds

We used volunteer agreement thresholds based on agreement of DLE classification with experts to determine the result for each clip or video. The videos classified in DLE represent the range of benthic conditions within the larger Great Lakes, so the thresholds used here could be applied to future classification of similar attributes in Great Lakes videos. However, the variation in thresholds among attributes (e.g., 30% for round gobies; 50% for substrate type) suggests that other attributes
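The threshold-based consensus described above can be sketched as follows. This is an illustration, not the authors' implementation; the function name, vote encoding, and example votes are hypothetical, and only the attribute-specific thresholds (e.g., 30% for round goby presence) come from the text.

```python
# Illustrative sketch of threshold-based consensus classification:
# each clip receives replicate classifications from multiple volunteers,
# and the fraction of positive votes is compared against an
# attribute-specific agreement threshold (e.g., 0.30 for round goby
# presence, 0.50 for substrate type, per the text above).

def consensus(votes, threshold):
    """Return 'present' if the fraction of positive votes meets the threshold.

    votes: list of 0/1 volunteer classifications for one clip and attribute.
    threshold: attribute-specific agreement threshold (0-1).
    """
    fraction = sum(votes) / len(votes)
    return "present" if fraction >= threshold else "absent"

# Hypothetical example: 12 volunteers classified one clip for round gobies;
# 4 of 12 (about 33%) reported gobies, which meets the 30% threshold.
goby_votes = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0]
print(consensus(goby_votes, 0.30))  # prints "present"
```

A lower threshold for rare, easily missed taxa such as round gobies trades some false positives for fewer misses, which is consistent with the attribute-to-attribute variation in thresholds the authors report.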

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

We thank Knut Mehler, Jill Scharold, Dustin Woodruff, and the captains and crews of the US EPA R/V Lake Explorer II, Lake Guardian and Mudpuppy II, who all assisted in data collection; the participants in the DLE advisory group, and Paul Skawinski and Rachael Mady who provided reviews of the manuscript. We thank all volunteers who participated in Deep Lake Explorer, especially SirUrielPerpetua, Tomburgerpie, and an anonymous user who each classified over 350 clips. EPA Office of Research and

References (50)

  • Burlakova, L.E., et al., 2014. Competitive replacement of invasive congeners may relax impact on native species: interactions among zebra, quagga, and native unionid mussels. PLoS ONE.
  • Crall, A.W., et al., 2011. Assessing citizen science data quality: an invasive species case study. Conserv. Lett.
  • Cox, J., et al., 2015. Defining and measuring success in online citizen science: a case study of Zooniverse projects. Comput. Sci. Eng.
  • Di Salvo, R., Giordano, D., Kavasidis, I., 2013. A crowdsourcing approach to support video annotation. VIGTA ‘13...
  • Dickinson, J.L., et al., 2012. The current state of citizen science as a tool for ecological research and public engagement. Front. Ecol. Environ.
  • ECCC (Environment and Climate Change Canada) and USEPA (U.S. Environmental Protection Agency), 2018. Lake Huron...
  • Eveleigh, A., Jennett, C., Blandford, A., Brohan, P., Cox, A.L., 2014. Designing for dabblers and deterring drop-outs...
  • FFmpeg Developers, 2016. ffmpeg tool (Version be1d324) [Software]. Available from...
  • Fitzpatrick, M.C., et al., 2009. Observer bias and the detection of low-density populations. Ecol. Appl.
  • Garcia-Molina, H., et al., 2016. Challenges in data crowdsourcing. IEEE Trans. Knowl. Data Eng.
  • Gardiner, M.M., et al., 2012. Lessons from lady beetles: accuracy of monitoring data from US and UK citizen-science programs. Front. Ecol. Environ.
  • Good, B.M., et al., 2013. Crowdsourcing for bioinformatics. Bioinformatics.
  • Jiménez, M., 2018. A first approach for handling uncertainty in citizen science. In: 2018 IEEE International Conference...
  • Kamar, E., Hacker, S., Horvitz, E., 2012. Combining human and machine intelligence in large-scale crowdsourcing. In...
  • Karatayev, A.Y., et al., 2015. Zebra versus quagga mussels: a review of their spread, population dynamics, and ecosystem impacts. Hydrobiologia.