Article

Image Segmentation Methods for Flood Monitoring System

by Nur Atirah Muhadi 1,*, Ahmad Fikri Abdullah 1,2, Siti Khairunniza Bejo 1, Muhammad Razif Mahadi 1 and Ana Mijic 3

1 Department of Biological and Agricultural Engineering, Faculty of Engineering, Universiti Putra Malaysia, Serdang 43400, Malaysia
2 International Institute of Aquaculture and Aquatic Sciences, Si Rusa 71050, Malaysia
3 Department of Civil and Environmental Engineering, Skempton Building, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
* Author to whom correspondence should be addressed.
Water 2020, 12(6), 1825; https://doi.org/10.3390/w12061825
Submission received: 16 April 2020 / Revised: 9 May 2020 / Accepted: 10 May 2020 / Published: 26 June 2020

Abstract

Flood disasters are considered annual disasters in Malaysia due to their consistent occurrence. They are among the most dangerous disasters in the country. Lack of data during flood events is the main constraint to improving flood monitoring systems. With the rapid development of information technology, flood monitoring systems using a computer vision approach have gained attention over the last decade. Computer vision requires an image segmentation technique to understand the content of the image and to facilitate analysis. Various segmentation algorithms have been developed to improve results. This paper presents a comparative study of image segmentation techniques used in extracting water information from digital images. The segmentation methods were evaluated visually and statistically. To evaluate the segmentation methods statistically, the dice similarity coefficient and the Jaccard index were calculated to measure the similarity between the segmentation results and the ground truth images. Based on the experimental results, the hybrid technique obtained the highest values among the three methods, yielding an average of 97.70% for the dice score and 95.51% for the Jaccard index. Therefore, we concluded that the hybrid technique is a promising segmentation method compared to the others in extracting water features from digital images.

1. Introduction

Rapid urbanization, population growth, and extreme climate change have led to frequent flooding events, posing challenges for many countries around the world. The impacts of floods include loss of life, damage to property, and deterioration of health conditions. Several researchers have studied the effects of flooding on agricultural land [1,2,3], finding that such events may reduce agricultural production and eventually influence the economic performance of a country [4]. For instance, changes in temperature and rainfall during flood events may influence soybean growth and yield [5]. Floods can also destroy agricultural infrastructure and increase the risk of diseases. In the aquaculture field, extreme floods can damage farming infrastructure, such as sluices and shutters, as well as introduce diseases and harmful algal blooms [6].
Due to the impacts of flooding, researchers have intensified their efforts to improve and enhance flood monitoring systems. Flood monitoring is the process of monitoring the conditions that lead to flood events, such as the level of a river, so that authorities can take appropriate actions in advance. Quickly predicting flood disasters is important to prevent severe damage to property and life. The lack of data availability during flood events is the main constraint on developing a reliable flood monitoring system.
Several flood event monitoring methods are available, such as gauge sensors and remote sensing technology [7,8,9,10,11,12,13]. However, gauge sensors can only provide one spatial dimension (water level), whereas remote sensing technology experiences issues with satellite revisit time. The retrieval of information may be delayed for hours depending on the method of data transmission [14]. Currently, flood warning analysis relies on water level sensors and precipitation forecasts; therefore, it is not capable of providing near-real-time and automated flood monitoring analysis [15]. As such, visual sensing systems have been introduced that have the capability to collect a vast quantity of information from a given area. Important information for many applications can be obtained from still images and video streams. The interest in visual monitoring and surveillance systems, especially in natural disaster applications, has increased with advancing surveillance technology. It is now possible to recognize events in real time with the progress in information technology [16]. For instance, Mettes et al. [17] worked on the motion of water properties using videos. Image-based approaches, combined with the appropriate image analysis techniques, may offer efficient and cost-effective methods that may be useful for managing flood events [18]. An early warning system for flood prevention and monitoring is one of the applications of visual sensing technology.
Image processing is the use of computer algorithms to extract useful information from digital images, which is a vital procedure in visual sensing systems. To understand the content of an image, image segmentation is commonly used to partition it into several regions, which are often based on the attributes of its pixels. In particular, image segmentation has been applied in the fields of medical imaging, automated driving, and water management. In flood disaster applications, it may involve the separation of the foreground (in this case, the water features) from the background. Several image segmentation techniques are currently used by researchers and industry, such as thresholding, boundary-based, region-based, and hybrid techniques [19,20]. Some papers specifically discussed image segmentation methods handcrafted for flood disaster applications.
Generally, the choice of features is crucial in image segmentation algorithms. Nath and Deb [21] stated that one of the most promising features of digital images is color information; another commonly used feature is texture or pattern information. Many researchers have used these main features to identify flood events. For the detection of a flooding event, Lai et al. [22] employed threshold values to determine the potential foreground regions. Borges et al. [23] introduced a probabilistic model for flood detection in videos. They combined statistical characteristics of floods, such as color, texture, and saturation, using a Bayes classifier along with frame-to-frame changes to determine the presence of flooding. They then proposed a probabilistic model of flood occurrence to detect the position of flood regions, thereby significantly improving the classification performance.
San Miguel and Ruiz Jr. [24] conducted a study similar to the work of Borges et al. [23] using the thresholding method to segment the flood and non-flood regions depending on color, size, and patterns of ripples. This approach offers good flood detection capabilities but is limited by the reflections on the floodwater. Filonenko et al. [14] proposed the use of a color probability method from images obtained from a video to detect floods in real time. Lo et al. [25] employed the region growing method for flood region detection, finding it more suitable for situations in which the background and foreground shapes change over time. Therefore, all shape and size variations across the flood regions can be detected. The authors also proposed a region-based segmentation method of differentiating the foreground and background. Jyh-Horng et al. [26] proposed mean-shift and region growing approaches to develop an automated identification method for flood monitoring. The region growing algorithm is specifically used to differentiate an object within a binary mask.
Geetha et al. [27] used crowd-sourced images to detect the extent of flood areas using a color-based segmentation threshold. Zhang et al. [28] evaluated the performance of three different image processing techniques for flood monitoring systems. Their experimental results indicated that the Canny edge detection method obtained a high level of accuracy in detecting the flooded area. Langhammer and Vacková [29] used the seed region growing algorithm to perform object-based image segmentation in classifying the image into several categories.
From the cited literature, various algorithms have been developed to improve segmentation results. At present, there is no general solution to image segmentation problems that ensures reliable accuracy for flood disaster applications. Therefore, this paper presents a comparative study of image segmentation techniques—namely thresholding, region growing, and a hybrid technique—to extract water information from digital images.

2. Methodology

Based on previous studies, thresholding, region growing, and hybrid techniques are three promising image processing techniques with potential applications in flood monitoring systems. Therefore, they were selected for use in this study. The study area is located in Batu 12, Sungai Langat, Selangor, Malaysia. Images captured by a surveillance camera installed near a river were used as the input. The outdoor environment was used to more realistically test the image segmentation methods. Two different images were used: one showing normal water level conditions and the other showing overflowed water conditions. The ground truth data was manually segmented and then used to evaluate the accuracy of the tested algorithms. The images were segmented into foreground (water) and background using the thresholding, region growing, and hybrid techniques. The experiments were performed on a notebook that was equipped with a 2.60 GHz Intel® Core™ i7 CPU and 16 GB RAM. This work was implemented using MATLAB 9.8.0 (R2020a). The segmentation techniques are discussed in detail below. An illustration of the overall procedure is shown in Figure 1.
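For illustration only, the comparison workflow in Figure 1 can be summarized in a few lines of code. The study itself was implemented in MATLAB; the sketch below is a hypothetical Python equivalent in which the three segmentation methods and the two evaluation metrics are supplied as callables (concrete sketches for each appear in the subsections that follow).

```python
# Illustrative comparison driver (a sketch, not the authors' MATLAB implementation).
# `methods` maps a method name to a callable that turns an image into a boolean
# water mask; `metrics` maps a metric name to a callable scoring a mask against
# the manually segmented ground truth mask `truth`.
from typing import Callable, Dict
import numpy as np

SegmentFn = Callable[[np.ndarray], np.ndarray]
MetricFn = Callable[[np.ndarray, np.ndarray], float]

def compare_methods(image: np.ndarray,
                    truth: np.ndarray,
                    methods: Dict[str, SegmentFn],
                    metrics: Dict[str, MetricFn]) -> Dict[str, Dict[str, float]]:
    """Segment `image` with every method and score each result against `truth`."""
    truth = truth.astype(bool)
    scores: Dict[str, Dict[str, float]] = {}
    for name, segment in methods.items():
        mask = segment(image).astype(bool)
        scores[name] = {metric_name: metric(mask, truth)
                        for metric_name, metric in metrics.items()}
    return scores
```

The segmentation functions sketched in Sections 2.1 to 2.3 and the metric functions sketched in Section 2.4 could be passed in as the `methods` and `metrics` dictionaries; the region growing sketch additionally needs a seed point, so it would be wrapped (for example with functools.partial) before being supplied.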

2.1. Thresholding

Thresholding was used to segment the foreground from the background by selecting a threshold value, Th. Any pixel with a value less than or equal to the threshold value was selected as a part of the foreground, whereas any value higher than the threshold value was included in the background. The threshold value was selected by observing the histograms of the flooding images. The output image g(x,y) was obtained from the original image f(x,y) as:
g(x, y) = \begin{cases} 1, & \text{if } f(x, y) \le Th \\ 0, & \text{if } f(x, y) > Th \end{cases}
where Th is the threshold value and (x, y) is the pixel coordinate.
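The rule above amounts to a single array comparison. The following Python/NumPy sketch (an illustration, not the authors' MATLAB code) assumes the threshold is applied to a grayscale version of the image; the default value of 120 is an arbitrary placeholder, since in the study Th was chosen by inspecting the image histograms.

```python
# Minimal thresholding sketch: pixels at or below Th become foreground (water).
import numpy as np

def segment_threshold(image: np.ndarray, th: float = 120.0) -> np.ndarray:
    """Return a boolean mask with True where the gray value is <= th.

    th = 120 is a placeholder; in the study the value was selected by
    observing the histograms of the flooding images.
    """
    if image.ndim == 3:                      # convert RGB to grayscale first
        gray = image[..., :3].mean(axis=2)
    else:
        gray = image.astype(float)
    return gray <= th
```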

2.2. Region Growing

Region growing, also known as region-based segmentation, groups pixels in the image into regions with similar characteristics. First, an initial pixel, known as a seed point, is selected in the region of interest. The attributes of the seed point are compared with those of the neighboring pixels. If a neighboring pixel satisfies the similarity criterion with respect to the seed point, it is considered to belong to the same region. The region grows as nearby pixels are included. Figure 2a–f illustrate the process of region growing using a 4-connected neighborhood given a threshold value of three.
In this study, this technique appointed a seed point in the flooding area from which the region grew as pixels with similar features were grouped with it. The threshold value refers to the difference between the intensity value of the pixel and the mean of the segmented region, which was used as the measure of similarity in this study. If the difference between the new pixel and the region was smaller than the threshold value, the pixel was included in the region. In contrast, if the difference was larger than the threshold value, the new pixel was not included in the region. The 4-connected neighborhood was used to grow the neighboring pixels from the initial seed point. The iteration process stopped when all the pixels in the image were tested.
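A minimal Python sketch of this procedure is given below (an illustration, not the authors' MATLAB implementation). It assumes a grayscale image and a single, manually chosen seed point inside the water region, grows the region through a 4-connected breadth-first search, and admits a neighboring pixel when its intensity differs from the current region mean by less than the threshold.

```python
# Seeded region growing with a 4-connected neighborhood (illustrative sketch).
from collections import deque
import numpy as np

def segment_region_growing(gray: np.ndarray, seed: tuple,
                           threshold: float = 10.0) -> np.ndarray:
    """Grow a region from `seed` (row, col); a neighbor joins the region when
    the absolute difference between its intensity and the current region mean
    is smaller than `threshold`."""
    gray = gray.astype(float)
    mask = np.zeros(gray.shape, dtype=bool)
    mask[seed] = True
    region_sum, region_count = gray[seed], 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # 4-connected neighbors
            nr, nc = r + dr, c + dc
            if 0 <= nr < gray.shape[0] and 0 <= nc < gray.shape[1] and not mask[nr, nc]:
                if abs(gray[nr, nc] - region_sum / region_count) < threshold:
                    mask[nr, nc] = True
                    region_sum += gray[nr, nc]
                    region_count += 1
                    queue.append((nr, nc))
    return mask
```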

2.3. Hybrid Technique

Different methods of image segmentation may perform better for different types of images. The hybrid technique, consisting of multiple methods of image segmentation, can thus improve segmentation results [30]. We used the hybrid technique proposed by Lankton et al. [31] to detect the flooding area by combining geodesic active contours and region-based active contours. This method is capable of segmenting images with poor edge definition or that lack a homogeneous intensity profile.
Active contour methods start with an initial curve and a definition of the energy of that curve based on its geometry and the image data. The geometric term maintains the smoothness of the curve, whereas the image-data term attracts the contour to object boundaries. The curve is then iteratively deformed to reduce this energy, moving it toward a local minimum. The hybrid technique begins with the initial curve and then moves each point on the curve based on an analysis of the local interior and exterior regions. According to Lankton et al. [31], at each point on the true edge of an object, the nearby points inside and outside the object are modeled by the mean intensities of the local regions. Figure 3 shows the initial curve in the study area.
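Lankton et al.'s localized geodesic/region-based formulation is not assumed to be available off the shelf, so the sketch below substitutes a related region-based active contour, the morphological Chan-Vese implementation in scikit-image, purely to illustrate the pattern of initializing a curve and evolving it over the image. The rectangular initial level set covering the lower half of the frame is an assumption about where the river appears in the surveillance view.

```python
# Active-contour stand-in for the hybrid step (a sketch; this uses the
# morphological Chan-Vese region-based active contour from scikit-image,
# NOT Lankton et al.'s localized geodesic/region-based method used in the paper).
import numpy as np
from skimage.color import rgb2gray
from skimage.segmentation import morphological_chan_vese

def segment_hybrid(image: np.ndarray, n_iter: int = 200) -> np.ndarray:
    gray = rgb2gray(image) if image.ndim == 3 else image.astype(float)
    # Hypothetical initial level set: a rectangle over the lower half of the
    # frame, a guess at where the river sits in the camera view.
    init = np.zeros(gray.shape, dtype=np.int8)
    init[gray.shape[0] // 2:, :] = 1
    mask = morphological_chan_vese(gray, n_iter, init_level_set=init).astype(bool)
    # Chan-Vese only separates two phases; depending on scene brightness the
    # water may come out as the 0 phase, in which case the mask must be inverted.
    return mask
```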

2.4. Segmentation Evaluation

Image segmentation algorithms must be evaluated to assess their efficiency and effectiveness. Generally, segmentation evaluation falls into two major classes: subjective and objective methods. The most common approach is subjective evaluation, in which the segmentation results are judged by a human evaluator [32]. Objective methods do not involve human assumptions or assessments; they are widely used to quantitatively measure the performance of the segmented image. Quantitative evaluation methods analyze the similarity between the results of segmentation algorithms and ground truth images generated by a human expert. The Jaccard index, dice similarity coefficient (DSC), precision, and recall are some of the metrics widely used and accepted by the scientific community to evaluate the accuracy of segmentation methods.
We used the DSC and Jaccard index to assess the performance of the three segmentation methods. DSC operates on binary data and is frequently used as a statistical validation metric in segmentation evaluation. Its value ranges from 0%, which indicates there is no spatial overlap between the two sets of binary segmentation results, to 100%, which indicates complete overlap. DSC is a spatial overlap index that can be defined as:
\mathrm{DSC} = \frac{2|A \cap B|}{|A| + |B|}
where |A| and |B| are the cardinalities of sets A and B, respectively, and |A ∩ B| is the cardinality of their intersection.
The Jaccard index, also known as the intersection-over-union (IoU) metric, is another commonly used metric for evaluating semantic segmentation results. It expresses the number of objects two sets have in common as a percentage of the number of objects they have in total. In other words, it shows the size of the intersection divided by the size of the union of the sample sets. The metric ranges from 0% to 100%, with 0% indicating no overlap between the two segmented regions and 100% signifying a perfectly overlapping segmentation. The mathematical representation of the Jaccard index is:
\mathrm{Jaccard} = \frac{|A \cap B|}{|A \cup B|}
where |A ∩ B| and |A ∪ B| are the cardinalities of the intersection and the union of sets A and B, respectively.
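Both metrics reduce to a few array operations on binary masks. The following NumPy sketch (an illustration, not the study's MATLAB code) assumes two boolean masks of the same shape, with at least one of them non-empty.

```python
# Dice similarity coefficient and Jaccard index for two binary masks, in percent.
import numpy as np

def dice_score(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|), expressed as a percentage."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 100.0 * 2.0 * intersection / (a.sum() + b.sum())

def jaccard_index(a: np.ndarray, b: np.ndarray) -> float:
    """Jaccard index (IoU) = |A ∩ B| / |A ∪ B|, expressed as a percentage."""
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 100.0 * intersection / union
```

For two identical non-empty masks both functions return 100.0; for disjoint masks they return 0.0.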

3. Results and Discussion

We used images captured by surveillance cameras installed near a river. The study area is located in Batu 12, Sungai Langat, Selangor, Malaysia. Two different images were used—one during normal conditions and the other during overflowed conditions—to examine the practicality of each segmentation method. The images had a resolution of 1280 × 720 and the ground truth images were manually segmented. The original photographic images and the ground truth images are shown in Figure 4.

3.1. Qualitative Evaluation

The results of the three segmentation methods were compared with the manually segmented ground truth images. We did not require very detailed segmentation, especially inside the river area; the overall purpose of the segmentation was to precisely detect the borderline of the river. The comparisons between the segmentation results and the ground truth images are shown in Figure 5.
The green and magenta regions show areas where the segmentation results differed from the expected ground truth. Based on Figure 5a, the hybrid technique provided the nearest segmentation results to those of the manually segmented ground truth image. Its detection of the water edge was almost identical to that of the ground truth image. The region growing method was able to segment the images into foreground and background, though there were some major spots where it mistakenly classified the water as background and vice versa, as shown in Figure 5b. This was also the case with the thresholding technique. Figure 5c shows that the thresholding method labeled part of the ground as water due to the similarity in color between the river water and the bare soil.

3.2. Quantitative Evaluation

A quantitative comparison was performed between the ground truth and the segmentation results using the DSC and Jaccard index metrics. Both metric results ranged from 0% to 100%, with 0% indicating that there was no overlap between the two segmented regions and 100% indicating a perfect match between the two segmentations. The evaluations of segmented images using DSC are listed in Table 1.
The hybrid technique obtained the highest dice score, followed by region growing and thresholding, for both normal and overflowed conditions. The dice scores for the hybrid technique approached 100%, indicating that the segmented images almost entirely overlapped with the ground truth images. The average dice score of the hybrid technique for both conditions was 97.70%.
The Jaccard index was also calculated to analyze the similarity between the segmentation algorithm results and the ground truth data. The results of the Jaccard index are shown in Table 2.
Based on the Jaccard index, the hybrid technique obtained the highest values for both water conditions followed by region growing and thresholding. The average Jaccard index of the hybrid technique for both conditions was 95.51%. The thresholding method performed the worst out of the three techniques because the algorithm mistakenly classified the ground as water due to the similarity of color between the river water and bare soil. This result implies that the thresholding method is strongly dependent on image characteristics, such as lighting conditions [33].
Overall, the hybrid technique obtained higher values for both the Jaccard index and the dice score compared to the region growing and thresholding methods, with results higher than 95% on average. The results demonstrate that the hybrid technique performed best among the three techniques evaluated because it combines more than one segmentation approach; in this case, two techniques were used to segment the image. Hence, it can offer better results than single-method segmentation [31].

4. Conclusions

Over the last decade, researchers have increasingly employed computer vision approaches to improve flood detection and monitoring systems and thus reduce the severe impacts of flood disasters. Computer vision captures and processes images using segmentation techniques to understand their content. This paper presented a comparison of three segmentation methods for extracting information from digital images for flood monitoring systems. Thresholding, region growing, and hybrid techniques were compared on the basis of visual evaluation and statistical accuracy. Based on the experimental results, all three techniques were capable of extracting water information from the images. However, the hybrid technique was found to be the most promising image processing technique for extracting water features from digital images, with segmentation evaluation results higher than 95% on average. One limitation of these segmentation techniques is that the algorithms must be adjusted when different images are used. Semantic segmentation is another method for identifying real-time events [34]. Future research may investigate deep learning techniques to develop more advanced segmentation methods.

Author Contributions

N.A.M.: conceptualization, methodology, writing—original draft preparation, software. A.F.A.: writing—review, editing, validation, project administration. S.K.B.: conceptualization, software, visualization. M.R.M.: visualization, supervision. A.M.: supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by Universiti Putra Malaysia under Geran Putra Berimpak (UPM/800-3/3/1/GPB/2019/9678700).

Acknowledgments

The authors appreciate the support for this study from the Universiti Putra Malaysia and the Institute of Aquaculture and Aquatic Sciences.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goyari, P. Flood damages and sustainability of agriculture in Assam. Econ. Political Wkly. 2005, 40, 2723–2729.
  2. Brémond, P.; Grelot, F. Review Article: Economic evaluation of flood damage to agriculture—Review and analysis of existing methods. Nat. Hazards Earth Syst. Sci. 2013, 13, 2493–2512.
  3. Muhadi, N.A.; Abdullah, A.F.; Vojinovic, Z. Estimating Agricultural Losses using Flood Modeling for Rural Area. MATEC Web Conf. 2017, 103, 4009.
  4. Muhadi, N.; Abdullah, A.F. Flood damage assessment in agricultural area in Selangor River Basin. J. Teknol. 2015, 76, 111–117.
  5. Ahmadzadeh Araji, H.; Wayayok, A.; Massah Bavani, A.; Amiri, E.; Abdullah, A.F.; Daneshian, J.; Teh, C.B.S. Impacts of climate change on soybean production under different treatments of field experiments considering the uncertainty of general circulation models. Agric. Water Manag. 2018, 205, 63–71.
  6. Abery, N.W.; Hai, N.; Hao, N.; Minh, T.; Phuong, N.; Sumnongsong, S.; Dulyapurk, V.; Kaewnern, M.; Nagothu, U.; De Silva, S. Perception of Climate Change Impacts and Adaptation of Shrimp Farming in Ca Mau and Bac Lieu, Vietnam: Farmer Focus Group Discussions and Stakeholder Workshop Report. 2009. Available online: http://webcache.googleusercontent.com/search (accessed on 17 March 2020).
  7. García-pintado, J.; Mason, D.C.; Dance, S.L.; Cloke, H.L.; Neal, J.C.; Freer, J.; Bates, P.D. Satellite-supported flood forecasting in river networks: A real case study. J. Hydrol. 2015, 523, 706–724.
  8. Notti, D.; Giordan, D.; Cal, F.; Pepe, A.; Zucca, F.; Galve, J.P. Potential and Limitations of Open Satellite Data for Flood Mapping. Remote Sens. 2018, 10, 1673.
  9. Pekel, J.F.; Vancutsem, C.; Bastin, L.; Clerici, M.; Vanbogaert, E.; Bartholomé, E.; Defourny, P. A near real-time water surface detection method based on HSV transformation of MODIS multi-Spectral time series data. Remote Sens. Environ. 2014, 140, 704–716.
  10. Pulvirenti, L.; Chini, M.; Pierdicca, N.; Guerriero, L.; Ferrazzoli, P. Flood monitoring using multi-temporal COSMO-skymed data: Image segmentation and signature interpretation. Remote Sens. Environ. 2011, 115, 990–1002.
  11. Rokni, K.; Ahmad, A.; Selamat, A.; Hazini, S. Water feature extraction and change detection using multitemporal landsat imagery. Remote Sens. 2014, 6, 4173–4189.
  12. Schumann, G.J.P.; Neal, J.C.; Mason, D.C.; Bates, P.D. The accuracy of sequential aerial photography and SAR data for observing urban flood dynamics, a case study of the UK summer 2007 floods. Remote Sens. Environ. 2011, 115, 2536–2546.
  13. Skakun, S.; Kussul, N.; Shelestov, A.; Kussul, O. Flood Hazard and Flood Risk Assessment Using a Time Series of Satellite Images: A Case Study in Namibia. Risk Anal. 2014, 34, 1521–1537.
  14. Filonenko, A.; Hernández, D.C.; Seo, D.; Jo, K.-H. Real-time flood detection for video surveillance. In Proceedings of the IECON 2015—41st Annual Conference of the IEEE Industrial Electronics Society, Yokohama, Japan, 9–12 November 2015; pp. 4082–4085.
  15. Menon, K.P.; Kala, L. Detection and Mobile App for Flood Alert. In Proceedings of the IEEE 2017 International Conference on Computing Methodologies and Communication, Erode, India, 18–19 July 2017; pp. 515–519.
  16. Sanmiguel, J.C.; Martínez, J.M. A semantic-based probabilistic approach for real-time video event recognition. Comput. Vis. Image Underst. 2012, 116, 937–952.
  17. Mettes, P.; Tan, R.T.; Veltkamp, R.C. Water detection through spatio-temporal invariant descriptors. Comput. Vis. Image Underst. 2017, 154, 182–191.
  18. Creutin, J.D.; Muste, M.; Bradley, A.A.; Kim, S.C.; Kruger, A. River gauging using PIV techniques: A proof of concept experiment on the Iowa River. J. Hydrol. 2003, 277, 182–194.
  19. Adams, R.; Bischof, L. Seeded region growing. IEEE Trans. Pattern Anal. Mach. Intell. 1994, 16, 641–647.
  20. Shih, F.Y.; Cheng, S. Automatic seeded region growing for color image segmentation. Image Vis. Comput. 2005, 23, 877–886.
  21. Nath, R.K.; Deb, S.K. Water-Body Area Extraction from High Resolution Satellite Images-An Introduction, Review, and Comparison. Int. J. Image Process. 2010, 3, 353–372.
  22. Lai, C.L.; Yang, J.C.; Chen, Y.H. A real time video processing based surveillance system for early fire and flood detection. In Proceedings of the 2007 IEEE Instrumentation & Measurement Technology Conference IMTC 2007, Warsaw, Poland, 1–3 May 2007; pp. 1–6.
  23. Borges, P.V.K.; Mayer, J.; Izquierdo, E. A probabilistic model for flood detection in video sequences. In Proceedings of the 2008 15th IEEE International Conference on Image Processing, San Diego, CA, USA, 12–15 October 2008; pp. 13–16.
  24. San Miguel, M.J.P.; Ruiz, C.R. A flood detection and warning system based on video content analysis. In Proceedings of the International Symposium on Visual Computing, Las Vegas, NV, USA, 12–14 December 2006; Springer: Berlin/Heidelberg, Germany, 2016; Volume 8034, pp. 65–74.
  25. Lo, S.-W.; Wu, J.-H.; Lin, F.-P.; Hsu, C.-H. Cyber surveillance for flood disasters. Sensors 2015, 15, 2369–2387.
  26. Jyh-Horng, W.; Chien-Hao, T.; Lun-Chi, C.; Shi-Wei, L.; Fang-Pang, L. Automated Image Identification Method for Flood Disaster Monitoring In Riverine Environments: A Case Study in Taiwan. In Proceedings of the AASRI International Conference on Industrial Electronics and Applications (IEA 2015), London, UK, 27–28 June 2015; Atlantis Press: Paris, France, 2015.
  27. Geetha, M.; Manoj, M.; Sarika, A.S.; Mohan, M.; Rao, S.N. Detection and estimation of the extent of flood from crowd sourced images. In Proceedings of the 2017 IEEE International Conference on Communication and Signal Processing (ICCSP), Melmaruvathur, India, 6–8 April 2017; pp. 603–608.
  28. Zhang, Q.; Jindapetch, N.; Duangsoithong, R.; Buranapanichkit, D. Investigation of Image Processing based Real-time Flood Monitoring. In Proceedings of the 2018 IEEE 5th International Conference on Smart Instrumentation, Measurement and Application (ICSIMA), Songkhla, Thailand, 27–28 November 2018; pp. 1–4.
  29. Langhammer, J.; Vacková, T. Detection and Mapping of the Geomorphic Effects of Flooding Using UAV Photogrammetry. Pure Appl. Geophys. 2018, 175, 3223–3245.
  30. Khan, M.W. A survey: Image segmentation techniques. Int. J. Future Comput. Commun. 2014, 3, 89–93.
  31. Lankton, S.; Nain, D.; Yezzi, A.; Tannenbaum, A. Hybrid geodesic region-based curve evolutions for image segmentation. In Medical Imaging 2007: Physics of Medical Imaging; International Society for Optics and Photonics: San Diego, CA, USA, 2007; Volume 6510, p. 65104.
  32. Zhang, H.; Fritts, J.E.; Goldman, S.A. Image segmentation evaluation: A survey of unsupervised methods. Comput. Vis. Image Underst. 2008, 110, 260–280.
  33. Thenkabail, P.S. Remote Sensing of Water Resources, Disasters, and Urban Studies; CRC Press: Boca Raton, FL, USA, 2015; ISBN 1482217929.
  34. Leo, M.; Medioni, G.; Trivedi, M.; Kanade, T.; Farinella, G.M. Computer vision for assistive technologies. Comput. Vis. Image Underst. 2017, 154, 1–15.
Figure 1. The overall procedure of this study.
Figure 2. The region growing process using a 4-connected neighborhood. (a) The red circle indicates the initial pixel in the region of interest. (b,c) The blue color represents the pixels that are considered to belong in the same region as the initial pixel. (d–f) The green color indicates pixels that are excluded from the region because the difference is larger than the threshold value.
Figure 3. The initial curve in the study area.
Figure 4. Original and ground truth images during (a) normal conditions; (b) overflowed conditions.
Figure 5. The comparisons between the segmentation results and the ground truth images when using (a) hybrid technique, (b) region growing, and (c) thresholding.
Table 1. Dice score (DSC) for normal and overflowed conditions.

Segmentation Method    Normal Conditions (%)    Overflowed Conditions (%)
Hybrid                 96.71                    98.68
Region growing         94.90                    98.22
Thresholding           77.99                    97.77
Table 2. Jaccard index for normal and overflowed conditions.

Segmentation Method    Normal Conditions (%)    Overflowed Conditions (%)
Hybrid                 93.63                    97.39
Region growing         90.20                    96.51
Thresholding           63.92                    95.63
