
Detecting Gas Turbine Combustor Anomalies Using Semi-supervised Anomaly Detection with Deep Representation Learning


Abstract

Deep learning (DL), regarded as a breakthrough machine learning technique, has proven effective in a variety of real-world applications. However, DL has not been actively applied to condition monitoring of industrial assets such as gas turbine combustors. We propose a deep semi-supervised anomaly detection (deepSSAD) approach with two key components: (1) using DL to learn representations, or features, from multivariate, time-series sensor measurements; and (2) using one-class classification to model normality in the learned feature space, thereby performing anomaly detection. Both steps use normal data only, so our approach falls into the semi-supervised anomaly detection category, which is advantageous for industrial asset condition monitoring, where abnormal or faulty data are rare. Using data collected from a real-world gas turbine combustion system, we demonstrate that the proposed approach achieves a detection performance (AUC) of 0.9706 ± 0.0029. Furthermore, we compare its detection performance against that of alternative designs, including different features (i.e., deep-learned, handcrafted, and PCA features) and different detection models (i.e., one-class ELM, one-class SVM, isolation forest, and Gaussian mixture model), and show that the proposed approach significantly outperforms them. The proposed approach is thus effective in detecting combustor anomalies or faults.
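The two-step pipeline described in the abstract, learning features from normal sensor data with a deep network and then fitting a one-class model in the learned feature space, can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the stacked-denoising-autoencoder layer sizes, corruption noise level, `nu` value, training loop, and the random placeholder data are all introduced for the example.

```python
# Minimal deepSSAD-style sketch (illustrative only; layer sizes, noise level, nu,
# and the placeholder data are assumptions, not the paper's settings).
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM


def train_dae_layer(x_normal, hidden, noise_std=0.1, epochs=50, lr=1e-3):
    """Train one denoising-autoencoder layer on normal data and return the encoder."""
    d = x_normal.shape[1]
    enc = nn.Sequential(nn.Linear(d, hidden), nn.Sigmoid())
    dec = nn.Linear(hidden, d)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    x = torch.as_tensor(x_normal, dtype=torch.float32)
    for _ in range(epochs):
        x_noisy = x + noise_std * torch.randn_like(x)        # corrupt the input
        loss = nn.functional.mse_loss(dec(enc(x_noisy)), x)  # reconstruct the clean input
        opt.zero_grad()
        loss.backward()
        opt.step()
    return enc


# Step 1: learn features from normal (healthy) sensor profiles only,
# e.g. 27-dimensional thermocouple (TC) profiles, with two stacked DAE layers.
x_train = np.random.rand(1000, 27).astype("float32")         # placeholder for normal data
enc1 = train_dae_layer(x_train, hidden=16)
h1 = enc1(torch.as_tensor(x_train)).detach().numpy()
enc2 = train_dae_layer(h1, hidden=8)


def features(x):
    """Map raw profiles to the learned feature space."""
    with torch.no_grad():
        return enc2(enc1(torch.as_tensor(x, dtype=torch.float32))).numpy()


# Step 2: model normality in the learned feature space with a one-class classifier.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05).fit(features(x_train))


def anomaly_score(x_new):
    """Larger values indicate more anomalous profiles."""
    return -ocsvm.decision_function(features(x_new))
```

Both training steps see only normal data; at monitoring time, `anomaly_score` is thresholded (or compared across time) to flag suspect combustor behavior.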


Notes

  1. We also tried using more SDAE layers but could not obtain better features in terms of detection performance. Our hypothesis is that a 2-layer SDAE is sufficient for capturing the normal “patterns” of the 27-dimensional TC profiles considered in this study.
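The note above judges feature quality by downstream detection performance, and the abstract compares the same learned features across several detectors using AUC. The sketch below shows one way such a comparison could be run; the features, labels, and hyperparameters are random placeholders rather than the combustor data, and one-class ELM is omitted since it has no standard scikit-learn implementation.

```python
# Hypothetical AUC comparison of detectors on a shared feature space
# (features, labels, and hyperparameters are placeholders, not the paper's data).
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest
from sklearn.mixture import GaussianMixture
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
feat_train = rng.normal(size=(1000, 8))   # features of normal training data only
feat_test = rng.normal(size=(200, 8))     # features of a mixed normal/abnormal test set
y_test = rng.integers(0, 2, size=200)     # 1 = anomaly (placeholder labels)

detectors = {
    "one-class SVM": OneClassSVM(kernel="rbf", nu=0.05),
    "isolation forest": IsolationForest(random_state=0),
    "Gaussian mixture model": GaussianMixture(n_components=3, random_state=0),
}
for name, model in detectors.items():
    model.fit(feat_train)
    if isinstance(model, GaussianMixture):
        scores = -model.score_samples(feat_test)      # low likelihood => anomalous
    else:
        scores = -model.decision_function(feat_test)  # low decision value => anomalous
    print(f"{name}: AUC = {roc_auc_score(y_test, scores):.4f}")
```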


Acknowledgments

Part of our initial research work was performed in collaboration with Dr. Lijie Yu from GE Power, who provided the combustor data and insightful domain knowledge, both of which were critical to this study. The author is grateful for Dr. Yu’s support.

Author information


Corresponding author

Correspondence to Weizhong Yan.

Ethics declarations

Conflict of Interest

The author declares that he has no conflict of interest.

Ethical Approval

This article does not contain any studies with human participants performed by the author.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Yan, W. Detecting Gas Turbine Combustor Anomalies Using Semi-supervised Anomaly Detection with Deep Representation Learning. Cogn Comput 12, 398–411 (2020). https://doi.org/10.1007/s12559-019-09710-7
