Article

Quality 4.0 in Action: Smart Hybrid Fault Diagnosis System in Plaster Production

Faculty of Sciences and Technology and Uninova CTS, NOVA University of Lisbon, Campus de Caparica, 2829-516 Caparica, Portugal
* Author to whom correspondence should be addressed.
Processes 2020, 8(6), 634; https://doi.org/10.3390/pr8060634
Submission received: 9 April 2020 / Revised: 18 May 2020 / Accepted: 21 May 2020 / Published: 26 May 2020
(This article belongs to the Special Issue Advanced Process Monitoring for Industry 4.0)

Abstract

Industry 4.0 (I4.0) represents the Fourth Industrial Revolution in manufacturing, expressing the digital transformation of industrial companies by means of emerging technologies. Factories of the future will rely on hybrid solutions, while quality remains at the heart of all manufacturing systems regardless of the type of production and products. Quality 4.0 is a branch of I4.0 that aims to boost quality by employing smart solutions and intelligent algorithms. There are many conceptual frameworks and models, while the main challenge is to experience Quality 4.0 in action at the workshop level. In this paper, a hybrid model based on a neural network (NN) and an expert system (ES) is proposed for dealing with control chart patterns (CCPs). The idea is to have, instead of a passive descriptive model, a smart predictive model that recommends corrective actions. A construction plaster-producing company was used to present and evaluate the advantages of this novel approach, and the results show the competency and applicability of Quality 4.0 in action.

1. Introduction, Background, and Problem Statement

1.1. Introduction

In today’s globally complex and competitive business environments, quality is one of the crucial issues for ensuring the success of enterprises [1]. In order to produce with the desired quality and meet customers’ expectations, production processes need to be monitored to avoid defects and deviations [2]. Traditionally, statistical process control (SPC) was used as a powerful approach for monitoring and identifying variations manually [1,2]. Developments in manufacturing and information technology enabled SPC to move from merely statistical control to real-time diagnosis with minimal human intervention [3]. Control charts, invented by Shewhart in the 1920s, are essential SPC tools for monitoring the behavior of a process. They are used to decide whether the process is behaving as intended or is affected by unnatural causes of variation. X-bar and R charts are the basic Shewhart control charts, plotting a series of process measurements together with control limits [3,4]. Process variation emerges from either common causes (natural variations) or specific causes (assignable reasons). Specific causes produce changes and short-term fluctuations that destroy the stability of the process; they ought to be identified and eliminated as quickly as time permits. Common causes stem from the inherent characteristics of the process, and, when only they are present, the deviations (background noise) remain in control [5,6].
However, the most crucial ability of control charts is detecting the various types of patterns, formed by series of consecutive points observed on the charts, which reflect fluctuations in the process [7]. Control chart patterns (CCPs) are generally divided into natural and unnatural patterns. Natural patterns usually exist in the manufacturing process and indicate that the process is statistically stable. As long as the measured data lie inside the control limits and only natural random patterns exist, the process is under control. When some measurements fall outside the control limits, or the measured data within the control limits form a non-random pattern, the process is deemed out of control. Unnatural patterns displayed in control charts can be of various types, and each class can be related to specific causes that unfavorably influence process stability. For example, “shift” patterns may be related to variations in raw material, supplier, or machine, whereas “trend” patterns may occur due to gauge wear or environmental changes [1,8]. Common patterns that regularly emerge in control charts are shown in Figure 1.
Over time, various further decision rules such as “zone tests” and “run rules,” including the “Western Electric” and “Nelson” rules, were developed to assist quality control engineers and operators in detecting unnatural CCPs and circumstances leading to a change in the process [9]. Table 1 shows the most widely recommended rules for Shewhart control charts to identify abnormal patterns and interpret their characteristic signs. In general, the use of run rules can quickly signal a shift in the process. However, applying all these rules when no particular cause exists increases the risk of false alarms (Type I errors) to an unacceptable extent. In addition, run rules do not provide valuable pattern-related information because they lack sufficient pattern discrimination capability. Furthermore, control charts do not take prior knowledge or adequate historical data into account. Therefore, these decision rules are not particularly useful for CCP recognition [10,11]. Since the analysis of control charts is complicated, relying on considerable statistical knowledge, skill, and experience of the practitioners (quality control personnel), developing an efficient automated pattern recognition system that ensures steady and unbiased analysis of CCPs can close this gap [12].
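To make the preceding discussion of control limits and run rules concrete, the following MATLAB sketch computes X-bar control limits from simulated subgroup means and checks two of the Western Electric rules (a point beyond the 3-sigma limits, and eight consecutive points on one side of the center line); the data and the choice of rules are illustrative only and do not reproduce Table 1 in full.
% Illustrative sketch: X-bar limits and two Western Electric run rules (simulated data).
rng(1);
xbar = 5.4 + 0.1*randn(25,1);            % 25 simulated subgroup means
mu0 = mean(xbar);  sigma0 = std(xbar);
UCL = mu0 + 3*sigma0;  LCL = mu0 - 3*sigma0;
% Rule: any single point beyond the 3-sigma limits.
rule1 = any(xbar > UCL | xbar < LCL);
% Rule: eight consecutive points on the same side of the center line.
side = sign(xbar - mu0);
runLen = 0;  rule4 = false;
for k = 1:numel(side)
    if k > 1 && side(k) == side(k-1) && side(k) ~= 0
        runLen = runLen + 1;
    else
        runLen = 1;
    end
    if runLen >= 8, rule4 = true; end
end
fprintf('Rule 1 violated: %d, run-of-eight rule violated: %d\n', rule1, rule4);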

1.2. Background and Problem Statement

With developments in manufacturing and computing technology, several approaches were proposed that use artificial intelligence technologies such as artificial neural networks (ANNs), expert systems (ESs), and fuzzy sets to recognize CCPs automatically and intelligently [13]. In the domain of SPC, fast and accurate control, observation of the variation of quality characteristics, and, consequently, recognition of unnatural patterns are the primary purposes of any fault detection and diagnosis system. There are numerous studies in this field on CCP recognition that used different machine learning algorithms and other intelligent approaches, namely, K-nearest neighbors (KNN), decision trees (DT), NN-based models, ES-based models, support vector machines (SVM), wavelet-based models, and fuzzy logic [14,15,16]. These approaches aim at extracting meaningful information from a large amount of data to detect instabilities in the process with minimal time and cost and maximum accuracy [17]. To sum up, the most significant approaches are summarized briefly in Table 2, highlighting their advantages and disadvantages.
The literature review shows that ANNs and ESs are the most widely used approaches, being easier to understand and implement and having higher performance in comparison to the other CCP recognition approaches mentioned above. NNs are suitable for SPC as they are good at classification and pattern recognition, and they can handle noisy measurements without requiring explicit rules about the monitored data [20]. Notably, ESs are useful for quality control applications due to their potential for identifying causes of deviations and recommending preventive and corrective actions [23]. There are two approaches to applying ANNs to CCP recognition: (1) using neural networks (NNs) to detect variation in X-bar and/or R charts, and (2) using NNs to identify unnatural patterns [19]. In this regard, NNs can be classified into two main categories: supervised NNs, involving the multilayer perceptron (MLP) and radial basis function (RBF), and unsupervised NNs, including learning vector quantization (LVQ) and adaptive resonance theory (ART) [10]. Among the ANNs, the MLP was successfully exploited by many researchers to address the unnatural CCP recognition problem, and LVQ is a well-established alternative that avoids the slow training of the MLP network [4,12,20,27].
Fault diagnosis is an essential issue in SPC, as it reduces downtime and the disruption cascades that can ensue [24]. In recent years, various diagnostic systems were developed to automate fault diagnosis, but none of them fit the plaster production problem discussed here. Most fault diagnosis approaches in the literature consider only a particular control chart, often the X-bar or R (range) chart, to examine changes in the process (mean or variance). In practice, however, many processes require the two charts to be combined, as multiple assignable causes may occur [28]. On the other hand, identifying unnatural patterns in combination with specific knowledge of the process results in a more targeted diagnosis. Unfortunately, none of the CCP recognition models in the literature provide this combination automatically, although it can be valuable for diagnostic purposes. Moreover, the performance of these approaches was generally not evaluated in a real case study during their development.
Yet, the common problems reported in these studies are the inability to recognize various single and concurrent CCPs, as well as a high rate of false recognition [4,29]. In addition, most applications of NNs and ESs to CCP recognition do not provide more detailed information about the patterns and their change point (when these patterns start to be observed on the control charts). This information is essential for practical assignable cause analysis and, in turn, accelerates the accomplishment of proper remedial activities [21].
Therefore, in this paper, a hybrid fault diagnosis system is proposed that uses NNs and a rule-based ES to help quality control personnel recognize the roots of deviations and take the needed predictive or corrective actions. In the design of the NN structure, a modular approach comprising an LVQ network and seven multilayer perceptrons (MLPs) is used. Our work thus provides a neural expert system for intelligent real-time monitoring and predictive, corrective, and remedial diagnosis of process control in plaster production. To develop the proposed neural expert system (Figure 2), we address the following notable features of the model:
  • Ability to detect various natural and unnatural (single and concurrent) CCPs.
  • Monitoring and analyzing X-bar and R charts abnormalities simultaneously.
  • Capability to estimate nonrandom patterns’ corresponding parameters, different directions, and change points (starting point) in control charts.
  • Identifying the variables responsible for the occurrence of unnatural patterns.
  • Recognizing causes of process instability.
  • Recommending predictive and/or corrective actions in a time of crisis.
The idea is to have, instead of a passive descriptive model, a smart predictive model that assists quality control engineers in the fault diagnosis of the process, particularly from a practical perspective on the Quality 4.0 era.

1.3. Contribution to Industry 4.0

Quality 4.0 is a branch of the Industry 4.0 (I4.0) movement associated with the digital transformation process connected with emerging technologies. Quality 4.0 can be defined as the application of Industry 4.0 technologies to quality management methods and tools [30]. According to Reference [31], “Quality 4.0 does not replace traditional quality methods, but rather builds and improves upon them”. This concept covers all issues of advanced quality management in the digital era [32]. For quality (technology, processes, and people), Industry 4.0 enables the transformation of existing capabilities (culture, management, collaboration, and competencies) to drive value [31].
The impact of I4.0 on manufacturing goes beyond the physical production of goods, targeting all processes and functions to achieve flexibility, smartness, cost-effectiveness, and resilience. Artificial intelligence and machine learning are among the emerging technologies that can be utilized to enhance quality as the heart of smart manufacturing [12,33,34].
On the other hand, construction projects face different sources of disruption, as they are time-limited, expert-dependent, and highly influenced by process fluctuations caused by weather conditions, material quality, etc., which leads to a high level of complexity and uncertainty in the construction ecosystem [35]. Industry 4.0 has challenged the construction ecosystem by demonstrating the potential of digitalization, with real-time data collection, processing, and sharing tools that improve the alignment between demand and supply [35,36].
SPC is an essential tool for monitoring process disruptions and supporting safety assurance and reliability analysis in construction projects [37]. Industry 4.0, with its automation, connectivity, and digital access capacity, is anticipated to increase the efficiency and productivity of SPC. This can happen through intelligent monitoring and diagnosis, automatic tracking of equipment and material, and real-time decision-making, especially in situations where the process is becoming more volatile and complex (Figure 3) [12,30].
Digitalization and automation are the two pillars of smart manufacturing [38]. In this work, our effort was to develop a model that builds on traditional quality systems and is driven by disruptive technology: artificial intelligence in the form of an ANN and an ES is employed, which brings new value to the value chain of the manufacturing system at the factory floor level.
In fact, the innovation associated with the digitalization, automation, communication, optimization, and customization of Industry 4.0 concepts and trends allows for real-time analysis and interpretation of production, industrial, and service processes to improve quality by detecting failures and identifying their possible causes while staying competitive in volatile business environments [30,39]. This work is one step toward making the dream of “smart manufacturing” happen under the light of Industry 4.0.
The remainder of this paper is organized as follows: Section 2 concisely outlines the methodology of the research; Section 3 presents the proposed model; Section 4 describes the detailed structure of the model; Section 5 presents a comparative analysis and shows some results from a real case study; finally, Section 6 concludes the paper.

2. Materials and Methods

The methodology of this research is descriptive–experimental: a systematic mapping study based on Reference [40] combined with an implemented case study in a plaster production plant. Figure 4 presents the schematic diagram depicting the proposed procedure. Using the literature review and benchmarking of the intelligent models used in process control, the structure of a hybrid fault diagnosis system using an ANN and an ES for the process control of plaster production is presented in this research. Mapping is used to structure and synthesize the three main research areas of this work: statistical process control, neural networks, and expert systems. The case study, based on experiences from model implementation and validation in a plaster production plant, is reported in Section 4.
The plaster production process, which was selected as the case study, is a fluctuating process with many influencing parameters. Because the final product, i.e., construction plaster or so-called “plaster of Paris (PoP)”, is mixed in a silo, monitoring and correcting the process within a short period can prevent the entire silo’s stored product from being spoiled. Conversely, if the process is not monitored with statistical process control over an extended period, the non-compliance of part of the product with the standard can spoil the entire stored production; for example, if more than 10% of the silo is filled with a non-conforming product, the whole content of the silo is lost. In order to improve the process quality, a survey of experts was carried out using a questionnaire and interviews to identify the critical control parameters. The “initial setting time” of the plaster was identified as the critical parameter of the production process. The initial setting time depends on the “crystal water” of the baked plaster and ought to last between 7 and 15 min in the intended case study. The acceptance range in our case study was between the lower (LSL = 5.0) and upper (USL = 5.08) specification limits, and the process was deemed in control within the lower and upper control limits of LCL = 5.26 and UCL = 5.56. Then, based on existing records, the causes of process failures and defects in construction plaster, which were connected with the plaster’s qualitative characteristics, were examined using a “cause and effect” diagram [41]. Finally, the parameters that could improve customer satisfaction were determined and analyzed, after identifying and prioritizing the foreseeable failure modes, by applying failure mode and effects analysis (FMEA) [42].
The statistical population of this research comprised the PoP baked at a particular time in the “low burn” kiln and moved from the baking salon to the storage silo. The sampling method was stratified random sampling. Because of the characteristics of the plaster production process, and consistent with the background studies, 25 subgroups (n = 125 samples in total) were taken from multiple samples from different shifts. In this research, data were analyzed using three approaches: FMEA, ANN, and discriminant analysis (DA) [1]. To perform the discriminant analysis, a database readable by the “SAS” software was prepared using Excel, and the analysis was performed with the “Proc Discrim” procedure in SAS. For the case study, the data related to the critical parameter of the process were first collected, and the causes of product failure were identified and prioritized. Then, given that the proposed model is an intelligent hybrid model that can learn the patterns from the input data (samples) using the learning power of neural networks, the patterns in the data were detected. Finally, the identification error on the training and test datasets was compared with that of the statistical method of discriminant analysis. In this research, in order to monitor and troubleshoot the process, a model combining SPC and artificial intelligence was designed using the “MATLAB” software. The program codified in MATLAB is able to generate, present, and quickly encode the neural network input data, as well as execute the expert system rules; it can also perform traditional SPC operations.

3. The Proposed Hybrid Fault Diagnosis Model

Based on the discussion above, this research integrates NNs and ESs to provide analysis and interpretation of CCPs. The main focus of this study is to introduce a neural expert system-based pattern identifier that allows abnormal patterns to be identified so that their assignable causes can be corrected. The operator is warned if an abnormal pattern occurs in the process. By replacing human skills with a detection algorithm, human intervention is greatly reduced, and an intelligent manufacturing environment can be achieved. In this study, NNs are used to recognize control chart patterns, and an expert system is used to interpret the identified pattern and determine the causes of the abnormal pattern. The general model of the research is depicted in Figure 5. As Figure 6 indicates, the proposed system consists of three subsystems:
  • The SPC subsystem performs traditional statistical process control: using statistical formulas, it plots the mean and variance of the sampled process data, sets the control limits, determines the capability of the process, and, whenever any point on the charts is out of control, raises an “out of control” alert.
  • The pattern recognition subsystem is responsible for detecting abnormal CCPs. Here, unnatural patterns in the X-bar chart are detected using neural networks, and abnormal patterns in the R chart are identified using the “Western Electric” rules within the rule-based expert system.
  • The reasoning subsystem is responsible for interpreting the causes of process variations and proposing corrective or preventive actions. In this subsystem, the cause of the abnormal patterns in the X-bar chart is interpreted using process-specific knowledge provided as if–then rules in the knowledge base, whereas the cause of the unnatural patterns in the R chart is interpreted using general process knowledge, also presented as if–then rules in the knowledge base (Figure 6).
Overall, the model design structure can be divided into three stages of neural network creation, expert system development, and integration of neural network and expert system, as explained below.

4. Experimental Results

This section describes the detailed characteristics of the structure of the proposed model and provides the results of their performances.

4.1. Developing the Neural Network Model

In the subsections below, the procedure for simulating normal and abnormal patterns in this research is firstly described. Then, the steps of creating a neural network, including neural network model structure design, neural network training, and neural network model validation, are presented.

4.1.1. Simulation of Unnatural CCPs and Their Corresponding Parameters

In the present research, because a large number of useful samples for investigating abnormal CCPs was not available, simulation of the patterns for training and testing the networks was required; however, the data were simulated in line with the underlying process data. In statistics, every random variable has a probability distribution function from which its relevant parameters are determined; therefore, any natural deviation can be determined according to the probability distribution function of the corresponding random variable [12]. With these explanations, the parameters and functions employed to simulate the control chart patterns are presented in Table 3. In this table, the parameters represent, on the one hand, the non-random disturbances and, on the other hand, reflect the process improvement during the implementation of recovery programs. In designing the proposed model, it is intended to identify the X-bar chart patterns using the NN. The simulator function for the natural variation of the X-bar chart is the normal distribution, x(t) = n(t), and the parameters of this distribution in our case are µ = 5.4 and σ = 0.1. Given these values, the corresponding parameters of the other abnormal patterns in this chart were calculated, as shown in Table 3.
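Table 3 lists the exact simulator functions and parameters used in this work; as a hedged illustration of the general idea, the MATLAB sketch below generates a natural pattern and three unnatural patterns around the stated process level (µ = 5.4, σ = 0.1). The shift magnitude, trend slope, cycle amplitude, and period below are placeholder values chosen for readability, not the values of Table 3.
% Illustrative simulation of control chart patterns (placeholder parameters).
rng(0);
n = 25;  t = (1:n)';                 % 25 observations per window, as in the paper
mu = 5.4;  sigma = 0.1;              % process mean and standard deviation
natural = mu + sigma*randn(n,1);                 % x(t) = n(t)
shift   = natural + 0.15*(t >= 10);              % step shift of +0.15 starting at point 10
trend   = natural + 0.01*t;                      % upward trend with slope 0.01 per sample
cycle   = natural + 0.1*sin(2*pi*t/10);          % cycle with amplitude 0.1 and period 10
plot(t, [natural, shift, trend, cycle]);
legend('natural', 'shift', 'trend', 'cycle');
xlabel('sample'); ylabel('initial setting time');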

4.1.2. Designing the Structure of the Neural Network Model

In designing the general structure of the NN model, a modular approach was used. The overall model structure consisted of two separate sets of Module I and Module II. In the modular approach, the inputs and outputs of each network can be better managed, and the results of each network performance can be traced.
Module I
Module I was developed to diagnose the behavior of the plaster production process. To this end, the classification power of competing algorithms was used, and a learning vector quantization (LVQ) network was designed to classify input patterns.
o Topology of LVQ Network
In the LVQ network (Figure 7), the connection type between layers is semi-connected, and the input layer, according to the process requirements, includes 25 neurons (25 samples taken from the process). The first layer contains 175 neurons, and the second layer includes eight neurons; there was no need for a bias term. Each of the second-layer neurons represents one of the simulated patterns. Accordingly, neuron #1 detects natural patterns, and the other neurons detect the abnormal patterns of shift, trend, cycle, systematic, shift + trend, shift + cycle, and trend + cycle, respectively. To minimize the number of outputs, patterns such as “upward shift” and “downward shift,” which obey the same equation but with different parameter values, are represented by a single class in this network. The main criterion for determining the number of subgroups required per class was reducing the incorrect identification of patterns. On the other hand, we attempted to assign almost identical numbers of neurons to patterns with the same number of parameters (Table 4).
o Learning Algorithm Used in Module I
The LVQ network in Module I was trained by enabling competition among the “Kohonen” neurons. The competition is based on the Euclidean distances (di) between the weight vectors (Wi) of these neurons (i) and the input vector (X).
d_i = \lVert W_i - X \rVert = \sqrt{\sum_j (W_{ij} - X_j)^2}.
The neuron which has the least distance is the winner in the competition and is allowed to change its connection weights. The weights of the other neurons remain unchanged. The new weights can be obtained from
W_{new} = W_{old} + λ(X − W_{old}),
if the winner neuron is in the correct output category, or
W_{new} = W_{old} − λ(X − W_{old}),
if the winner is in the wrong category [18,27].
In the above equations, λ is the learning rate, which decreases monotonically with the number of iterations. In this research, λ = 0.01 was considered. Appendix A provides the Matlab code.
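Appendix A lists the full Module I training loop; the fragment below is only a self-contained toy illustration of the competitive step and the update rules above, with two prototype vectors, one input window, and randomly generated data.
% Toy illustration of one LVQ update step (not the full Module I code).
rng(0);
X = randn(25,1);                                  % one coded input window
W = randn(25,2);                                  % two prototype (Kohonen) weight vectors
labels = [1 2];                                   % class assigned to each prototype
target = 1;                                       % true class of X
lambda = 0.01;                                    % learning rate used in this research
d = sqrt(sum((W - repmat(X,1,2)).^2, 1));         % Euclidean distance to each prototype
[~, winner] = min(d);                             % competition: the closest prototype wins
if labels(winner) == target
    W(:,winner) = W(:,winner) + lambda*(X - W(:,winner));   % pull toward X
else
    W(:,winner) = W(:,winner) - lambda*(X - W(:,winner));   % push away from X
end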
Module II
Module II of the proposed model was formulated to estimate the parameters of the unnatural patterns in the process control charts and to estimate the change point of the unnatural patterns.
o Topology of MLP Networks
In Module II, seven multilayer perceptron (MLP) networks perform basic (single) and concurrent (mixture) pattern analysis. In this module, each network interprets only one of the abnormal patterns. In these networks, the main parameters are estimated based on the definitions set out in Table 3. Moreover, Module II provides the conditions for estimating the change point of abnormal behaviors in the control charts. In the MLP networks (Figure 8), the layers are fully connected, and the number of inputs to all MLP networks is 26, of which 25 are the input neurons and one is the network bias, which is equal to 1 for all simulated data. The number of hidden-layer neurons was optimized as follows: for a given target error, the number of iterations required by each candidate MLP to reach that error was calculated, and the number of neurons requiring the fewest iterations to reach the desired error was chosen as the optimal number of neurons (Table 5). In each output layer, the number of neurons corresponds to the number of parameters of the respective pattern. In order to approximate the different directions of process changes (e.g., upward or downward), with outputs in the interval [−1, 1], a bipolar sigmoid transfer function with the constant A = 0.1 is used, as defined below.
g(x) = \frac{1 - e^{-x}}{1 + e^{-x}}, \quad x = A \cdot net.
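In MATLAB, this transfer function can be written as an anonymous function; the short sketch below only illustrates its shape and its (−1, 1) output range for the stated gain A = 0.1.
% Bipolar sigmoid transfer function with gain A; output lies in (-1, 1).
A = 0.1;
g = @(net) (1 - exp(-A*net)) ./ (1 + exp(-A*net));
net = linspace(-60, 60, 200);
plot(net, g(net)); grid on;
xlabel('net input'); ylabel('g(A*net)');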
o Learning Algorithm Used in Module II
The training method used for the MLP networks is backpropagation with an adaptive learning rate, where the weights of each layer are corrected, using the layer output and its derivative, until the network is fully trained. In this study, the training dataset is applied to the corresponding networks per pattern category, and errors are calculated at each step until the learning process is completed. The learning rate (λ) also changes according to the following rule, where E(t) is the network error at time step t:
\lambda(t) = \begin{cases} 0.99\,\lambda(t-1) & \text{if } E(t) \le E(t-1) \\ 0.01\,\lambda(t-1) & \text{if } E(t) > E(t-1). \end{cases}
The training stop condition is defined on the network error; training seeks to minimize the squared error between the network outputs and the target values using the gradient descent method. The network error, which is a cumulative error, is defined below, where p stands for the number of patterns, o represents the number of output neurons, d_ij is the desired value of output j for pattern i, and o_ij is the actual network output j for pattern i [43]. Appendix B provides the source code written in MATLAB.
E = \frac{1}{2} \sum_{i=1}^{p} \sum_{j=1}^{o} (d_{ij} - o_{ij})^2.
o Change Point of Unnatural Patterns
Estimating the change point (starting point) and the subsequent length of the unnatural pattern sequence, which express when the problem started and how long it lasted, can help discover the causes of disorders. Since the change point of the abnormal patterns is given as a Module II (MLP network) output, the change point is randomly generated between 1 and 10 when the training data are created, and the network is trained to estimate its value as an output. It should be noted that, in Module I, a fixed number is assumed as the starting point for each pattern. In choosing this assumed fixed number as the change point, the conditions for pattern formation are considered with regard to the pattern’s parameters. For example, since the cycle pattern has a period parameter (T = 8–12), its starting point should be earlier.

4.1.3. Neural Network Training

Training examples were introduced to the NN in random order. Before training, the connection weights were initialized with small random values. The weights were then adjusted by presenting each pattern to the network. A maximum of 200,000 iterations was set as the stopping criterion for training. During the training phase, a series of vectors is provided to the network. Each training vector consists of two sub-vectors, an input pattern and a target pattern, and contains a total of 33 values. A series of 25 coded observations, called the input pattern, is presented to the input layer, and the target pattern, which consists of integer target values, is presented to the output layer (Table 6). Since each pattern has two directions of change (e.g., positive and negative shift), the desired output is set to 1 or −1: an output of 1 corresponds to a positive change, and an output of −1 corresponds to a negative change. For example, the target vector of the “natural” pattern would be [1 0 0 0 0 0 0 0], and that of a downward shift would be [0 −1 0 0 0 0 0 0].
o Training dataset
In this study, the simulated data for the neural network model were divided into two subsets: training data and test data. Since there was no prior knowledge of the relative importance of the unnatural patterns, the training set contained approximately an equal number of training samples for each type of pattern. In total, there were 11,000 training samples in the study set. The LVQ network was trained with 4000 samples, 500 for each pattern. The MLP networks were trained with 7000 samples in total, with the same amount of 1000 samples generated and applied for each of the seven MLP networks. In order to produce the training dataset with the specifications mentioned above, a program was written that is capable of producing an unlimited number of natural and unnatural patterns with different parameters (for example, different means and standard deviations).
o Dataset scaling
In order to scale the dataset, the upper and lower boundaries of the input data were first specified from the maximum and minimum values of the input parameters. Then, the data were scaled and mapped into suitable ranges according to the transfer functions used. All of these operations were performed with the scaling method implemented in the program written in MATLAB. In the formula below, A is the original value, A_scale is the normalized value, A_min is the minimum observable value, and A_max is the maximum observable value, while min and max are the bounds of the target interval; A_min and A_max may be estimated depending on the nature of the data.
A_{scale} = \text{min} + \frac{\text{max} - \text{min}}{A_{max} - A_{min}} (A - A_{min}).
In the current application, the intervals used to scale the inputs of the MLPs, considering the maximum and minimum ranges for the parameters of the unnatural patterns (±3σ) and given the mean (μ = 5.4) and standard deviation (σ = 0.1), were [5.1, 5.7], which were scaled with confidence intervals of [3.4, 7.4]. In the LVQ network, the data scaling range (training and testing) was [−5, +5]. In this study, all input and output data of the MLP networks were scaled into the [−1, +1] interval; however, before the output values were scaled into the [−1, +1] interval, each parameter was scaled to its own maximum and minimum values. The different scaling values corresponding to the outputs of the MLPs are visible in Table 6. The maximum cumulative error (MCE) for training the LVQ was 0.047 (188 in 4000 training data), and that for testing was 0.0525. Table 7 shows the MCE and training iterations of each MLP network.
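The scaling above is a standard min-max mapping; a minimal MATLAB sketch, using the ±3σ input range quoted above and the [−1, +1] target interval used for the MLP data, is given below (the sample inputs are simulated).
% Min-max scaling of a data vector A from [Amin, Amax] to a target interval [lo, hi].
scaleTo = @(A, Amin, Amax, lo, hi) lo + (hi - lo) .* (A - Amin) ./ (Amax - Amin);
x = 5.4 + 0.1*randn(25,1);               % example raw inputs
xScaled = scaleTo(x, 5.1, 5.7, -1, 1);   % map the +/-3 sigma range onto [-1, +1]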

4.1.4. Neural Network Test

After training the network with the training dataset, the network was evaluated with the test dataset. In the training phase, network efficiency was increased by minimizing the errors between the actual outputs and the targets. In the test phase, only the input vector was given to the network, and the network was validated through the responses it predicted for these inputs.
Module I, Evaluation of LVQ Network
For each input vector, the LVQ network decides on the situation of the production process; therefore, decision errors may arise. If the network incorrectly recognizes the natural variation of the process as abnormal, it commits a type I error. If it does not recognize an abnormal pattern present in the process, a type II error takes place. An incorrect identification error occurs when random deviations cause the basic patterns, in the early stages of their formation, to have similar behavioral characteristics; the same applies to concurrent patterns. As each unnatural pattern warns of a particular disturbance in the process, incorrect pattern recognition has different costs. Moreover, if a basic abnormal pattern is identified in the form of a concurrent pattern comprising that basic pattern, it is considered an indirect identification. On the other hand, if only one of the unnatural patterns is identified during the simultaneous occurrence of two abnormal patterns, the identification is considered incomplete. The performance of Module I was measured, according to these definitions, with 400 test vectors, each representing 25 samples of the plaster production process and belonging to one of the eight patterns identified by the neural network. We applied each sample as input to the network, compared the network response with the target response, and calculated the network error rate. Table 7 presents the merged results for the 400 test vectors. As can be seen in the table, the maximum LVQ network error in pattern recognition was 0.052 (21 in 400 data), which demonstrates that the proposed model was successful and effective, given the variety of trained patterns in the identification problem.
Module II, Evaluation of the MLP Networks
One of the important issues in neural network training is overfitting of the training data. In other words, the network learns the training data very well, even memorizing the noise (disturbances) in the data, but then has serious problems identifying and generalizing to new data [26]. To solve this problem, when the test error increases while the training error is maintained or still decreasing, training is stopped, and the final parameters are taken as those with the minimum test error. The performance of the MLP networks in Module II was examined with numerous examples, and the results were satisfactory. As seen in Table 8, the calculated cumulative error of each MLP network was less than 0.02, which indicates that Module II was successful in identifying the parameters.
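The early-stopping rule described above can be sketched as follows; the error curves are synthetic stand-ins for one epoch of the backpropagation routine of Appendix B, and the patience value is an illustrative choice.
% Sketch of early stopping: halt when the test (validation) error starts rising.
maxEpochs = 200;  patience = 5;
trainErr = 1 ./ (1:maxEpochs);                                        % synthetic, decreasing
valErr   = 1 ./ (1:maxEpochs) + 0.002*max(0, (1:maxEpochs) - 60);     % rises after epoch 60
bestErr = inf;  bestEpoch = 0;  bad = 0;
for e = 1:maxEpochs
    if valErr(e) < bestErr
        bestErr = valErr(e);  bestEpoch = e;  bad = 0;   % keep the best weights here
    else
        bad = bad + 1;                                   % no improvement this epoch
    end
    if bad >= patience, break; end                       % stop training
end
fprintf('Stopped at epoch %d; best test error at epoch %d\n', e, bestEpoch);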

4.2. Expert Systems

In designing the general framework of the proposed expert system, a rule-based approach was used. The ES assists quality control engineers, and it can also be used for training operators. Therefore, the proposed system runs in three modes: a tutorial mode that offers explanation and training if requested by the user, a status mode that draws conclusions from the evidence and responses provided by the user, and a diagnosis mode that provides inference and reasoning with the rules in the knowledge base.

4.2.1. Knowledge Acquisition

“The knowledge acquisition process includes extracting, transforming, and validating expertise from different information sources for developing a knowledge base repository” [23]. The knowledge used in this research consists of “general knowledge” and “process-specific knowledge”. In this study, to assist the fault diagnosis process, the “Western Electric” tests [44] were utilized as general knowledge, and, to gather process-specific knowledge, “cause and effect” (Ishikawa) diagrams [41] were prepared to investigate the root causes. Using cause and effect diagrams, the most problematic factors in the plaster production process were systematically determined (Figure 9). Next, failure mode and effects analysis (FMEA) (Table 9) was applied as an analytical method that incorporates the technology and the experts’ knowledge to identify and prioritize the foreseeable failure modes of the process in order to eliminate or reduce their occurrence [42]. Finally, the FMEA results were collected in the knowledge base with the aim of providing a complete reference for future issues. As can be seen in Table 9, FMEA uses the risk priority number (RPN) to evaluate the risk level of the process. The RPN is calculated by multiplying the scores of three risk factors: occurrence (O), the frequency of the failure; severity (S), the effect of the disruption on the system; and detection (D), the probability of detecting the failure. FMEA uses a five-point scale (1–5) to score these factors.
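Since the RPN is simply the product O × S × D on the 1–5 scales described above, ranking failure modes takes only a few lines; the failure-mode names and scores below are illustrative placeholders, not the entries of Table 9.
% Illustrative RPN ranking (placeholder scores, not the values of Table 9).
modes = {'clogged fuel nozzle'; 'off-spec raw material'; 'worn kiln refractory'};
O = [3; 2; 4];   % occurrence (1-5)
S = [4; 5; 4];   % severity   (1-5)
D = [2; 3; 3];   % detection  (1-5)
RPN = O .* S .* D;
[RPNsorted, idx] = sort(RPN, 'descend');
table(modes(idx), RPNsorted, 'VariableNames', {'FailureMode', 'RPN'})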

4.2.2. Knowledge Representation

In this study, a rule-based approach was used to codify the experts’ problem-solving knowledge through inference rules of the form IF <a condition or premise> THEN <an action or conclusion>. In total, 60 rules were formulated using technical documentation, operating procedures, and interviews with experts, for interpreting the control charts (X-bar and R) and providing diagnostic expertise.

4.2.3. Implementation

In this study, the desired expert system was designed using three principal modules. The first module is related to knowledge base development, the second module is relevant to interface design and required questions to reach the answer, and the third module is associated with system run and dialogue to the user. Figure 10 shows a schematic of the proposed neural expert system and its components. Below is a brief description of each system component.
  • A knowledge base uses the knowledge of experts and other sources acquired by knowledge engineers to support reliable, complete, and consistent decision-making in a time-critical situation. The knowledge base of the proposed model is an organized collection of facts and heuristics about the plaster production domain, as described briefly below.
    o “Facts” refers to a set of facts relating to the current process state, extracted by a knowledge engineer (KE) from the records of the quality management system, preventive maintenance, calibration, brainstorming sessions, and interviews with experts.
    o “Procedures” focuses on manuals, standards, and procedures. Some examples include technical operation instructions, plaster production standards, and intelligent statistical process control (ISPC) tutorials.
    o “Rules” relates to production rules that represent inferential knowledge learned from experts. In the knowledge base of the proposed model, 60 rules were employed that were extracted from interviews with the experts, from documents including the “Western Electric” tests (general knowledge of the process), and from the NN’s response (case-specific data), extracted by the KE and presented in if–then form. Below is an example of a typical rule, based on specific knowledge of the process (a sketch of one possible encoding of such a rule is given after this list).
IF “diagnosis” is “upward trend,”
AND “failure mode” is “increase of crystal water,”
AND “process index” is “decrease of kiln’s temperature,”
THEN “specific cause” can be “clogged fuel nozzle,”
AND “corrective actions” can be either “cleaning the fuel nozzle, the establishment of preventive maintenance (PM) for the burner, or installing fuel filter.”
  • The inference engine contains the inference strategies and matches the condition part of rules against facts of a specific case to reach a decision or conclusion. A backward chaining inference engine was used, as it is best suited for diagnosis-type systems, in which the codified program is executed with two groups of rules. The first group defines goals (assumed conclusions) for the properties and checks if their values are supported by the existing data. The second group updates the rules and transmits satisfying goals.
  • The working memory acts as a repository for all data including the initial facts of the given case, the user’s responses to system queries, and generated facts (e.g., type, change point, and parameters) derived by the inference engine.
  • The user interface facilitates communication between the user and the proposed expert system through various input methods including dialog boxes and command prompts.
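As a purely illustrative sketch of the representation discussed above, the rule given earlier can be encoded as a MATLAB struct and matched against case facts in a naive forward test; the field names and matching logic are hypothetical and do not reproduce the authors' inference engine, which works by backward chaining.
% Hypothetical encoding of one if-then rule and a naive match against case facts.
rule.conditions = struct('diagnosis', 'upward trend', ...
                         'failure_mode', 'increase of crystal water', ...
                         'process_index', 'decrease of kiln temperature');
rule.conclusion = struct('specific_cause', 'clogged fuel nozzle', ...
                         'corrective_actions', {{'clean the fuel nozzle', ...
                                                 'establish PM for the burner', ...
                                                 'install a fuel filter'}});
facts = struct('diagnosis', 'upward trend', ...
               'failure_mode', 'increase of crystal water', ...
               'process_index', 'decrease of kiln temperature');
conds = fieldnames(rule.conditions);
fires = all(cellfun(@(c) strcmp(rule.conditions.(c), facts.(c)), conds));
if fires
    fprintf('Specific cause: %s\n', rule.conclusion.specific_cause);
    fprintf('Suggested action: %s\n', rule.conclusion.corrective_actions{1});
end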

5. Comparative Analysis and Case Study

5.1. Comparison Study

To assess the accuracy, consistency, and repeatability of the test results, the neural network model was verified. The NN model was verified by comparing the error of the NN algorithm with the error rate of the discriminant analysis (DA) method, its classification counterpart in the statistical approach [1]. The purpose of DA is to find a rule that separates two or more groups of observations from one another, and its most important application is classification. The output of the DA for the test dataset is presented in Table 10.
For example, in Table 10, the number 0 in the first row and the eighth column indicates that no “natural” pattern was mistakenly placed in the systematic patterns class. In the first row and the first column, the number 17 represents the number of patterns correctly assigned to the “natural” type, and the value of 35.42 is the percentage correctly assigned to the “natural” class. In the first row and second column, 15 is the number of “natural” patterns that were mistakenly classified as “shift,” and the value of 31.25 is the corresponding percentage of “shift” errors in the “natural” class. Table 11 shows the errors for each class: the “rate” line lists the error for each category, the “priors” row gives the weight for each type, and “total” (0.3325) indicates the total error of the DA method for the test dataset. Table 12 and Table 13 provide the output for the training dataset. As shown in the diagrams below (Figure 11 and Figure 12), the NN outperformed the DA method in terms of performance and accuracy.
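For readers who wish to reproduce a comparison of this kind in MATLAB rather than SAS, the Statistics and Machine Learning Toolbox function fitcdiscr provides discriminant analysis; the sketch below classifies synthetic two-class windows and reports a confusion matrix. It is only an analogue of the Proc Discrim runs used in this study, which involved eight pattern classes and the simulated datasets described in Section 4.
% Illustrative discriminant analysis on synthetic data (stand-in for Proc Discrim).
rng(0);
Xnatural = randn(200, 25);                                       % natural 25-point windows
Xshift   = randn(200, 25) + [zeros(200,10), 1.5*ones(200,15)];   % shift after point 10
X = [Xnatural; Xshift];
y = [zeros(200,1); ones(200,1)];
cv   = cvpartition(y, 'HoldOut', 0.25);                          % 75/25 train-test split
mdl  = fitcdiscr(X(training(cv),:), y(training(cv)));
yhat = predict(mdl, X(test(cv),:));
C    = confusionmat(y(test(cv)), yhat)                           % rows: true, columns: predicted
errRate = 1 - sum(diag(C))/sum(C(:))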

5.2. Case Study

In this section, to demonstrate the applicability and capability of the proposed model, a case study in a plaster-producing company is presented. In a traditional statistical process control system, after gathering the data, the following steps are done:
  • Plotting the sequence of process measurements (observations).
  • Setting UCL and LCL.
  • Determining the process capability (Cpk).
  • Performing the “normality test” on the data.
  • Interpreting both R and X-bar chart for statistical control.
The proposed ISPC model, designed in MATLAB, can adequately perform the above operations (Figure 13). Here, according to the plaster-producing experts’ opinion, if “Cpk > 1,” the chart is considered the baseline for interpreting the process. As can be seen in the ISPC implementation flow chart, after collecting the process data, the baseline chart should be established by eliminating points beyond the control limits and replacing them with new data. As shown in Figure 14, the normality test was performed, the normality assumption was valid, and “Cpk > 1” in the in-control mode. After drawing the baseline chart, the actual data of the process were inserted, checked, and analyzed with the desired control charts. As illustrated in Figure 15, although there is no “out of control” mode within the “R-chart,” the process was unable to meet specifications because “Cpk < 1,” being equal to 0.81 (Figure 16). On the other hand, by choosing the “X-bar chart” (Figure 17), the user receives the following error message: “X-bar chart is out of control” (Figure 18). Then, the ES, using the “Western Electric tests,” announces that “out of control” modes may have the following reasons: “carelessness in the measurement, machinery stop, or off-spec materials”. Later, the user receives a suggestion message from the ES to check the unnatural patterns identified by the NN (Figure 19). As can be seen in Figure 20, not only was the “downward shift pattern” in the “X-bar chart” identified by the NN, but the “starting point” of the unnatural pattern was estimated (point 6), and the “shift magnitude parameter” (−0.161) was also determined. In this scenario, because of the appearance of a “downward shift pattern” and based on the user’s observation, which was “kiln body scarlet,” the reason for the deviation was recognized as the “temperature exchange of the kiln with the environment due to the loss of refractory and thickness.” “Establishment of maintenance and inspection of the refractory” was recommended as the corrective or preventive activity. After taking the corrective actions and re-sampling the process (Figure 21), “out of control” modes no longer appeared in the control charts (Figure 22), and, furthermore, the “process capability” increased from Cpk = 0.81 to Cpk = 1.15 (Figure 23). The experimental results show that corrective actions could significantly contribute to process recovery. Thus, the proposed fault diagnosis system can be used to support decision-makers in plaster production.
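The traditional SPC steps listed at the start of this subsection can be reproduced in a few lines of MATLAB; in the sketch below the subgroup data are simulated, the chart factors are the standard X-bar/R constants for subgroups of size 5, and the specification limits are illustrative values rather than the case-study limits, so the output is not that of the authors' ISPC program.
% Sketch of the traditional SPC steps (simulated data, subgroup size 5).
rng(0);
m = 25;  n = 5;                                % 25 subgroups of 5 observations
X = 5.4 + 0.1*randn(m, n);                     % simulated measurements
xbar = mean(X, 2);  R = max(X,[],2) - min(X,[],2);
A2 = 0.577;  D3 = 0;  D4 = 2.114;              % standard chart factors for n = 5
UCLx = mean(xbar) + A2*mean(R);  LCLx = mean(xbar) - A2*mean(R);
UCLr = D4*mean(R);               LCLr = D3*mean(R);
% Process capability against illustrative specification limits.
LSL = 5.0;  USL = 5.8;
sigmaHat = mean(R) / 2.326;                    % d2 = 2.326 for n = 5
Cpk = min((USL - mean(xbar))/(3*sigmaHat), (mean(xbar) - LSL)/(3*sigmaHat));
% Normality check on the raw data (Lilliefors test, Statistics Toolbox).
h = lillietest(X(:));                          % h = 0: normality not rejected
fprintf('UCLx = %.3f, LCLx = %.3f, Cpk = %.2f, normality rejected: %d\n', UCLx, LCLx, Cpk, h);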

6. Conclusions

This paper targeted one of the most challenging subjects in smart manufacturing: quality control at the shop floor level in light of emerging technologies. There are many conceptual models and general recommendations when discussing the new quality paradigm associated with I4.0, but there are relatively few works in action. The hybrid model proposed in this work supports the troubleshooting of the plaster production process, which is a complex manufacturing system. To provide both descriptive and prescriptive capabilities, the NN and ES were integrated, where the NN determines the fault areas and the ES recommends corrective actions.
The main achievements and contributions of this work are as follows:
  • Successful implementation of Quality 4.0 to blend traditional quality control models based on CCPs with an intelligent system at the shop floor level.
  • According to Table 3 and Table 5, the diagnosis of behavioral patterns coming from Module I is acceptable, and parameters of corresponding patterns estimated by Module II are effective and reliable.
  • A multitask program was developed for the generation, presentation, and encoding of the neural network input data.
  • A wide range of data was used in training the NNs to ensure stable behavior and performance.
  • Using the integrated system, most SPC requirements, such as drawing the baseline chart, checking the X-bar and R charts for being under control, and calculating Cpk, were fulfilled.
  • LVQ was used for pattern classification together with MLPs in parallel, which helps simultaneously exploit the competitive power of the LVQ network and the interoperability of the multilayer perceptron networks.
  • The result of the case study shows the improvement of process capability, while control charts did not show any out of control mode after following the corrective actions; thus, the capability of the proposed model to serve as a reliable decision support system (DSS) was confirmed.
This paper shows the capability of I4.0 to change the quality paradigm in factories of the future. The key element is the level of intelligence of the system, which leads to smart manufacturing. There is no doubt that emerging technologies will shift quality processes to a different level, in which monitoring, fault detection, root-cause analysis, and even corrective actions and strategies become autonomous.
For further research, there are many potential areas of working, as outlined below.
  • Developing and comparing the result of new models based on adaptive neuro-fuzzy inference systems (ANFIS).
  • Using particle swarm optimization (PSO) to improve performance.
  • Using some techniques and algorithms such as deep learning to increase the efficiency of the model.
  • Using collective sensor networks and Internet of things (IoT) platforms to develop a real-time smart quality control system.
  • Connecting smart quality to new services such as predictive maintenance, to achieve smart, collaborative devices and to support new products and production lines based on intelligent quality information.
The models proposed in this paper are software-independent; thus, free software and modern scripting languages such as Python, Ruby, etc. could be utilized for the same purpose.
This paper presented a real case of Quality 4.0 in action to show the capabilities and applicability of emerging technologies and intelligent algorithms in shifting quality control to a new stage, and it represents the initial step of a long journey.

Author Contributions

Conceptualization, J.R.; methodology, J.R. and J.J.; writing—drafting Section 2, J.R. and J.J.; writing—drafting Section 3, J.R. and J.J.; writing—drafting Section 4, J.R.; writing—drafting Section 5, J.R.; writing—drafting Section 6, J.J.; writing—reviewing and editing, J.R. and J.J. All authors read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the Portuguese "Fundação para a Ciência e a Tecnologia" (FCT) in the context of the Center of Technology and Systems CTS/UNINOVA/FCT/NOVA, reference UIDB/00066/2020.

Acknowledgments

This work was supported by the Portuguese Foundation for Science and Technology (FCT) and the Center of Technology and Systems (CTS).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

% Appendix A. LVQ training loop (Module I): one pass over the training set.
% Data     - Ntraining-by-Ninput matrix of coded input windows
% W        - Ninput-by-Nhidden matrix of prototype (Kohonen) weight vectors
% V        - Nhidden-by-Noutput matrix mapping prototypes to pattern classes
% RealOut  - Ntraining-by-Noutput matrix of target class vectors
% Lambda   - learning rate
for l = 1:Ntraining
    % Euclidean distance between the input vector and every prototype.
    for i = 1:Nhidden
        for j = 1:Ninput
            D(i,j) = (Data(l,j)-W(j,i))^2;
        end
        Distance(i,:) = [sqrt(sum(D(i,:),2)),i];
    end
    % Competition: the prototype with the smallest distance wins.
    MinDis = min(Distance(:,1));
    ONN = find(Distance(:,1) == MinDis);
    for i = 1:Nhidden
        if i == ONN
            HiddenOut(i) = 1;
        else
            HiddenOut(i) = 0;
        end
    end
    NetOut(l,:) = HiddenOut*V;
    % Move the winning prototype toward the input if the class is correct,
    % away from it otherwise (LVQ update rule).
    if NetOut(l,:) == RealOut(l,:)
        W(:,ONN) = W(:,ONN) + Lambda*(transpose(Data(l,:))-W(:,ONN));
    else
        W(:,ONN) = W(:,ONN) - Lambda*(transpose(Data(l,:))-W(:,ONN));
    end
    Er(l) = sum(abs(RealOut(l,:)-NetOut(l,:)),2);
end
t = t + 1;
Error(t) = sum(Er,2);
% Decay the learning rate once the cumulative error falls below a threshold.
if Error(t) < 500
    Lambda = 0.99*Lambda;
end

Appendix B

% Appendix B. Backpropagation training loop for one MLP network (Module II).
% InData  - training inputs (patterns in rows, bias column included)
% OutData - target outputs (patterns in rows)
% V0, W0  - input-to-hidden and hidden-to-output weight matrices
% A       - gain of the bipolar sigmoid transfer function
% Lambda  - learning rate; Emax - target cumulative error
while Error(l) > Emax
    l = l + 1;
    Epoch(l) = l;
    % Forward pass through the hidden layer (bipolar sigmoid).
    HO = V0*transpose(InData);
    HiddenOut = (1-exp(-A*HO))./(1 + exp(-A*HO));
    DiffHiddenOut = (A/2)*(1-HiddenOut.*HiddenOut);
    HiddenOut(Nshifthidden,:) = 1;   % bias neuron of the hidden layer
    DiffHiddenOut(Nshifthidden,:) = (A/2)*(1-HiddenOut(Nshifthidden,:).*HiddenOut(Nshifthidden,:));
    % Forward pass through the output layer.
    OO = W0*HiddenOut;
    Output = (1-exp(-A*OO))./(1 + exp(-A*OO));
    DiffOutput = (A/2)*(1-Output.*Output);
    % Backward pass: output and hidden deltas.
    DeltaO = (transpose(OutData)-Output).*DiffOutput;
    DeltaH = DiffHiddenOut.*transpose(transpose(DeltaO)*W0);
    E = (OutData-transpose(Output)).*(OutData-transpose(Output));
    % Weight updates for the input-to-hidden layer.
    for i = 1:Nshifthidden
        for j = 1:Nshiftinput
            V(i,j) = V0(i,j) + Lambda*DeltaH(i,:)*InData(:,j);
        end
    end
    % Weight updates for the hidden-to-output layer.
    for i = 1:Nshiftoutput
        for j = 1:Nshifthidden
            W(i,j) = W0(i,j) + Lambda*DeltaO(i,:)*HiddenOut(j,:)';
        end
    end
    V0 = V; W0 = W;
    % Cumulative squared error (equation above).
    Error(l) = 0.5*sum(sum(E,1),2);
end

References

  1. Bag, M.; Bengal, W.; Gauri, S.K.; Bengal, W.; Chakraborty, S.; Bengal, W. Recognition of Control Chart Patterns using Discriminant Analysis of Shape Features. In Proceedings of the International Conference on Industrial Engineering and Operations Management, Dhaka, Bangladesh, 9–10 January 2010; pp. 88–93. [Google Scholar]
  2. Lu, C.; Shao, Y.E.; Li, C. Recognition of Concurrent Control Chart Patterns by Integrating ICA and SVM. Appl. Math. Inf. Sci. 2014, 8, 681–689. [Google Scholar] [CrossRef]
  3. Awadalla, M.; Sadek, M.A. Spiking neural network-based control chart pattern recognition. Alex. Eng. J. 2012, 51, 27–35. [Google Scholar] [CrossRef] [Green Version]
  4. Ebrahimzadeh, A.; Ranaee, V. Control chart pattern recognition using an optimized neural network and efficient features. ISA Trans. 2010, 49, 387–393. [Google Scholar] [CrossRef] [PubMed]
  5. Demirli, K.; Vijayakumar, S. Fuzzy logic based assignable cause diagnosis using control chart patterns. Inf. Sci. 2010, 180, 3258–3272. [Google Scholar] [CrossRef] [Green Version]
  6. Noskievičová, D. Complex Control Chart Interpretation. Int. J. Eng. Bus. Manag. 2013, 5, 1–7. [Google Scholar] [CrossRef]
  7. Lavangnananda, K.; Sawasdimongkol, P. Neural Network Classifier of Time Series: A Case Study of Symbolic Representation Preprocessing for Control Chart Patterns. In Proceedings of the 2012 8th International Conference on Natural Computation, Chongqing, China, 29–31 May 2012; pp. 344–349. [Google Scholar]
  8. Hachicha, W.; Ghorbel, A. A survey of control-chart pattern-recognition literature (1991–2010) based on a new conceptual classification scheme. Comput. Ind. Eng. 2012, 63, 204–222. [Google Scholar] [CrossRef]
  9. Haghtalab, S.; Xanthopoulos, P.; Madani, K. A robust unsupervised consensus control chart pattern recognition framework. Expert Syst. Appl. 2015, 42, 6767–6776. [Google Scholar] [CrossRef]
  10. Wang, C.; Guo, R.; Chiang, M.; Wong, J.Y. Decision tree based control chart pattern recognition. Int. J. Prod. Res. 2008, 46, 4889–4901. [Google Scholar] [CrossRef]
  11. Guh, R. Simultaneous process mean and variance monitoring using artificial neural networks. Comput. Ind. Eng. 2010, 58, 739–753. [Google Scholar] [CrossRef]
  12. Ramezani, J.; Jassbi, J. A hybrid expert decision support system based on artificial neural networks in process control of plaster production—An industry 4.0 perspective. In Technological Innovation for Smart Systems; IFIP AICT; Springer: Cham, Switzerland, 2017; pp. 55–71. [Google Scholar] [CrossRef] [Green Version]
  13. Hassan, A. An improved scheme for online recognition of control chart patterns. Int. J. Comput. Aided Eng. Technol. 2014, 3, 309–321. [Google Scholar] [CrossRef]
  14. Das, P.; Banerjee, I. A hybrid detection system of control chart patterns using cascaded SVM and neural network–based detector. Neural Comput. Appl. 2011, 20, 287–296. [Google Scholar] [CrossRef]
  15. Demircioglu Diren, D.; Boran, S.; Cil, I. Integration of Machine Learning Techniques and Control Charts for Multivariate Processes. Sci. Iran. 2019. [Google Scholar] [CrossRef] [Green Version]
  16. Lavangnananda, K.; Khamchai, S. Capability of control chart patterns classifiers on various noise levels. Procedia Comput. Sci. 2015, 69, 26–35. [Google Scholar] [CrossRef] [Green Version]
  17. Fuqua, D.; Razzaghi, T. A cost-sensitive convolution neural network learning for control chart pattern recognition. Expert Syst. Appl. 2020, 150, 113275. [Google Scholar] [CrossRef]
  18. Biehl, M.; Hammer, B.; Villmann, T. Prototype-based models in machine learning. WIRE Cogn. Sci. 2016, 7, 92–111. [Google Scholar] [CrossRef] [PubMed]
  19. El-midany, T.T.; El-baz, M.A.; Abd-elwahed, M.S. A proposed framework for control chart pattern recognition in multivariate process using artificial neural networks. Expert Syst. Appl. 2010, 37, 1035–1042. [Google Scholar] [CrossRef]
  20. Gauri, S.K. Control chart pattern recognition using feature-based learning vector quantization. Int. J. Adv. Manuf. Technol. 2010, 48, 1061–1073. [Google Scholar] [CrossRef]
  21. Ghiasabadi, A.; Noorossana, R.; Saghaei, A. Identifying change point of a non-random pattern on control chart using artificial neural networks. Int. J. Adv. Manuf. Technol. 2013, 67, 1623–1630. [Google Scholar] [CrossRef]
  22. Bayat, A.B.; Gharehkhani, A.; Mohajeran, A.; Addeh, J. Control Chart Patterns Recognition Using Optimized Adaptive Neuro-Fuzzy Inference System and Wavelet Analysis. J. Eng. Technol. 2013, 3, 76–81. [Google Scholar]
  23. Bag, M.; Gauri, S.K.; Chakraborty, S. An expert system for control chart pattern recognition. Int. J. Adv. Manuf. Technol. 2012, 62, 291–301. [Google Scholar] [CrossRef]
  24. Xanthopoulos, P.; Razzaghi, T. A weighted support vector machine method for control chart pattern recognition. Comput. Ind. Eng. 2014, 70, 134–149. [Google Scholar]
  25. Xie, L.; Gu, N.; Li, D.; Cao, Z.; Tan, M.; Nahavandi, S. Concurrent control chart patterns recognition with singular spectrum analysis and support vector machine. Comput. Ind. Eng. 2013, 64, 280–289. [Google Scholar]
  26. Lin, S.; Guh, R.; Shiue, Y. Effective recognition of control chart patterns in autocorrelated data using a support vector machine based approach. Comput. Ind. Eng. 2011, 61, 1123–1134. [Google Scholar] [CrossRef]
  27. Zafar, R.F.; Mahmood, T.; Abbas, N.; Riaz, M.; Hussain, Z. A progressive approach to joint monitoring of process parameters. Comput. Ind. Eng. 2018, 115, 253–268. [Google Scholar] [CrossRef]
  28. Villmann, T.; Bohnsack, A.; Kaden, M. Can learning vector quantization be an alternative to SVM and deep learning? J. Artif. Intell. Soft Comput. Res. 2017, 7, 65–81. [Google Scholar] [CrossRef] [Green Version]
  29. Zarandi, M.H.F.; Alaeddini, A. A general fuzzy-statistical clustering approach for estimating the time of change in variable sampling control charts. Inf. Sci. 2010, 180, 3033–3044. [Google Scholar] [CrossRef]
  30. Radziwill, N. Let’s Get Digital: The many ways the fourth industrial revolution is reshaping the way we think about quality. Qual. Prog. 2018, 24–29. [Google Scholar]
  31. LNS Research. Quality 4.0 Impact and Strategy Handbook eBook. blog.lnsresearch.com. 2017. Available online: https://blog.lnsresearch.com/quality40ebook (accessed on 10 May 2019).
  32. Nenadál, J. The New EFQM Model: What is Really New and Could Be Considered as a Suitable Tool with Respect to Quality 4.0 Concept? Qual. Innov. Prosper. 2020, 24, 17–28. [Google Scholar] [CrossRef] [Green Version]
  33. Madsen, D.Ø. The Emergence and Rise of Industry 4.0 Viewed through the Lens of Management Fashion Theory. Adm. Sci. 2019, 9, 71. [Google Scholar] [CrossRef] [Green Version]
  34. Ramezani, J.; Camarinha-Matos, L.M. A collaborative approach to resilient and antifragile business ecosystems. Procedia Comput. Sci. 2019, 162, 604–613. [Google Scholar] [CrossRef]
  35. Dallasega, P.; Rauch, E.; Linder, C. Industry 4.0 as an enabler of proximity for construction supply chains: A systematic literature review. Comput. Ind. 2018, 99, 205–225. [Google Scholar] [CrossRef]
  36. Maskuriy, R.; Selamat, A.; Ali, K.N.; Maresova, P.; Krejcar, O. Industry 4.0 for the Construction Industry—How Ready Is the Industry? Appl. Sci. 2019, 9, 2819. [Google Scholar] [CrossRef] [Green Version]
  37. Ault, J.H.; Jenkins, J. Control Charts as a Productivity Improvement Tool in Construction. Master’s Thesis, Purdue University, West Lafayette, IN, USA, 2013. [Google Scholar]
  38. Camarinha-Matos, L.M.; Fornasiero, R.; Ramezani, J.; Ferrada, F. Collaborative Networks: A Pillar of Digital Transformation. Appl. Sci. 2019, 9, 5431. [Google Scholar] [CrossRef] [Green Version]
  39. Ramezani, J.; Camarinha-Matos, L.M. Novel Approaches to Handle Disruptions in Business Ecosystems. In Technological Innovation for Industry and Service Systems; DoCEIS 2019; IFIP AICT; Springer: Cham, Switzerland, 2019; pp. 43–57. [Google Scholar]
  40. Petersen, K.; Vakkalanka, S.; Kuzniarz, L. Guidelines for conducting systematic mapping studies in software engineering: An update. Inf. Softw. Technol. 2015, 64, 1–18. [Google Scholar] [CrossRef]
  41. Tague, N.R. The Quality Toolbox, American Society for Quality; Quality Press: Milwaukee, WI, USA, 2008. [Google Scholar]
  42. Lipol, L.S.; Hag, J. Risk Analysis Method: FMEA/FMECA in the Organizations. Int. J. Basic Appl. Sci. 2011, 11, 5. [Google Scholar]
  43. Ramchoun, H.; Idrissi, M.A.J.; Ghanou, Y.; Ettaouil, M. New modeling of multilayer perceptron architecture optimization with regularization: An application to pattern classification. IAENG Int. J. Comput. Sci. 2017, 44, 261–269. [Google Scholar]
  44. Western Electric Company. Statistical Quality Control Handbook; Western Electric Company: New York, NY, USA, 1956. [Google Scholar]
Figure 1. Typical patterns in control charts.
Figure 2. Combination of NN and ES: A neural expert system.
Figure 3. Quality 4.0: integration of traditional statistical process control (SPC) with Industry 4.0.
Figure 4. The flow diagram of the study procedure.
Figure 5. The structure of the neural expert system.
Figure 6. (a) SPC subsystem; (b) pattern recognition subsystem; (c) reasoning subsystem.
Figure 7. Learning vector quantization (LVQ) network.
Figure 8. MLP network for shift pattern.
Figure 9. Cause and effect diagram: (a) for increase of crystal water; (b) for decrease of crystal water.
Figure 10. The proposed hybrid fault diagnosis system scheme (ISPC).
Figure 11. Comparison of DA and neural network (NN) error for each pattern in the test dataset.
Figure 12. (a) Comparison of DA and NN error in the test dataset; (b) comparison of DA and NN error in the training dataset.
Figure 13. ISPC implementation flow chart.
Figure 14. Normality test and process capability calculation.
Figure 15. X-bar chart in “out of control” mode.
Figure 16. Process capability calculation.
Figure 17. Request for analyzing the X-bar chart.
Figure 18. X-bar chart reasoning by Western Electric tests.
Figure 19. Pattern recognition in X-bar chart by NN.
Figure 20. Determining Cpk, unnatural pattern, parameters, and recovery actions.
Figure 21. Inserting new dataset.
Figure 22. X-bar and R charts in control mode.
Figure 23. Capability of the process after performing corrective actions.
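Figures 14, 16, and 23 report the process capability of the critical parameter before and after the corrective actions. For reference, the indices behind such screens are presumably the standard ones, Cp = (USL − LSL)/(6σ) and Cpk = min((USL − μ)/(3σ), (μ − LSL)/(3σ)). The sketch below is illustrative only; the specification limits and measurements are placeholders, not the plant’s actual values.

```python
import numpy as np

def capability(data, lsl, usl):
    """Process capability indices Cp and Cpk for an approximately normal sample."""
    mu, sigma = np.mean(data), np.std(data, ddof=1)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))
    return cp, cpk

# Illustrative crystal-water measurements and specification limits (placeholders).
measurements = np.random.default_rng(7).normal(loc=5.2, scale=0.15, size=100)
cp, cpk = capability(measurements, lsl=4.5, usl=6.0)
print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")
```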
Table 1. Most recommended rules for detecting typical unnatural patterns.
No. | Unnatural Pattern | Characteristic Signs in the Control Chart
1 | Over-control | Single point beyond the control limits (above +3σ or below −3σ).
2 | Shift | Sudden change: a series of 9 points above or below the central line.
3 | Trend | Continuous movement (rise or fall) of 6 consecutive points.
4 | Systematic | Point-to-point fluctuation: 14 consecutive points alternating up and down.
5 | Cycle | Periodic peaks and troughs: 4 out of 5 points above +2σ or below −2σ.
6 | Mixtures | A run of consecutive points on both sides of the central line, all far from it: 8 points in a row more than 1σ from the centerline.
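The rules of Table 1 lend themselves to a direct programmatic check. The following is a minimal Python sketch (not the authors’ implementation), assuming the subgroup statistics have already been standardized to sigma units, i.e., z = (x̄ − CL)/σ_x̄:

```python
import numpy as np

def _run(condition, length):
    """True if `condition` (a boolean sequence) holds at `length` consecutive positions."""
    count = 0
    for c in condition:
        count = count + 1 if c else 0
        if count >= length:
            return True
    return False

def table1_signals(z):
    """Check the Table 1 rules on standardized chart statistics z (sigma units)."""
    z = np.asarray(z, dtype=float)
    d = np.diff(z)
    fired = []
    if np.any(np.abs(z) > 3):                        # 1. single point beyond +/-3 sigma
        fired.append("over-control")
    if _run(z > 0, 9) or _run(z < 0, 9):             # 2. 9 points on one side of the centerline
        fired.append("shift")
    if _run(d > 0, 5) or _run(d < 0, 5):             # 3. 6 consecutive rising/falling points
        fired.append("trend")
    if _run(d[:-1] * d[1:] < 0, 12):                 # 4. 14 points alternating up and down
        fired.append("systematic")
    for i in range(len(z) - 4):                      # 5. 4 of 5 points beyond 2 sigma, same side
        w = z[i:i + 5]
        if (w > 2).sum() >= 4 or (w < -2).sum() >= 4:
            fired.append("cycle")
            break
    for i in range(len(z) - 7):                      # 6. 8 points in a row beyond 1 sigma, both sides
        w = z[i:i + 8]
        if np.all(np.abs(w) > 1) and (w > 0).any() and (w < 0).any():
            fired.append("mixture")
            break
    return fired

# Example: an abrupt upward step in the data triggers only the "shift" rule.
print(table1_signals([0.2, -0.1, 0.3, 1.4, 1.2, 1.6, 1.1, 1.3, 1.5, 1.2, 1.4, 1.1, 1.3]))
```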
Table 2. Related works.
Model | Advantages | Disadvantages | Related Works
KNN | Very fast training (instance-based learning); very easy to implement. | Weak with large datasets; memory limitation; sensitive to noisy data. | [15,18]
DT | Simple to understand, interpret, and generate rules. | May suffer from overfitting; unstable classifier. | [15,16]
NN | Does not need precise knowledge of the interactions between parameters; learns to recognize patterns during the training phase; able to handle noisy data; high performance. | Topology cannot be systematically determined; training is prolonged and processing large NNs is difficult; needs a large amount of useful training samples; prone to overfitting. | [3,4,7,10,11,12,13,14,15,19,20,21]
ES | Availability, consistency, extensibility, and testability of the information; rules can be updated easily. | Incorrect recognition of patterns with similar statistical properties (feature overlap). | [9,12,14,19,22,23]
SVM | Easily handles nonlinear, un/semi-structured, and high-dimensional data; less prone to overfitting than other methods; with an appropriate kernel function, complex problems can be solved. | Computationally expensive; the final model is difficult to understand and interpret; long training time for large datasets. | [2,14,24,25,26]
Fuzzy | High precision. | Low speed and long run time of the system. | [5,14,21,26]
Table 3. Simulator functions of CCPs and the range of corresponding parameters’ changes.
Pattern Type | Simulator Function | Parameter Change Range
Natural | x(t) = n(t) | –
Shift (Sh.) | x(t) = n(t) + u × b ¹ | b = [1σ~3σ] ⇒ [0.1, 0.3]; b = [−3σ~−1σ] ⇒ [−0.3, −0.1]
Trend (Tr.) | x(t) = n(t) + s ² × t | s = [0.1σ~0.3σ] ⇒ [0.01, 0.03]; s = [−0.3σ~−0.1σ] ⇒ [−0.03, −0.01]
Cycles (Cyc.) | x(t) = n(t) + l ³ × sin(2πt/T ⁴) | l = [1σ~3σ] ⇒ [0.1, 0.3]; T = 8, 12, …
Systematic (Sys.) | x(t) = n(t) + g ⁵ × cos(πt) | g = [1σ~3σ] ⇒ [0.1, 0.3]
Shift + Trend (Sh. + Tr.) | x(t) = n(t) + u × b + s × t | b = [1σ~3σ] ⇒ [0.1, 0.3]; s = [0.1σ~0.3σ] ⇒ [0.01, 0.03]; b = [−3σ~−1σ] ⇒ [−0.3, −0.1]; s = [−0.3σ~−0.1σ] ⇒ [−0.03, −0.01]
Shift + Cycle (Sh. + Cyc.) | x(t) = n(t) + u × b + l × sin(2πt/T) | b = [1σ~3σ] ⇒ [0.1, 0.3]; l = [1σ~3σ] ⇒ [0.1, 0.3]; T = 8, 12, …
Trend + Cycle (Tr. + Cyc.) | x(t) = n(t) + s × t + l × sin(2πt/T) | s = [0.1σ~0.3σ] ⇒ [0.01, 0.03]; l = [1σ~3σ] ⇒ [0.1, 0.3]; T = 8, 12, …
¹ Shift magnitude (b). ² Trend slope (s). ³ Amplitude (l). ⁴ Period (T). ⁵ Magnitude of variations (g).
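The simulator functions in Table 3 translate directly into a data generator. Below is a minimal sketch under two stated assumptions: n(t) is zero-mean Gaussian noise with σ = 0.1 (consistent with the 1σ ⇒ 0.1 scaling in the table), and u is a unit step at the shift point. It is illustrative rather than the authors’ simulator.

```python
import numpy as np

rng = np.random.default_rng(42)
SIGMA = 0.1   # process sigma; Table 3 maps 1 sigma to 0.1 on the scaled axis

def n(t):
    """Common-cause noise n(t)."""
    return rng.normal(0.0, SIGMA, size=len(t))

def shift(t, b=0.2, t0=None):
    """x(t) = n(t) + u*b, with u assumed to be a unit step at sample t0."""
    t0 = len(t) // 2 if t0 is None else t0
    u = (np.arange(len(t)) >= t0).astype(float)
    return n(t) + u * b

def trend(t, s=0.02):
    """x(t) = n(t) + s*t."""
    return n(t) + s * t

def cycle(t, l=0.2, T=8):
    """x(t) = n(t) + l*sin(2*pi*t/T)."""
    return n(t) + l * np.sin(2 * np.pi * t / T)

def systematic(t, g=0.2):
    """x(t) = n(t) + g*cos(pi*t)."""
    return n(t) + g * np.cos(np.pi * t)

def shift_trend(t, b=0.2, s=0.02, t0=None):
    """x(t) = n(t) + u*b + s*t."""
    return shift(t, b, t0) + s * t

# A 25-point window, matching the 25 inputs of the recognition networks (Tables 4 and 5).
t = np.arange(25, dtype=float)
window = shift_trend(t, b=0.25, s=0.02)
```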
Table 4. Number of inputs, hidden, and output layer neurons.
Layer | Sub-network | Neurons
Input | – | 25
Hidden layer | Natural (Network 1) | 1
Hidden layer | Upward Shift (Network 2) | 12
Hidden layer | Downward Shift (Network 2) | 12
Hidden layer | Upward Trend (Network 3) | 18
Hidden layer | Downward Trend (Network 3) | 18
Hidden layer | Cycles (Network 4) | 20
Hidden layer | Systematic (Network 5) | 4
Hidden layer | Upward Shift + Upward Trend (Network 6) | 18
Hidden layer | Downward Shift + Downward Trend (Network 6) | 18
Hidden layer | Shift + Cycles (Network 7) | 27
Hidden layer | Trend + Cycles (Network 8) | 27
Hidden layer | Total | 175
Output | – | 8
Table 5. Number of input, hidden, and output layer neurons.
Network Name | Input | Hidden Layer | Output
Shift | 25 + 1 (bias) = 26 | 14 | 2
Trend | ″ | 15 | 2
Cycles | ″ | 22 | 3
Systematic | ″ | 17 | 2
Shift + Trend | ″ | 25 | 3
Shift + Cycle | ″ | 23 | 4
Trend + Cycle | ″ | 21 | 4
″ indicates the same value as the row above.
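As a concrete reading of Table 5, the sketch below assembles the Shift network (26 inputs including the bias, 14 hidden neurons, 2 outputs) as a plain feed-forward pass. It is a hypothetical reconstruction, not the authors’ implementation; the sigmoid activation and the random (untrained) weights are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class MLP:
    """Single-hidden-layer perceptron: n_in -> n_hidden -> n_out."""
    def __init__(self, n_in, n_hidden, n_out):
        # Small random weights; in practice these would come from training.
        self.W1 = rng.normal(0, 0.1, (n_hidden, n_in))
        self.W2 = rng.normal(0, 0.1, (n_out, n_hidden))

    def forward(self, x):
        h = sigmoid(self.W1 @ x)      # hidden activations
        return sigmoid(self.W2 @ h)   # outputs (e.g., scaled shift parameters)

# Shift network from Table 5: 25 samples + 1 bias input, 14 hidden neurons, 2 outputs.
shift_net = MLP(n_in=26, n_hidden=14, n_out=2)
window = np.concatenate([rng.normal(0, 0.1, 25), [1.0]])  # 25-point window + bias term
print(shift_net.forward(window))
```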
Table 6. Scaling range for outputs of MLPs.
Pattern Type | Shift | Trend | Cycle | Systematic | Shift–Trend | Shift–Cycle | Trend–Cycle
Output 1 (min) | −0.4 | −0.04 | 0.01 | 0.01 | −0.4 | 0.01 | 0.001
Output 1 (max) | 0.4 | 0.04 | 0.4 | 0.4 | 0.4 | 0.4 | 0.04
Output 2 (min) | 0 | 0 | 6 | 0 | −0.04 | 0.01 | 0.01
Output 2 (max) | 12 | 12 | 14 | 12 | 0.04 | 0.4 | 0.4
Output 3 (min) | – | – | 0 | – | 0 | 6 | 6
Output 3 (max) | – | – | 12 | – | 12 | 14 | 14
Output 4 (min) | – | – | – | – | – | 0 | 0
Output 4 (max) | – | – | – | – | – | 12 | 12
Table 7. Module I, evaluation results.
Pattern Type | Direct Identification | Identification (Incomplete/Indirect) | Wrong Identification | Type I Error | Type II Error
Shift | 41 | 6/50 = 0.12 | 3/50 = 0.06 | 0.00 | –
Trend | 48 | 2/50 = 0.04 | 0.00 | 0.00 | 1/50 = 0.02
Cycle | 48 | 1/50 = 0.02 | 1/50 = 0.02 | 0.00 | 1/50 = 0.02
Systematic | 50 | 0.00 | 0.00 | 0.00 | 0.00
Shift + Trend | 50 | 0.00 | 0.00 | 0.00 | 0.00
Shift + Cycle | 50 | 0.00 | 0.00 | 0.00 | 0.00
Trend + Cycle | 44 | 0.00 | 6/50 = 0.06 | 0.00 | 0.00
Total Error | 21/400 = 0.052 | 9/400 = 0.022 | 10/400 = 0.025 | 0.00 | 2/400 = 0.005
Table 8. Module II, evaluation results.
Network Name | Maximum Cumulative Error (MCE) | Training Iterations | Minimum Number in Each Training | Hidden Layer Neurons | Output Neurons | Error of MLPs
Shift | 10 | 10 | 10,537 | 14 | 2 | 0.01
Trend | 18 | 13 | 16,423 | 15 | 2 | 0.018
Cycle | 25 | 10 | 35,360 | 22 | 2 | 0.016
Systematic | 12 | 10 | 48,343 | 17 | 2 | 0.012
Shift + Trend | 18 | 10 | 48,343 | 25 | 3 | 0.012
Shift + Cycle | 25 | 11 | 46,703 | 23 | 4 | 0.012
Trend + Cycle | 28 | 10 | 37,298 | 21 | 4 | 0.014
Table 9. Failure modes and effects analysis (FMEA) form for the critical parameter (crystal water).
No. | Failure Mode | Index | Cause/Effect | Corrective Actions | O | S | D | RPN ¹
1 | Increase of Crystal Water | Decrease of middle temperature in the kiln body. | Clogged fuel nozzle. | Cleaning of nozzle, establishment of PM for the burner, and installation of a filter. | 3 | 5 | 5 | 75
2 | ″ | ″ | Failure to adjust fuel pressure and inlet burner air. | Adjustment of fuel and air regulator in a defined period. | 3 | 4 | 4 | 48
3 | ″ | ″ | Pre-kiln breakage. | Change or patching of the pre-kiln and thickness monitoring. | 2 | 5 | 1 | 10
4 | ″ | ″ | Decrease of kiln temperature according to fuel pressure. | Installation of a shutdown sensor for fuel pressure. | 2 | 3 | 1 | 6
5 | ″ | ″ | Increase of kiln RPM. | Adjustment of RPM via frequency. | 1 | 5 | 4 | 20
6 | ″ | ″ | Sharp decrease in temperature. | – | 1 | 2 | 1 | 2
7 | ″ | ″ | Erosion in kiln blades. | Change of runner blade and periodic monitoring. | 1 | 5 | 5 | 25
8 | ″ | ″ | Dirt or clogging of the burner’s air nozzle. | Cleaning the air nozzles and installing a multilayer filter. | 4 | 2 | 3 | 24
9 | ″ | ″ | Heat transfer between kiln and environment because of the lack of refractory. | Establishment of PM and thickness monitoring of the refractory. | 5 | 5 | 5 | 125
10 | ″ | ″ | Increase of negative pressure of the kiln (filter). | Installation of a ΔP meter. | 5 | 5 | 1 | 25
11 | Decrease of Crystal Water | Increase of middle temperature in the kiln body. | Fuel pressure rise. | – | 1 | 2 | 1 | 2
12 | ″ | ″ | Decrease of negative pressure of the exhaust fan. | – | 5 | 5 | 3 | 75
13 | ″ | ″ | Reduction of kiln RPM. | – | 1 | 5 | 4 | 20
14 | ″ | ″ | Sharp increase in temperature. | – | 1 | 2 | 1 | 2
15 | Variation in Crystal Water | – | Change of trend in raw material because of the mine. | – | 4 | 5 | 4 | 80
16 | Increased Crystal Water | – | High level of raw material moisture. | – | 3 | 5 | 1 | 15
17 | Variation in Crystal Water | – | Changing of raw material spec. | – | 3 | 5 | 1 | 15
¹ RPN = O × S × D. ″ indicates the same value as the row above.
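The last column of Table 9 follows the product rule stated in the footnote, RPN = O × S × D, so corrective actions can be prioritized by sorting the causes on this score. A small illustrative sketch (the two entries reuse the O, S, D values of rows 1 and 9 of the table):

```python
from dataclasses import dataclass

@dataclass
class FailureCause:
    description: str
    occurrence: int   # O: how often the cause occurs
    severity: int     # S: impact on the critical parameter
    detection: int    # D: how hard the cause is to detect

    @property
    def rpn(self) -> int:
        # Risk Priority Number, as defined in the Table 9 footnote.
        return self.occurrence * self.severity * self.detection

causes = [
    FailureCause("Clogged fuel nozzle", 3, 5, 5),                   # RPN = 75
    FailureCause("Heat loss due to lack of refractory", 5, 5, 5),   # RPN = 125
]

# Highest-risk causes first, mirroring how corrective actions are prioritized.
for c in sorted(causes, key=lambda c: c.rpn, reverse=True):
    print(f"{c.description}: RPN = {c.rpn}")
```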
Table 10. Output of discriminant analysis (DA) for the test dataset.
The SAS System (DISCRIM procedure): classification summary for the test dataset using the linear discriminant function. Each cell gives the number of observations classified into the corresponding class, with the row percentage in parentheses.
Target | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Total
1 | 17 (35.42) | 15 (31.25) | 8 (16.67) | 0 | 0 | 8 (16.67) | 0 | 0 | 48 (100.00)
2 | 9 (15.25) | 25 (42.37) | 14 (23.73) | 0 | 0 | 7 (11.86) | 4 (6.78) | 0 | 59 (100.00)
3 | 10 (19.23) | 12 (23.08) | 23 (44.23) | 1 (1.92) | 0 | 6 (11.54) | 0 | 0 | 52 (100.00)
4 | 2 (3.77) | 3 (5.66) | 0 | 48 (90.57) | 0 | 0 | 0 | 0 | 53 (100.00)
5 | 0 | 0 | 0 | 0 | 50 (100.00) | 0 | 0 | 0 | 50 (100.00)
6 | 7 (15.22) | 11 (23.91) | 9 (19.57) | 0 | 0 | 19 (41.30) | 0 | 0 | 46 (100.00)
7 | 0 | 0 | 0 | 0 | 0 | 0 | 39 (100.00) | 0 | 39 (100.00)
8 | 0 | 3 (5.66) | 0 | 0 | 0 | 0 | 4 (7.55) | 46 (86.79) | 53 (100.00)
Total | 45 (11.25) | 69 (17.25) | 54 (13.50) | 49 (12.25) | 50 (12.50) | 40 (10.00) | 47 (11.75) | 46 (11.50) | 400 (100.00)
Priors | 0.1200 | 0.1475 | 0.1300 | 0.1325 | 0.1250 | 0.1150 | 0.0975 | 0.1325 | –
Table 11. DA error in the test dataset classification.
Error estimation for the target classes.
Class | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Total
Rate | 0.6458 | 0.5763 | 0.5577 | 0.0943 | 0.0000 | 0.5870 | 0.0000 | 0.1321 | 0.3325
Priors | 0.1200 | 0.1475 | 0.1300 | 0.1325 | 0.1250 | 0.1150 | 0.0975 | 0.1325 | –
Table 12. Output of discriminant analysis (DA) for the training dataset.
The SAS System (DISCRIM procedure): classification summary for the training dataset using the linear discriminant function. Each cell gives the number of observations classified into the corresponding class, with the row percentage in parentheses.
Target | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Total
1 | 218 (42.08) | 175 (33.78) | 124 (23.94) | 1 (0.19) | 0 | 0 | 0 | 0 | 518 (100.00)
2 | 162 (30.06) | 227 (42.12) | 116 (21.52) | 3 (0.56) | 0 | 21 (3.90) | 8 (1.48) | 2 (0.37) | 539 (100.00)
3 | 158 (31.54) | 158 (31.54) | 163 (32.53) | 1 (0.20) | 0 | 21 (4.19) | 0 | 0 | 501 (100.00)
4 | 5 (0.97) | 24 (4.63) | 6 (1.16) | 471 (90.93) | 0 | 0 | 11 (2.12) | 1 (0.19) | 518 (100.00)
5 | 4 (0.80) | 1 (0.20) | 0 | 0 | 489 (98.00) | 0 | 0 | 0 | 499 (100.00)
6 | 49 (11.40) | 25 (5.81) | 42 (9.77) | 0 | 0 | 314 (73.02) | 0 | 0 | 430 (100.00)
7 | 0 | 1 (0.24) | 0 | 13 (3.13) | 0 | 1 (0.24) | 399 (96.14) | 1 (0.24) | 415 (100.00)
8 | 0 | 0 | 0 | 5 (0.86) | 0 | 56 (9.66) | 42 (7.24) | 477 (82.24) | 580 (100.00)
Total | 596 (14.90) | 616 (15.40) | 451 (11.28) | 494 (12.35) | 489 (12.23) | 413 (10.33) | 460 (11.50) | 481 (12.03) | 4000 (100.00)
Priors | 0.1295 | 0.13475 | 0.12525 | 0.1295 | 0.12475 | 0.1075 | 0.10375 | 0.1450 | –
Table 13. DA error in the training dataset classification.
Error estimation for the target classes.
Class | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | Total
Rate | 0.5792 | 0.5788 | 0.6747 | 0.0907 | 0.0200 | 0.2698 | 0.0386 | 0.1776 | 0.3105
Priors | 0.1295 | 0.1348 | 0.1253 | 0.1295 | 0.1248 | 0.1075 | 0.1038 | 0.1450 | –
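The per-class rates in Tables 11 and 13 are one minus the diagonal row percentages of the corresponding classification summaries, and the “Total” entry is the prior-weighted average of those rates (0.3325 for the test set, 0.3105 for the training set). A short sketch of that bookkeeping, assuming a confusion matrix whose rows are true classes and columns are predicted classes:

```python
import numpy as np

def class_error_rates(confusion, priors):
    """Per-class error rate (1 - recall) and the prior-weighted total error."""
    confusion = np.asarray(confusion, dtype=float)
    row_totals = confusion.sum(axis=1)
    rates = 1.0 - np.diag(confusion) / row_totals   # error rate per true class
    total = float(np.dot(priors, rates))            # prior-weighted overall error
    return rates, total

# Test-set classification counts from Table 10 (rows = true class 1..8).
conf = [
    [17, 15,  8,  0,  0,  8,  0,  0],
    [ 9, 25, 14,  0,  0,  7,  4,  0],
    [10, 12, 23,  1,  0,  6,  0,  0],
    [ 2,  3,  0, 48,  0,  0,  0,  0],
    [ 0,  0,  0,  0, 50,  0,  0,  0],
    [ 7, 11,  9,  0,  0, 19,  0,  0],
    [ 0,  0,  0,  0,  0,  0, 39,  0],
    [ 0,  3,  0,  0,  0,  0,  4, 46],
]
priors = [0.12, 0.1475, 0.13, 0.1325, 0.125, 0.115, 0.0975, 0.1325]

rates, total = class_error_rates(conf, priors)
print(np.round(rates, 4), round(total, 4))  # matches Table 11 (total ≈ 0.3325)
```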
