Abstract

Schedule management is an essential part of construction project management. In practice, many uncertainties may lead to potential project delays and make the schedule risky. To quantify such risk, the Probabilistic Critical Path Method (PCPM) is used to compute the overdue probability. Surveys show that it can help project managers understand the schedule better. However, two critical factors limit the application of PCPM: computational efficiency and timeliness. To overcome these constraints, we combine subset simulation and statistical learning to build a computationally efficient and dynamic simulation system. Numerical experiments show that this method improves computational efficiency without loss of accuracy and outperforms other approaches built on the same assumptions. In addition, we propose a machine learning-based way to estimate task duration distributions in PCPM automatically. It collects real-time progress data through user interactions and learns the best PERT-Beta parameters from these historical data. This estimator gives our simulation system the ability to perform dynamic assessment without laborious manual work. These improvements reduce the limitations of PCPM and make its application in practical management affairs possible.

1. Introduction

Schedule management is a vital part of construction project management. An overdue project brings potential economic losses, while, on the other hand, groundless schedule compression inevitably introduces risks. Therefore, applying scientific management methods is particularly important to evaluate the project schedule and control the risk of delay.

The Critical Path Method (CPM) is a widely accepted and widely used tool for planning, controlling, and scheduling construction projects. It helps the project manager identify the critical path in the schedule and determine the project duration when all task durations are fixed [1]. In practice, however, projects are subject to uncertainty and volatility, which undermines classical deterministic CPM. To deal with this situation, many extensions have been proposed to carry CPM into a nondeterministic environment. Unlike classical CPM, extended CPM computes the critical path on the assumption that task durations are nondeterministic and estimates the probability of project delay. This feature provides the project manager with a quantitative way to evaluate the risk in project progress and makes extended CPM more appealing. Generally, the major existing methodologies used in extended CPM can be classified into three types.

The “most critical path” method was first proposed by Soroush in 1994 [2]. This method focuses on the “most critical path” problem and provides a heuristic to identify a near-optimal path for such a problem. Since this method uses only the “most critical path,” rather than the entire schedule network, to compute the delay probability, it runs very fast and is more accurate than the classical CPM approach. It provides a good perspective for understanding the critical path [3] as well as an extensible solution framework. Some researchers combined it with artificial intelligence to find the optimal critical path [4, 5], and some extended it to the fuzzy domain [6].

Unlike classical CPM, Fuzzy CPM (FCPM) represents an activity duration with a fuzzy number instead of a real number and uses fuzzy operations to calculate the fuzzy critical path and the fuzzy project duration. By combining different kinds of fuzzy numbers and fuzzy operations, researchers have put forward many constructive and useful methods. These contributions resolved some critical defects of FCPM, like negative time and invalid fuzzy subtraction, and provided an efficient way to resolve the fuzzy critical path problem for a project network. Many researchers are still working on its variants and verifying its effectiveness and practicability in practice [7–10].

Similar to Fuzzy CPM, Probabilistic CPM (PCPM) represents an activity duration by a random variable, which makes the completion time a random variable as well. The ultimate goal of PCPM is to find the probability that the completion time exceeds the given deadline and to identify the critical path in a probabilistic context. Research has shown that, because of its simple probabilistic interpretation, this methodology is easily understood by project managers and helps them manage a project better and minimize delay risks [11–15].

It is worth noting that, though simply formulated, PCPM is in fact hard to solve and is still under study. Four key factors might explain this difficulty. First, PCPM requires a high-dimensional integral of the probability density function over the failure domain defined by an implicit limit state function. Second, this limit state function is composed of several mutually referenced and nested computation blocks, making it behave like a black box system. Third, since there are many nonlinear logical operators, like max and min, in the nested blocks, the limit state function is also highly nonlinear. This makes some popular surrogate models, like the First-Order Reliability Method and the Response Surface method, fail to approximate the target function. Fourth, Direct Monte Carlo Simulation (DMCS) is time-consuming to reach an unbiased numerical estimate, which makes it seem unpromising in practice.

Considering all the difficulties described above, we suggest adopting a well-designed black box system reliability algorithm, which can solve a reliability assessment problem without caring about the details of the target system. Subset simulation is such an algorithm. It is a numerical method widely used in structural engineering [16–18] and soil engineering [19, 20]. Existing studies have shown that subset simulation is computationally efficient in estimating small failure probabilities in high-dimensional black box reliability problems, but the method is rarely used in project management. Given the similarities in implicitness and nonlinearity between our system and those studied in existing research, we expect subset simulation to work on the PCPM problem as well.

Apart from the computational efficiency problem, there is another important issue to address. Most of the studies mentioned above did not consider the timeliness of assessment. As a project goes on, the task duration estimates in the schedule will change. A feasible solution is to reestimate them manually, but this brings considerable extra workload that could be avoided. To realize an automated real-time reliability assessment, we introduce statistical learning into PCPM. This method can handle dynamic evaluation over the construction life cycle without much extra work. Based on historical progress data collected from management software, the duration distribution of a task can be estimated automatically using machine learning. This feature makes PCPM more useful in practical management affairs.

In this study, we provide a new method combining subset simulation and statistical learning to resolve the project delay problem efficiently and dynamically. The three primary research objectives are as follows:
(1) Introduce subset simulation-based PCPM and demonstrate its working process as well as implementation details
(2) Illustrate how statistical learning can be plugged into PCPM, propose the cost function of this learning system, and derive the formula of the task duration distribution given the collected data
(3) Verify the effectiveness of the proposed method and compare it with other methods

The organization of the rest of this paper is summarized as follows. Section 2 provides a short introduction to the Critical Path Method (CPM) as well as a well-designed algorithm for solving the Critical Path Problem and then formulates the Schedule Reliability Problem. This problem is solved in Section 3 using subset simulation, a computationally efficient algorithm for estimating small failure probabilities; we demonstrate how to use this method to solve the Schedule Reliability Problem. After that, Section 4 explains how statistical learning can be applied in our PCPM system. Combining these parts, Section 5 demonstrates our method as a whole and presents its design and implementation. Section 6 then uses a hypothetical test case and a real project to compare the proposed method with other models. Finally, Section 7 gives our concluding remarks.

2. Preliminaries

In this section, we first formulate the Schedule Reliability Problem we want to resolve. Then, we introduce a well-designed algorithm for solving the critical path problem. This algorithm will serve as an implicit subexpression of the Schedule Reliability Problem. Basics of the Critical Path Method and the schedule network are omitted for conciseness.

2.1. Schedule Reliability Problem

A schedule does not always work as project managers expect. Sometimes it fails; in other words, the real completion time of the project exceeds the time limit given by the contract, an outcome we need to avoid. To quantify delay risk, we assume that the completion time of a project is a random variable. Given a deadline T_lim, the delay probability can be formulated as

P_F = P(T(\mathbf{X}) > T_{lim}) = \int I(T(\mathbf{x}) - T_{lim}) f(\mathbf{x})\,\mathrm{d}\mathbf{x},   (1)

where \mathbf{X} is a vector collecting the duration variables of every task; T(\cdot) is an implicit function transforming the task durations into the completion time; I(\cdot) is an indicator function that takes 1 if its argument is positive and 0 otherwise; and f(\mathbf{x}) is the joint probability density function of the duration variables. Thus, our goal is to compute the delay probability P_F and ensure that it is less than a given threshold, for example, 5%.

2.2. Modified Dijkstra Algorithm

To solve equation (1), we need to solve the Critical Path Problem first. Since the critical path is the longest path in a schedule network, we can compute the completion time and obtain the critical path using the Modified Dijkstra Algorithm (MDA). This algorithm was proposed by Ravi Shankar and Sireesha to find the critical path in a schedule network [21].

MDA uses the Activity-On-Node (AoN) representation and models a schedule as a graph G, denoted as a tuple G = (V, E). V = {v_1, …, v_n} are the vertices of the graph, representing the tasks in the schedule. For each task v_i, a vertex attribute d_i is assigned, defining the task's duration. E are the directed edges of the graph, representing the work dependencies.

MDA improves the classic Dijkstra Algorithm by adding a topological sort [22] before computing the critical path. This preprocessing improves efficiency significantly, especially when repeated sampling is needed. After topological sorting, the forward pass is applied to obtain the earliest start (ES) and earliest finish (EF) of each task. The completion time is the latest EF among all tasks. Once the completion time is computed, the backward pass is applied to obtain the latest start (LS) and latest finish (LF) of each task. The Total Float (TF) is the difference between the Latest Start (LS) and the Earliest Start (ES) of an activity. A task is a critical task when its total float is zero.

The pseudocode of MDA is given as follows. Note that the nodes in MDA are presorted in topological order, so that the indexing constraint i < j holds for every edge (v_i, v_j) ∈ E (Algorithms 1 and 2).

Algorithm 1: Forward pass of MDA (computes ES, EF, and the completion time T).
(1) T ← 0
(2) for each v_i ∈ V do ES_i ← 0
(3) for i ← 1 to n
(4)  EF_i ← ES_i + d_i
(5)  if EF_i > T then T ← EF_i
(6)  for each v_j ∈ Succ(v_i)
(7)   if ES_j < EF_i then ES_j ← EF_i
(8) return T

Algorithm 2: Backward pass of MDA (computes LF, LS, and the total float TF).
(1) for each v_i ∈ V do LF_i ← T
(2) for i ← n down to 1
(3)  LS_i ← LF_i − d_i
(4)  TF_i ← LS_i − ES_i
(5)  for each v_j ∈ Pred(v_i)
(6)   if LF_j > LS_i then LF_j ← LS_i
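For concreteness, the following Python sketch implements the forward and backward passes of Algorithms 1 and 2 on a topologically presorted adjacency list. The function and variable names are ours, not the authors'; this is a minimal illustration, not the paper's implementation.

```python
def mda(durations, successors):
    """Modified Dijkstra Algorithm on a topologically presorted AoN network.

    durations  -- list of task durations d_i, indexed in topological order
    successors -- successors[i] is the list of task indices that depend on task i
    Returns (completion time T, total float of each task).
    """
    n = len(durations)
    es = [0.0] * n                       # earliest start of each task
    # Forward pass (Algorithm 1): push earliest finish times along the edges.
    for i in range(n):
        ef_i = es[i] + durations[i]      # earliest finish of task i
        for j in successors[i]:
            es[j] = max(es[j], ef_i)
    T = max(es[i] + durations[i] for i in range(n))   # project completion time

    # Backward pass (Algorithm 2): pull latest start times against the edges.
    ls = [0.0] * n
    for i in reversed(range(n)):
        lf_i = min((ls[j] for j in successors[i]), default=T)  # latest finish of task i
        ls[i] = lf_i - durations[i]                            # latest start of task i
    tf = [ls[i] - es[i] for i in range(n)]                     # total float
    return T, tf

# Tiny example: task 0 precedes tasks 1 and 2, which both precede task 3.
T, tf = mda([2.0, 3.0, 1.0, 2.0], [[1, 2], [3], [3], []])
print(T, tf)   # 7.0 [0.0, 0.0, 2.0, 0.0]; only task 2 has positive float
```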

3. Subset Simulation

3.1. Basic Idea

Since T(\cdot) is a nested and implicit function, it is hard to solve equation (1) analytically for most schedules. Instead, Monte Carlo Simulation is used to obtain a numerical solution, formulated as

P_F \approx \hat{P}_F = \frac{1}{N} \sum_{k=1}^{N} I(T(\mathbf{x}_k) - T_{lim}), \quad \mathbf{x}_k \sim f(\mathbf{x}).   (2)
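As a baseline, equation (2) can be estimated directly by sampling. The sketch below reuses the hypothetical `mda` helper from the previous listing; the sampler interface is an assumption of ours, with the PERT-Beta model described later in Section 4.1.

```python
import numpy as np

def dmcs_failure_probability(sample_durations, successors, t_lim,
                             n_samples=50_000, seed=0):
    """Direct Monte Carlo estimate of the delay probability in equation (2).

    sample_durations -- callable returning one random duration vector (e.g., PERT-Beta draws)
    successors       -- adjacency list of the presorted schedule network
    t_lim            -- contractual time limit T_lim
    """
    rng = np.random.default_rng(seed)
    failures = 0
    for _ in range(n_samples):
        x = sample_durations(rng)          # one sample of all task durations
        T, _ = mda(x, successors)          # completion time via Algorithms 1 and 2
        failures += T > t_lim              # indicator I(T - T_lim)
    return failures / n_samples
```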

However, Direct Monte Carlo Simulation (DMCS) is known to be computationally inefficient: to obtain a stable estimate, a large number of samples is required, making the computation time-consuming. To solve equation (2) efficiently, subset simulation is applied in our study.

Subset simulation was first proposed by S. K. Au to estimate small failure probabilities of structural systems in high dimensions. Because of its efficiency, this method has been widely used in structural engineering and soil engineering.

The basic idea of subset simulation is to express a small probability as a product of a series of larger conditional probabilities by introducing intermediate failure events. Since the intermediate failure events are much easier to solve, this decomposition reduces unnecessary sampling and accelerates the computation. The process can be conceptually expressed as

P_F = P(T > T_{lim}) = P(T > b_1) \prod_{i=2}^{m} P(T > b_i \mid T > b_{i-1}),   (3)

where b_i is the intermediate time limit generated in the i-th iteration and b_m = T_{lim} in the final iteration. According to this definition, two critical questions should be answered:
(1) How to choose the intermediate failure event F_i = {T > b_i}, that is, the value of b_i?
(2) How to generate samples from the conditional event F_{i-1}?

In the following parts, these questions are answered by Au's standard subset simulation algorithm, and we illustrate how it can solve the Schedule Reliability Problem.

3.2. Standard Subset Simulation Algorithm

For simplicity, our study uses S. K. Au's standard algorithm. In this algorithm, the samples in each iteration are generated by Markov Chain Monte Carlo (MCMC) from seed samples, and the intermediate time limits and seed samples in each iteration are chosen by a fixed threshold percentile [16].

Only two parameters are used in the standard algorithm: the batch size N, which is the number of samples in each iteration, and the seed ratio p_0, which is the ratio between seeds and samples in each iteration.

3.2.1. Intermediate Event Selection

Suppose a batch of N samples is known; we can compute their corresponding completion times using the Modified Dijkstra Algorithm. Then, we sort these times in decreasing order. By selecting the p_0 N samples with the longest completion times as seeds, we ensure that the intermediate events satisfy P(F_i | F_{i-1}) ≈ p_0:

b_i = T^{(p_0 N)},   (4)

where F_i = {T > b_i} and T^{(k)} is the k-th largest completion time among the N samples.

3.2.2. Conditional Batch Sampling

The simplest sampling policy is Metropolis–Hastings sampling, based on a symmetric proposal q(x' | x) with q(x' | x) = q(x | x'). In our study, such a symmetric transition is used. We accept a move on the Markov chain with an acceptance probability of min{1, π(x')/π(x)}.

Here, π(x) is the unnormalized conditional distribution

π(x) ∝ f(x) I_{F_{i-1}}(x),   (5)

where f(x) is the PERT-Beta distribution described in Section 4.1 and F_{i-1} is the intermediate failure event {T(x) > b_{i-1}}.

3.2.3. Procedures

The pseudocode of the standard algorithm is shown below (Algorithm 3).

(1) generate an initial batch of samples: x_k ∼ f(x), k = 1, …, N
(2) repeat
(3)  compute the completion time T(x_k) of every sample in the i-th batch
(4)  compute the intermediate time limit b_i using equation (4)
(5)  if b_i < T_lim
(6)   select seeds and the intermediate failure event F_i using the method in Section 3.2.1
(7)   generate the next batch of samples using the method in Section 3.2.2
(8)  else
(9)   break
(10) return P_F ≈ p_0^{i−1} × (number of samples in the last batch with T(x_k) > T_lim) / N

This process is illustrated in Figure 1.
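To make the loop concrete, here is a minimal Python sketch of Algorithm 3 under our own assumptions: it reuses the hypothetical `mda` helper above, uses a uniform random-walk proposal (the choice of proposal is ours, since the exact transition is not specified here), and keeps the notation N and p_0 from Section 3.2.

```python
import numpy as np

def subset_simulation(sample_prior, logpdf, successors, t_lim,
                      batch_size=1000, seed_ratio=0.1, step=1.0, rng=None):
    """Standard subset simulation estimate of P(T > t_lim), following Algorithm 3.

    sample_prior(rng, n) -- draws n duration vectors (shape (n, d)) from the prior f(x)
    logpdf(x)            -- log density of f at a single duration vector x
    """
    rng = rng or np.random.default_rng(0)
    x = sample_prior(rng, batch_size)                    # initial batch from f(x)
    T = np.array([mda(xi, successors)[0] for xi in x])   # completion times
    n_seeds = int(seed_ratio * batch_size)
    p_f = 1.0

    while True:
        order = np.argsort(T)[::-1]                      # sort by decreasing completion time
        b_i = T[order[n_seeds - 1]]                      # intermediate time limit, equation (4)
        if b_i >= t_lim:                                 # final level reached
            return p_f * np.mean(T > t_lim)
        p_f *= seed_ratio                                # P(F_i | F_{i-1}) ~= p_0
        seeds, seed_T = x[order[:n_seeds]], T[order[:n_seeds]]

        # Grow each seed into a short Markov chain that stays inside {T > b_i}.
        new_x, new_T = [], []
        per_seed = batch_size // n_seeds
        for s, sT in zip(seeds, seed_T):
            cur, cur_T = s.copy(), sT
            for _ in range(per_seed):
                cand = cur + rng.uniform(-step, step, size=cur.shape)  # symmetric proposal
                if np.log(rng.uniform()) < logpdf(cand) - logpdf(cur):
                    cand_T = mda(cand, successors)[0]
                    if cand_T > b_i:                     # reject moves that leave the failure event
                        cur, cur_T = cand, cand_T
                new_x.append(cur.copy())
                new_T.append(cur_T)
        x, T = np.array(new_x), np.array(new_T)
```

The completion time of a candidate is only evaluated after the density check passes, which is where the savings over DMCS come from.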

4. Dynamic Distribution Estimation

In this section, a duration distribution estimation method based on historical data is proposed. The method uses the PERT-Beta distribution as the probabilistic model and makes three assumptions to keep the problem tractable. Finally, it uses gradient descent, a widely used optimization tool in machine learning, to estimate the duration distribution from observations.

4.1. PERT-Beta Distribution

To make PCPM work, the duration of each task should be modelled properly. An empirical distribution is usually used when observational data are lacking. Surveys show that the beta distribution is a suitable choice [23–25]. This distribution requires three empirical parameters: the most likely duration m, the optimistic duration a, and the pessimistic duration b of the activity. The process of defining these subjective values is called the PERT three-point estimation method.

The density function of the PERT-Beta distribution is given by equation (6) and illustrated in Figure 2:

f(x) = \frac{(x - a)^{\alpha - 1}(b - x)^{\beta - 1}}{B(\alpha, \beta)(b - a)^{\alpha + \beta - 1}}, \quad a \le x \le b,   (6)

where the parameters of the beta function are determined by

\alpha = 1 + 4\frac{m - a}{b - a}, \quad \beta = 1 + 4\frac{b - m}{b - a}.   (7)
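For reference, the sketch below shows one way to construct and sample such a distribution with SciPy. The helper name `pert_beta` and the numeric values are ours, used purely for illustration.

```python
from scipy import stats

def pert_beta(a, m, b):
    """PERT-Beta distribution on [a, b] with mode m, per equations (6) and (7)."""
    alpha = 1 + 4 * (m - a) / (b - a)
    beta_ = 1 + 4 * (b - m) / (b - a)
    # A beta distribution shifted and scaled to [a, b] has exactly the density in (6).
    return stats.beta(alpha, beta_, loc=a, scale=b - a)

dist = pert_beta(a=4.0, m=6.0, b=10.0)        # optimistic, most likely, pessimistic durations
print(dist.mean())                             # PERT mean (a + 4m + b)/6 ≈ 6.33
samples = dist.rvs(size=5, random_state=0)     # random task durations for the simulation
```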

However, this estimation is not static. As the project goes on, the duration distributions should be corrected and reestimated according to the status quo. The project manager could do this work manually, but it would take considerable effort; it would be time-saving if the distributions could be estimated automatically.

4.2. Assumptions

Statistical learning can be applied to estimate the distribution automatically. However, to build such a learning system, we need some basic assumptions in order to construct its cost function. In our study, we make three assumptions about task progress and time distribution.
(1) Constant Working Rate. In project management, the curve depicting the relationship between time and remaining work, the burndown chart, usually has an S shape. To simplify the computation, we assume that this curve is a straight line,

q(t) = 1 - r t,   (8)

where r is the working rate. This assumption applies to the optimistic, average, and pessimistic working rates r_a, r_m, and r_b. Given any remaining progress q, the PERT parameters can then be obtained by linear interpolation:

a = \frac{q}{r_a}, \quad m = \frac{q}{r_m}, \quad b = \frac{q}{r_b}.   (9)

(2) Fixed Risk Preference. The parameters α and β control the shape of the standard beta distribution, deciding its skewness or, from another perspective, how likely a task is to be finished in a short time. If a task is more likely to be completed in a shorter time, β is greater and α is smaller. To some degree, they reflect the implicit risk preference of a project manager. Therefore, in our study, once α and β are chosen, they do not change anymore.
(3) Two-Step Estimation. Uncertainty grows the further we predict into the future, while, as progress goes on, this uncertainty diminishes. To depict this fact, we take a two-step method to estimate the observational distribution. We first treat all observations as coming from the "future" and use them to estimate the working rates under different conditions. Then, these working rates are used to construct the final distribution from the current state.

These assumptions are illustrated in Figure 3.

4.3. Statistical Learning

Based on the preceding assumptions, we can derive the cost function of our learning system. Suppose the historical records yield observation pairs (q_j, t_j), j = 1, …, n, where q_j is the remaining progress at the j-th record and t_j is the time it actually took to finish that remaining work. According to equation (9), the likelihood of the observations can be expressed as

L(r_a, r_b) = \prod_{j=1}^{n} \frac{(t_j - a_j)^{\alpha - 1}(b_j - t_j)^{\beta - 1}}{B(\alpha, \beta)(b_j - a_j)^{\alpha + \beta - 1}}, \quad a_j = \frac{q_j}{r_a}, \; b_j = \frac{q_j}{r_b}.   (10)

Since α and β are assumed to be fixed and the working rates are constant, taking the negative log-likelihood of the observations gives the cost function

J(r_a, r_b) = -\ln L = \sum_{j=1}^{n} \left[ (\alpha + \beta - 1)\ln(b_j - a_j) - (\alpha - 1)\ln(t_j - a_j) - (\beta - 1)\ln(b_j - t_j) \right] + n \ln B(\alpha, \beta).   (11)

Thus, our purpose is to minimize the loss function and estimate the three working rates r_a, r_m, and r_b. Taking the partial derivatives of the loss function, we have

\frac{\partial J}{\partial r_a} = \sum_{j=1}^{n} \frac{q_j}{r_a^2}\left[ \frac{\alpha + \beta - 1}{b_j - a_j} - \frac{\alpha - 1}{t_j - a_j} \right], \quad \frac{\partial J}{\partial r_b} = \sum_{j=1}^{n} \frac{q_j}{r_b^2}\left[ \frac{\beta - 1}{b_j - t_j} - \frac{\alpha + \beta - 1}{b_j - a_j} \right].   (12)

Using gradient descent, we can reach a numerical optimum with the update rule

r_a \leftarrow r_a - \eta \frac{\partial J}{\partial r_a}, \quad r_b \leftarrow r_b - \eta \frac{\partial J}{\partial r_b},   (13)

where η is the learning rate.

After determining r_a and r_b, the average rate r_m can be computed from the definition of α; see equation (7).

With all three working rates r_a, r_m, and r_b computed, the final estimation can be determined. Suppose the current remaining progress is q_0; then the updated range is given by

a = \frac{q_0}{r_a}, \quad m = \frac{q_0}{r_m}, \quad b = \frac{q_0}{r_b}.   (14)
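A minimal sketch of this estimator follows, built on the cost and gradients above. The function name, the initialization heuristic, and the fixed learning rate are our own choices; a production implementation would need a line search or projection step to keep the support constraints a_j < t_j < b_j satisfied.

```python
import numpy as np

def estimate_working_rates(q, t, alpha, beta_, lr=1e-4, n_iter=20_000):
    """Gradient descent on the negative log-likelihood, equations (11)-(13).

    q, t         -- arrays of remaining progress q_j and observed remaining durations t_j
    alpha, beta_ -- fixed PERT-Beta shape parameters (assumption 2)
    Returns the optimistic, average, and pessimistic working rates (r_a, r_m, r_b).
    """
    q, t = np.asarray(q, float), np.asarray(t, float)
    ratio = t / q                              # observed time per unit of progress
    r_a = 1.0 / (0.9 * ratio.min())            # start slightly faster than the fastest record
    r_b = 1.0 / (1.1 * ratio.max())            # start slightly slower than the slowest record
    for _ in range(n_iter):                    # small steps assumed to keep a_j < t_j < b_j
        a, b = q / r_a, q / r_b                # equation (9)
        common = (alpha + beta_ - 1) / (b - a)
        grad_a = np.sum(q / r_a**2 * (common - (alpha - 1) / (t - a)))   # equation (12)
        grad_b = np.sum(q / r_b**2 * ((beta_ - 1) / (b - t) - common))
        r_a -= lr * grad_a                     # update rule, equation (13)
        r_b -= lr * grad_b
    # The average rate follows from the definition of alpha in equation (7).
    r_m = 1.0 / (1 / r_a + (alpha - 1) / 4 * (1 / r_b - 1 / r_a))
    return r_a, r_m, r_b
```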

5. System Implementation

5.1. Design of Simulation System

A detailed design of our system and the relationships between the major classes are described in the class diagram in Figure 4. Some important members and methods of the major classes are presented, while trivial members and methods are omitted for lack of space.
(1) The class App is the entry class that handles all interactions from users. The simulation is configured through the class Setting, which defines the time limit T_lim, the batch size N, and the candidate ratio p_0, and thus affects the result and performance of the subset simulation. The class RecordManager is used to add progress records to the database and estimate the duration distribution of a task. The class SubsetSimulation is used to evaluate the schedule and obtain useful indicators.
(2) The class RecordManager is the manager class that takes in the record inputs and stores them. It also provides the function to estimate the distribution of a task from historical data, as described in Section 4. When a simulation begins, the estimated beta distribution parameters are provided to the samplers to construct a batch of samples.
(3) The class SubsetSimulation is the class where the simulation happens. It computes the failure probability and the average total float of the tasks, as described thoroughly in Section 3. SubsetSimulation uses the MDA class to compute the critical path and completion time of the schedule; the theory behind MDA is given in the preliminaries.
(4) The class Graph is a helper class that represents a schedule network using an adjacency list. It contains the topological structure of the schedule network and can be used to find the successors or predecessors of any given task. The structure is stored separately in the database using the classes Task and Dependency; these two classes correspond to two tables, respectively, via Object Relational Mapping (ORM).

These classes generate three kinds of data:
(1) Persistent constants: data that cannot be changed and are stored persistently, like the graph structure and history records
(2) User-defined variables: data generated by user interactions, like the configuration parameters of the simulation
(3) Intermediate variables: data generated by the program and used only once, like the beta distribution parameters and samples

The data flow diagram explaining how data are passed between classes is shown in Figure 5.
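A minimal Python skeleton of these classes is sketched below, reusing the `mda`, `subset_simulation`, and `estimate_working_rates` sketches from earlier sections. Attribute and method names are illustrative assumptions based on Figure 4, not the authors' exact code.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Setting:
    t_lim: float              # contractual time limit T_lim
    batch_size: int = 1000    # N
    seed_ratio: float = 0.1   # candidate ratio p_0

@dataclass
class Graph:
    durations: List[float]        # d_i in topological order
    successors: List[List[int]]   # adjacency list of work dependencies

class RecordManager:
    def __init__(self):
        self.records: Dict[int, list] = {}   # task id -> [(q_j, t_j), ...]

    def add_record(self, task_id, remaining_progress, observed_time):
        self.records.setdefault(task_id, []).append((remaining_progress, observed_time))

    def estimate_distribution(self, task_id, alpha, beta_):
        q, t = zip(*self.records[task_id])
        return estimate_working_rates(q, t, alpha, beta_)   # (r_a, r_m, r_b), Section 4.3

class SubsetSimulation:
    def __init__(self, setting: Setting, graph: Graph):
        self.setting, self.graph = setting, graph

    def run(self, sample_prior, logpdf):
        return subset_simulation(sample_prior, logpdf, self.graph.successors,
                                 self.setting.t_lim, self.setting.batch_size,
                                 self.setting.seed_ratio)
```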

5.2. Process of Simulation System

The following process describes all the steps in our simulation system, including the creation of the schedule network, distribution estimation, and estimation of the failure probability.
(1) Schedule Network Construction. Construct the schedule network using an adjacency list and presort the network using Kahn's algorithm to obtain the topological order [22] (a sketch is given after this list). In this step, the network can be validated; for example, circular work dependencies can be detected. Moreover, the precomputed topological order saves unnecessary computing time during subset simulation sampling.
(2) Task Duration Distribution Estimation. Use empirical parameters and observational data to estimate the duration distribution of each task. This step is described thoroughly in the previous section.
(3) Failure Probability Estimation. Estimate the failure probability of the given schedule network using subset simulation and check whether the answer is acceptable. Usually, a criterion is chosen (e.g., 1% or 5%). If the estimate is higher than this criterion, we have to consider adjusting the plan.
(4) Schedule Adjustment. If the failure probability is unacceptable, we have to find the critical activities in the schedule, as in classical CPM. By averaging the total float over the samples, we can find the critical activities with small floats. Allocating more resources to these activities reduces the completion time and, in turn, the failure probability.
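As referenced in step (1), here is a short sketch of the presorting and validation step using Kahn's algorithm; the function name is ours.

```python
from collections import deque

def topological_order(n_tasks, successors):
    """Kahn's algorithm: presort the schedule network and detect circular dependencies."""
    in_degree = [0] * n_tasks
    for succ in successors:
        for j in succ:
            in_degree[j] += 1
    queue = deque(i for i in range(n_tasks) if in_degree[i] == 0)  # tasks with no predecessors
    order = []
    while queue:
        i = queue.popleft()
        order.append(i)
        for j in successors[i]:
            in_degree[j] -= 1
            if in_degree[j] == 0:
                queue.append(j)
    if len(order) != n_tasks:
        raise ValueError("circular work dependency detected in the schedule network")
    return order
```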

5.3. Features and Limitations

There are three practical features in our proposed simulation system:
(1) Schedule Evaluation. Using the algorithm in Section 3, we can quantify the delay risk with a failure probability. If this probability is too high, the project manager needs to change the schedule or extend the deadline. To modify the plan, we have to find the critical activities and shorten their durations. To prolong the deadline, we can use the empirical distribution of the completion time to select a proper deadline.
(2) Critical Activity Detection. When there is a high probability that a schedule will be late, critical activities can be located using a threshold on the total float. If the total float of a task is lower than this threshold, it is a critical activity. This threshold describes the flexibility we can manage; a lower threshold sifts out fewer activities and usually requires a better management capability.
(3) Task Duration Prediction. As records accumulate, the estimated task duration also tells us the most likely finish date. We can compare this date with the planned finish date. If there is an unacceptable lag, the activity is marked and the system alerts the project manager. This process is automatic and data-driven; it improves the project manager's efficiency and provides more information.

All three features are implemented through two web pages: one for collecting progress data and one for visualizing and analyzing project progress; see Figure 6.

Though our method offers these practical features, there are still some limitations we are striving to resolve. First, a threshold may not be a proper way of finding critical activities: an improper threshold may select all or none of the tasks, which makes it unhelpful, so threshold selection is a potential problem. Second, the total float threshold only captures the average behaviour of a task. It helps reduce schedule risk, but reducing the variance of task durations would also work, and our method currently has no indicator describing the relationship between the variance of task durations and the schedule risk.

6. Illustrative Examples and Computational Results

This section verifies the performance of our proposed method and compares it with DMCS, Soroush's LUBE [2], and Chen's FCPM [10] in terms of precision, efficiency, and stability.

6.1. Soroush’s Benchmark Example

In this part, we use an illustrative example from Soroush's work. This hypothetical test case consists of 21 activities, shown in Figure 7. Our experiment also adopts the assumptions in Soroush's work. The activity times are assumed to be beta distributed, and the starting and terminal events are marked in the figure. The optimistic time a, most likely time m, and pessimistic time b of each activity are given in the parentheses beside that activity. To compare our work with Fuzzy CPM (FCPM), each activity's fuzzy duration is formulated as a triangular fuzzy number using the same parameters as in PCPM.

For each method, we repeat the simulation 20 times to obtain the mean estimate and the average running time. To compare the stability of the numerical results, the coefficients of variation are also computed. As for the settings of the sampling-based methods, 50,000 samples were used in DMCS, and the subset simulation was configured by its batch size N and selection ratio p_0. Our experiments were conducted on a PC with an 8-core Intel Core i7-0700K 3.6 GHz CPU. All the programs were coded in Python 3.7.3 with NumPy 1.20.1.

Computation results are shown in Table 1. Since DMCS and our method are based on sampling, they are generally slower than one-pass methods like LUBE and FCPM. This is a common shortcoming of sampling-based PCPM, but subset simulation accelerates the process while retaining nearly the same precision and stability, which makes the application of PCPM feasible. On the other hand, compared with the one-pass methods, subset simulation-based PCPM reaches the lowest error relative to DMCS. This result shows that subset simulation provides a more reliable solution. In all, subset simulation combines accuracy and efficiency, which makes it a promising way to solve the probabilistic CPM problem.

By drawing the relationship between the completion time limit and the failure probability in Figure 8, we also find that the curve given by subset simulation is the closest to the DMCS curve, while there is a systematic error in LUBE and FCPM. This observation can be explained by the fact that LUBE focuses only on the single most critical path, while other paths may also lead to failure, which raises the true failure probability. Likewise, fuzzy numbers and fuzzy operations cannot capture all the complexity of probabilistic modelling, so deviation occurs. In contrast, subset simulation depicts this relationship very well, which provides the project manager with a reliable basis for decisions.

Another important indicator, the total float, is also computed. A precise estimate of the float helps the project manager identify important tasks and allocate resources to them. Figure 9 shows that the total float estimates given by FCPM are lower than those given by PCPM, which may lead to higher resource demand and hurried plans, while subset simulation gives results similar to DMCS with acceptable numerical errors.

6.2. A Schedule Network in Real Project

In order to verify the performance of our algorithm in practice, a schedule network from a real project containing 173 activities and 195 work dependencies is used. This project is a multifunctional building with three parts: the main tower for offices, the auxiliary building for business, and the basement for parking. The construction work is broken down as follows: the basement is divided into 24 individual working segments per floor, and the auxiliary building is divided into 3 individual working segments per floor. Each segment is constructed floor by floor, which creates the work dependencies. The division of the project and the schedule network are illustrated in Figure 10.

The activity times of each working segment are given in Table 2.

The hardware and software environments of this experiment are consistent with those of the previous one. As shown in Table 3, compared with the other methods, our estimate is the closest to the DMCS result, and the LUBE estimate is still lower than ours, which is explained in the following discussion. Another point worth noting is that, as the schedule network grows, the computational efficiency advantage of our method over DMCS becomes more obvious, and its running time is acceptable in practice. Project managers can get more accurate estimates while waiting less time.

By drawing the relationship between the completion time limit and the failure probability in Figure 11, we also find that the curve given by our method is the closest to the DMCS curve. The LUBE curve still lies below the DMCS curve, which means it underestimates the overdue risk. This is caused by the inherent flaw of LUBE and would mislead project managers into excessively optimistic estimates, which may cause potential overdue risks. Our method solves this problem well: since it traverses the whole schedule network rather than one path, our PCPM gives a comprehensive assessment that depicts the risk of the network as a whole, rather than only the one "most critical" path. This mechanism makes our method work well over the full range of time limits.

6.3. Discussions

From the experiments above, we can see that although the one-pass algorithms give results quickly, systematic errors arise due to inherent deficiencies in their design. For example, LUBE fails when the failure probability is relatively high. This can be explained by a very simple example. Suppose a project contains n identical paths, and each path has an independent failure probability p. Using basic probability theory, the failure probability of this project is 1 − (1 − p)^n. However, according to LUBE, the failure probability is defined by the "most critical" path; since all the paths are identical, the LUBE estimate is p. Notice that

1 − (1 − p)^n ≥ p, for 0 ≤ p ≤ 1 and n ≥ 1.   (15)

So, the LUBE estimate is always lower than the failure probability of the whole schedule. This difference is especially obvious when the time limit is relatively tight, because the failure probability of each path then increases. The difference between 1 − (1 − p)^n and p is negligible only when p is very small; with a tight time limit, p cannot be small, so LUBE fails to approximate the true probability. Besides, if there is more than one critical path, according to equation (15), the difference between 1 − (1 − p)^n and p becomes obvious as n grows. This situation is common in projects where parallel construction is needed. Moreover, in most cases, paths are inherently correlated: if a node is overdue, all the paths containing this node are affected.
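A quick numeric check of equation (15), with illustrative numbers of our own choosing:

```python
# LUBE reports the failure probability of the single most critical path, p,
# while n independent identical paths actually fail with probability 1 - (1 - p)^n.
for n in (2, 5, 10):
    for p in (0.01, 0.1, 0.3):
        true_pf = 1 - (1 - p) ** n
        print(f"n={n:2d}, p={p:.2f}: whole-network P_F = {true_pf:.3f}, LUBE = {p:.2f}")
# For p = 0.3 and n = 5 the network fails with probability about 0.83,
# while LUBE still reports 0.30 -- a large underestimation when the time limit is tight.
```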

These observations tell us that, when dealing with a schedule network, it is better to treat the schedule as a whole rather than as a group of single paths. Although sampling-based PCPM takes time, it is still worth doing, since it provides a more accurate estimate. The experiments above show that subset simulation greatly accelerates the sampling process and makes the running time of PCPM acceptable.

7. Conclusions

This paper proposed a new PCPM based on data-driven subset simulation to solve the schedule network failure problem in a dynamic way. The method computes the failure probability efficiently and effectively without loss of accuracy and outperforms other approaches built on the same assumptions. Another important contribution of our study is to plug a data-based task duration distribution estimator into PCPM. This provides a more objective way to estimate task duration distributions, reducing the variance introduced by individual project managers' experience and improving knowledge sharing between teams. These key features provide a good foundation for applying PCPM in practical management affairs.

Data Availability

The data used to support the findings of this study are included within the article.

Conflicts of Interest

The authors declare that they have no conflicts of interest regarding the publication of this study.