Introduction

Rapid understanding of the magnitude of large earthquakes and their associated fault area is crucial for disaster response. Numerous studies have shown the advantages of high-rate (typically 1 Hz sampling) Global Navigation Satellite System (GNSS) observations as a broadband sensor that can directly measure displacement without saturation (Larson et al. 2003; Ohta et al. 2006; Larson 2009). Onshore high-rate GNSS data enable the rapid estimation of earthquake magnitude and tsunamigenic potential within a short period (Blewitt et al. 2006, 2009). The 2011 Tohoku-Oki earthquake (Mw 9.0) and its associated tsunami reaffirmed the importance of accurate tsunami early warning systems (Ozaki 2011). This event was followed by numerous studies on rapidly determining earthquake size from real-time GNSS data (Crowell et al. 2012; Ohta et al. 2012, 2015; Melgar et al. 2012; Ohta 2016; Tanaka et al. 2019). Melgar et al. (2012) proposed a method called fastCMT (fast Centroid Moment Tensor) to determine the moment tensor and centroid location of earthquakes from real-time GNSS data. Crowell et al. (2012) investigated two methods for inverting heterogeneous coseismic slip distributions from real-time GNSS time series: one used pre-defined fault planes, such as the plate interface, while the other used the fastCMT solution of Melgar et al. (2012). Ohta et al. (2012, 2015) proposed an algorithm to detect and estimate permanent displacement due to coseismic slip from real-time kinematic GNSS (RTK-GNSS) time series. Applying this algorithm to the 2011 Tohoku-Oki earthquake, they showed that the estimated moment release reached Mw 8.7 within five minutes of the origin time, close to the actual moment magnitude (Mw 9.0).
Numerous other efforts have also shown the advantage of real-time GNSS data for the estimation of large events, including the studies of Hoechner et al. (2013), Colombelli et al. (2013), Melgar and Bock (2015), and Murray et al. (2019).

The Geospatial Information Authority of Japan (GSI) operates a continuous nationwide GNSS network, the GNSS Earth Observation Network System (GEONET), across Japan. Based on the lessons learned from the 2011 Tohoku-Oki earthquake, the GSI and Graduate School of Science, Tohoku University, have jointly developed a new crustal deformation analysis system using GEONET data for quasi real-time earthquake size estimation, known as the GEONET analysis system for rapid deformation monitoring (REGARD) (Kawamoto et al. 2016, 2017). This system consists of three major subsystems which perform the following functions: (1) real-time estimation of the displacement time series using real-time GNSS data, (2) automated extraction of displacement fields caused by a large event, and (3) automated coseismic fault model estimation via an approximated single rectangular fault model and coseismic slip distribution along the subducting plate interface.

A single rectangular fault model is a simplified representation of coseismic fault motion. It cannot capture complex rupture processes, such as inhomogeneous slip distributions, for very large earthquakes (approximately > Mw 8.0). If only the GEONET sites (site spacing: 20–25 km) are used, however, most Mw 7 class events can be explained by a single rectangular fault model as a first-order approximation. Although this model has the disadvantage of representing complex rupture processes of large earthquakes with a single rectangular fault patch (Kawamoto et al. 2017), it is well suited to modeling non-interplate earthquakes (~ Mw 7.5) whose geometry is difficult to assume in advance (e.g., the 2016 Kumamoto and 2019 Yamagata-Oki earthquakes). Such events have occurred frequently in the past 20 years in Japan [e.g., the 2004 Chuetsu earthquake (Mjma 6.6); 2007 Noto earthquake (Mjma 6.9); 2007 Chuetsu-Oki earthquake (Mjma 6.9); 2008 Iwate–Miyagi Nairiku earthquake (Mjma 7.2); 2016 Tottori-ken Chubu earthquake (Mjma 6.6)]. Thus, accurate and rapid estimation of a single rectangular fault model is highly important.

The estimation of a single rectangular fault model is a nonlinear problem, because all fault parameters must be estimated simultaneously. To solve it, the REGARD system adopts a maximum likelihood approach with a priori information (Matsu’ura and Hasegawa 1987) based on the tectonic background. For the 2016 Kumamoto earthquake (Mw 7.0), REGARD successfully estimated the single rectangular fault model automatically in real time (Kawamoto et al. 2016). However, a problem emerged during the estimation process. To estimate the single rectangular fault model, the REGARD system must provide initial values for the unknown parameters. For the 2016 Kumamoto earthquake, REGARD assumed a strike-slip fault with conjugate fault parameters as the initial model and required a long convergence time to obtain a reliable coseismic fault model (Kawamoto et al. 2016). Because the estimation strongly depends on the a priori information of the fault model, estimating the uncertainties of the coseismic fault model is difficult, yet such uncertainties are necessary for the initial response of decision makers, such as governmental agencies.

Recently, numerous geodetic studies have adopted Bayesian approaches (Ito et al. 2012; Dettmer et al. 2014; Minson et al. 2014a, b; Jiang and Simons 2016; Ohno and Ohta 2018), in part for uncertainty estimation. The uncertainties of the unknown parameters should reflect both observation and modeling errors and should be estimated without a priori constraints. To realize such Bayesian modeling, many previous studies adopted the Markov chain Monte Carlo (MCMC) method, which produces a correlated sample for estimating expectations with respect to a target distribution. However, most previous studies could not use MCMC in real time, because it is computationally expensive and requires hyperparameters to be set in advance via trial and error. In contrast, Minson et al. (2014a, b) investigated an inversion strategy capable of using real-time high-rate GNSS data to solve for a distributed slip model and fault geometry in real time. By adopting an analytical Bayesian approach, they solved the complex inversion problem (including calculating the uncertainties) in real time. Their method first determines the fault geometry (strike and dip of the fault plane) and then estimates the slip distribution; it is efficient, but it mainly focuses on the slip distribution and its uncertainties on the “most likely” fault geometry. Since moderate-sized non-interplate earthquakes frequently occur in Japan and some of them occur along blind faults (e.g., the 2008 Iwate–Miyagi Nairiku earthquake; Ohta et al. 2008), the geometry of earthquake faults should not be assumed in advance. Furthermore, the fault geometry and the other unknown parameters, such as the rake angle and slip amount, trade off against one another, as do their uncertainties.
Thus, accurate estimation of a single rectangular fault model, including the uncertainties of the fault geometry, is still required.

In this study, a new method was developed to rapidly predict the parameters of a coseismic single rectangular fault, along with the associated uncertainties, in real time using a full Bayesian approach. The algorithm was developed to replace the pre-existing fault model estimation function of REGARD. In the remainder of the paper, we first describe how to overcome the problems in the MCMC method for real-time estimations. Then, we present the application of the proposed method to past and simulated events. Finally, we discuss the utilization and significance of the obtained uncertainties.

Methods

We developed a new algorithm for the quantitative assessment of the estimated coseismic single rectangular fault model using a Bayesian approach in real time. Our algorithm uses MCMC to obtain probability density functions (PDFs) of the fault parameters. To reduce the calculation time and accelerate the automatic adjustment of the hyperparameters for real-time prediction, we adopted a multi-stage approach with a parallel tempering technique. We refer to this new algorithm as “Real-time automatic uncertainty estimation of the coseismic single rectangular fault model based on GNSS data” (hereafter, RUNE) and describe it in detail in this section.

Bayesian inversion

Let \( {\varvec{\theta}} \) be a model parameter vector containing the fault parameters of a single rectangular fault model (Okada 1992) and \({\varvec{d}}\) be a permanent displacement data vector with three components (horizontal and vertical) based on real-time GNSS observations and analysis. We define the latitude and longitude at the center of the fault and the depth at the top of the fault. Bayesian estimation of the unknown model parameters conditional on the observations is based on Bayes’ theorem:

$$p\left({\varvec{\theta}}|{\varvec{d}}\right)=\frac{p\left({\varvec{d}}|{\varvec{\theta}}\right)\, p\left({\varvec{\theta}}\right)}{p\left({\varvec{d}}\right)},$$
(1)

where \(p\left({\varvec{d}}\right)\) is the PDF of the observations, \(p\left({\varvec{\theta}}\right)\) is the prior PDF of the model parameters, \(p\left({\varvec{d}}|{\varvec{\theta}}\right)\) is the likelihood function, and \(p\left({\varvec{\theta}}|{\varvec{d}}\right)\) is the posterior PDF of the model parameters. Note that \(p\left({\varvec{d}}\right)\) is constant, because the observations are fixed values; hence, the posterior PDF is proportional to the product of the prior PDF and the likelihood function. The likelihood function measures the degree of fit between the observed data \({\varvec{d}}\) and the calculated data \(\widehat{{\varvec{d}}}\left({\varvec{\theta}}\right)\). The residuals are given by \({\varvec{r}}\left({\varvec{\theta}}\right)=\widehat{{\varvec{d}}}\left({\varvec{\theta}}\right)-{\varvec{d}}\). Given \(N\) stations, the dimension of \({\varvec{d}}\) is \(3N\). Assuming that the estimation error follows a normal distribution and that the standard deviations \({\sigma }_{i}\) \((i = \mathrm{East}\ (E), \mathrm{North}\ (N), \mathrm{up\text{-}down}\ (U))\) at each GNSS station are identical for the horizontal (EN) and vertical (U) components, the likelihood function is defined as follows:

$$\begin{array}{c}p\left({\varvec{d}}|{\varvec{\theta}}\right)={L}_{EN}\left({\varvec{\theta}},\boldsymbol{ }{\sigma }_{EN}\right) {L}_{U}\left({\varvec{\theta}},{\sigma }_{U}\right)=\frac{1}{\sqrt{{\left(2\pi {{\sigma }_{EN}}^{2}\right)}^{2N}}}\text{exp}\left(-\frac{1}{2{{\sigma }_{EN}}^{2}}{{{\varvec{r}}}_{EN}}^{\mathrm{T}}{{\varvec{r}}}_{EN}\right)\frac{1}{\sqrt{{\left(2\pi {{\sigma }_{U}}^{2}\right)}^{N}}}\text{exp}\left(-\frac{1}{2{{\sigma }_{U}}^{2}}{{{\varvec{r}}}_{U}}^{\mathrm{T}}{{\varvec{r}}}_{U}\right),\end{array}$$
(2)

where \({L}_{i}\left({\varvec{\theta}},{\sigma }_{i}\right)\) denotes the likelihood function separated into horizontal and vertical components, and \(\mathrm{T}\) denotes transposition. The prior PDF \(p\left({\varvec{\theta}}\right)\) represents a priori information about the model parameters; we assume a uniform distribution over each search range. Bayes’ theorem (Eq. (1)) states that the prior PDF is updated to the posterior PDF by incorporating the observational data through the likelihood function. By sampling the posterior PDF with an MCMC sampler, the fault parameter estimates are obtained as PDFs.
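As a concrete sketch of Eq. (2), the likelihood can be evaluated in log form as follows (a minimal Python/NumPy illustration; the function name and array layout are ours, not part of RUNE):

```python
import numpy as np

def log_likelihood(r_en, r_u, sigma_en, sigma_u):
    """Log of Eq. (2): independent Gaussian likelihoods for the
    horizontal (EN; 2N residuals) and vertical (U; N residuals) parts."""
    m_en = r_en.size  # 2N horizontal residual components
    m_u = r_u.size    # N vertical residual components
    ll_en = -0.5 * m_en * np.log(2.0 * np.pi * sigma_en**2) \
            - 0.5 * (r_en @ r_en) / sigma_en**2
    ll_u = -0.5 * m_u * np.log(2.0 * np.pi * sigma_u**2) \
           - 0.5 * (r_u @ r_u) / sigma_u**2
    return ll_en + ll_u
```

Working in log space avoids the numerical underflow of the exponential form when \(N\) is large.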

The basic Metropolis–Hastings (M–H) method (Metropolis et al. 1953; Hastings 1970) was adopted for sampling in this study. Specifically, sampling is performed by repeating the following steps:

  • Step 1: Generate a transition candidate, \({{\varvec{\theta}}}^{\mathrm{new}}\)(= reference \({\varvec{\theta}}\) + perturbation \(\Delta{\varvec{\theta}}\)).

  • Step 2: Calculate \(\widehat{{\varvec{d}}}\left({\varvec{\theta}}\right)\) and \(\widehat{{\varvec{d}}}\left({{\varvec{\theta}}}^{\mathrm{new}}\right)\) using the method of Okada (1992) and calculate the acceptance probability \(\alpha \).

  • Step 3: Update the reference \({\varvec{\theta}}\) to \({{\varvec{\theta}}}^{\mathrm{new}}\) with probability \(\alpha \); otherwise, retain the old \({\varvec{\theta}}\).

We assume that perturbation \(\Delta{\varvec{\theta}}\) is a uniform random number with \(\pm\Delta {{\varvec{\theta}}}^{\mathrm{max}}/2\) as the maximum step size. In the M–H method, the acceptance probability \(\alpha \) is defined as follows:

$$\begin{array}{c}\alpha =min\left(1, \frac{p\left({\varvec{d}}|{{\varvec{\theta}}}^{\mathrm{new}}\right) p\left({{\varvec{\theta}}}^{\mathrm{new}}\right)}{p\left({\varvec{d}}|{\varvec{\theta}}\right) p\left({\varvec{\theta}}\right)}\right).\end{array}$$
(3)
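The M–H update above (Steps 1–3 with the acceptance probability of Eq. (3)) can be sketched as follows (illustrative Python; `log_post` stands for the log of the unnormalized posterior, i.e., likelihood times prior, and the uniform perturbation of \(\pm\Delta\theta^{\mathrm{max}}/2\) follows the description above):

```python
import numpy as np

def mh_step(theta, log_post, step_max, rng):
    """One Metropolis-Hastings update (Steps 1-3). The perturbation is
    uniform within +/- step_max / 2; the candidate is accepted with the
    probability alpha of Eq. (3), evaluated in log form."""
    theta_new = theta + rng.uniform(-0.5, 0.5, size=theta.size) * step_max
    log_alpha = log_post(theta_new) - log_post(theta)  # log of the Eq. (3) ratio
    if np.log(rng.uniform()) < log_alpha:
        return theta_new, True   # transition accepted
    return theta, False          # candidate rejected; keep the old theta
```

Iterating `mh_step` with, for example, a standard normal `log_post` reproduces samples from that target distribution.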

In addition to the fault parameters, the stress drop and the variance reduction (VR), defined by Eqs. (4) and (5), were also calculated as supporting information:

$$ \Delta \sigma = 2c\mu D/\sqrt{A},$$
(4)
$$ {\text{VR }} = 100\left( {1 - {\varvec{r}}^{{\text{T}}} {\varvec{r}} / {\varvec{d}}^{{\text{T}}} {\varvec{d}}} \right), $$
(5)

where \(c\), \(\mu \), \(D\), and \(A\) denote the geometrical factor, rigidity, slip amount, and fault area, respectively. Here, we assume \(c = 0.5\) and \(\mu = 30\) GPa. The VR is an index of the degree of agreement between the observed and calculated values; thus, a high VR value indicates a high likelihood.
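Both quantities are straightforward to compute from Eqs. (4) and (5); a minimal sketch (function names ours; SI units assumed):

```python
import numpy as np

def stress_drop(slip, length, width, c=0.5, mu=30.0e9):
    """Eq. (4): delta-sigma = 2 c mu D / sqrt(A), with fault area
    A = length * width. Inputs in metres; result in Pa."""
    return 2.0 * c * mu * slip / np.sqrt(length * width)

def variance_reduction(d_obs, d_calc):
    """Eq. (5): VR = 100 (1 - r^T r / d^T d), in percent."""
    r = d_calc - d_obs
    return 100.0 * (1.0 - (r @ r) / (d_obs @ d_obs))
```

With the assumed \(c = 0.5\) and \(\mu = 30\) GPa, a 30 km × 15 km fault with 1 m of slip gives \(\Delta\sigma \approx 1.4\) MPa, and a perfect fit gives VR = 100%.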

Parallel tempering

A single Markov chain requires a significantly long mixing time to obtain a reliable posterior PDF. To improve the sampling efficiency and achieve real-time prediction, we introduced a parallel tempering technique (Geyer 1991; Jasra et al. 2007). Let \({\left\{{p}_{j}\left({\varvec{d}}|{\varvec{\theta}}\right)\right\}}_{j=\mathrm{1,2},\dots ,n}\) be a family of PDFs defined as follows:

$$\begin{array}{c}{p}_{j}\left({\varvec{d}}|{\varvec{\theta}}\right)={p\left({\varvec{d}}|{{\varvec{\theta}}}_{j}\right)}^{1/{T}_{j}},\end{array}$$
(6)

where subscript \(j\) is the chain number and \({T}_{j}\) (\(j=1, 2, \dots , n\)) are temperatures satisfying \(1={T}_{1}<{T}_{2}<\dots <{T}_{n}\). In this study, we used eight parallel chains (\(n = 8\)). Note that \({T}_{1}\) corresponds to the target PDF, and the functional form of the PDF becomes smoother as the chain number (temperature) increases. The parallel tempering technique involves the following steps:

Step 1: Obtain the next sample \({{\varvec{\theta}}}_{j}\) in each chain using the M–H method described in "Bayesian inversion" Section,

Step 2: Randomly select four temperature values without overlap (\({T}_{j} \;\mathrm{and}\; {T}_{{j}^{^{\prime}}};{T}_{k}\; \mathrm{and} \;{T}_{{k}^{^{\prime}}}\)),

Step 3: Exchange the current samples between \({T}_{j}\) and \({T}_{{j}^{^{\prime}}}\) with probability \(\beta \), and repeat this step for \({T}_{k}\) and \({T}_{{k}^{^{\prime}}}\). Typically, one pair is exchanged per step; here, two pairs are exchanged per step to promote exchange in real time. The exchange probability \(\beta \) between \({T}_{j}\) and \({T}_{{j}^{^{\prime}}}\) is defined as follows:

$$\begin{array}{c}\beta =min\left(1, \frac{{p\left({\varvec{d}}|{{\varvec{\theta}}}_{{j}^{^{\prime}}}\right)}^{1/{T}_{j}}{p\left({\varvec{d}}|{{\varvec{\theta}}}_{j}\right)}^{1/{T}_{{j}^{^{\prime}}}}}{{p\left({\varvec{d}}|{{\varvec{\theta}}}_{j}\right)}^{1/{T}_{j}}{p\left({\varvec{d}}|{{\varvec{\theta}}}_{{j}^{^{\prime}}}\right)}^{1/{T}_{{j}^{^{\prime}}}}} \right).\\\end{array}$$
(7)

Each temperature was set using a geometric progression (\({T}_{j}={100}^{(j-1)/7}\)), based on the idea that \({\sigma }_{i}\) is effectively multiplied by 10 at the maximum temperature.
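The temperature ladder and the exchange rule of Eq. (7) can be sketched as follows (illustrative Python; `ll_j` and `ll_jp` denote the untempered log-likelihoods of the two chains to be swapped):

```python
import numpy as np

n_chains = 8
# Geometric ladder T_j = 100**((j-1)/7): T_1 = 1 (target chain), T_8 = 100,
# so sigma is effectively multiplied by 10 in the hottest chain.
temps = 100.0 ** (np.arange(n_chains) / (n_chains - 1))

def log_swap_prob(ll_j, ll_jp, t_j, t_jp):
    """Log of the exchange probability beta in Eq. (7) for swapping the
    current samples of the chains at temperatures t_j and t_jp."""
    return min(0.0, (ll_jp - ll_j) / t_j + (ll_j - ll_jp) / t_jp)
```

Moving a higher-likelihood sample toward a colder chain is always accepted (\(\beta = 1\)), which is how the hot, smoothed chains feed good candidates to the target chain \(T_1\).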

Sampling flow for real-time use

To use the method described in the "Parallel tempering" section, several hyperparameters must be set in advance: the initial model sample (\({{\varvec{\theta}}}^{\mathrm{init}}\)), the standard deviations of the likelihood functions (\({\sigma }_{EN}, {\sigma }_{U}\); Eq. (2)), the maximum step size (\(\Delta {{\varvec{\theta}}}^{\mathrm{max}}\)), and the number of Markov chains. The appropriate values of these hyperparameters strongly depend on the size of the earthquake, the coverage of the GNSS stations, and the distance from the fault to the GNSS stations. Thus, automatic adjustment of the hyperparameters is essential for real-time prediction. For this purpose, an "automatic hyperparameter setting phase" was introduced before the main sampling phase. Figure 1 shows an overview of the sampling flow, including the automatic hyperparameter setting phase. The details of each phase and its preparations are as follows.

Fig. 1
figure 1

Sampling flow of the new method, RUNE. NP denotes the "nodal plane" of the fault. Calculation time denotes the approximate calculation time at the end of each stage using a typical Linux server

Initial parameter settings

In Japan, earthquake early warnings (EEWs) are issued by the Japan Meteorological Agency (JMA) based on short-period seismometer data covering the entire Japanese territory. Thus, the EEW was adopted as a priori information for the expected coseismic fault model, which serves as the initial value of the MCMC approach. Table 1 lists the settings of the initial values and search ranges for the fault parameters. For the fault area and slip amount of the expected coseismic fault model, we used the scaling law proposed by Utsu (2001). As a priori information on the location of the fault, we set the prior PDFs of the latitude, longitude, and depth as normal distributions centered on the EEW information, with pre-determined variances that depend on the fault length and width. No information on the fault mechanism is available at the time of the EEW; therefore, plausible fault geometries (strike, dip, and rake angles) must be assumed. This information was derived and simplified from a tectonic stress pattern database at depths of 10 and 40 km (Terakawa and Matsu’ura 2009), whose data are identical to those used in the current REGARD system (Kawamoto et al. 2017). Furthermore, for a more efficient search of the unknown parameters, only samples with stress drops in the range of 0.2–21.2 MPa were accepted, based on the empirical rule of Kanamori and Anderson (1975). These lower and upper limits were determined from the stress drop calculated with Eq. (4) using \([{LEN}_{M+1}, {WID}_{M+1},{SLIP}_{M-1}]\) and \([{LEN}_{M-1}, {WID}_{M-1}, {SLIP}_{M+1}]\), respectively. Here, \({LEN}_{M}\), \({WID}_{M}\), and \({SLIP}_{M}\) denote the fault length, width, and slip amount given by the scaling law of Utsu (2001), which translates a magnitude M into the dimensions of a single rectangular fault.
\({LEN}_{M+1}\) and \({LEN}_{M-1}\) denote the fault lengths expected from the scaling law for magnitudes one unit larger and one unit smaller than the EEW estimate, respectively. Therefore, \([{LEN}_{M+1}, {WID}_{M+1},{SLIP}_{M-1}]\) and \([{LEN}_{M-1}, {WID}_{M-1}, {SLIP}_{M+1}]\) correspond to the smallest and largest stress drop values, respectively. The lower and upper limits of the stress drop do not depend on M, because the scaling law of Utsu (2001) adopts a uniform stress drop. In addition to the constraint on the stress drop, only samples satisfying "fault length > fault width" were accepted, based on the empirical relationship between fault length and width (e.g., Wells and Coppersmith 1994).
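The construction of the stress-drop limits can be sketched as follows. Note that `scaling_law` below is a placeholder with purely illustrative coefficients (not Utsu's actual values), chosen only to mimic a uniform-stress-drop scaling:

```python
import numpy as np

def scaling_law(m):
    """Placeholder for the Utsu (2001) scaling law: magnitude m ->
    (fault length, width, slip), all in metres. The coefficients are
    illustrative only and merely assume a uniform stress drop."""
    length = 10.0 ** (0.5 * m - 1.9) * 1.0e3  # hypothetical, km -> m
    width = 0.5 * length                       # assumes length = 2 x width
    slip = 10.0 ** (0.5 * m - 3.2)             # hypothetical
    return length, width, slip

def stress_drop_limits(m_eew, c=0.5, mu=30.0e9):
    """Lower/upper stress-drop limits (Pa) via Eq. (4): the lower limit
    pairs [LEN_{M+1}, WID_{M+1}] with SLIP_{M-1}; the upper limit pairs
    [LEN_{M-1}, WID_{M-1}] with SLIP_{M+1}."""
    len_p, wid_p, slip_p = scaling_law(m_eew + 1.0)
    len_m, wid_m, slip_m = scaling_law(m_eew - 1.0)
    lower = 2.0 * c * mu * slip_m / np.sqrt(len_p * wid_p)
    upper = 2.0 * c * mu * slip_p / np.sqrt(len_m * wid_m)
    return lower, upper
```

Because a uniform-stress-drop scaling law makes the slip and the fault dimensions grow at the same rate with M, the resulting limits are independent of M, as noted in the text.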

Table 1 Initial parameter settings for the analysis. Maximum step sizes are adjusted from these values in the Burn-in phase

Stage 1: Auto setting phase

In the initial parameter settings, we assumed appropriate initial values of the coseismic fault model using the EEW issued by the JMA together with empirical information. In stage 1, we attempt to automatically determine appropriate values of \({\sigma }_{EN}\), \({\sigma }_{U}\), and \(\Delta {{\varvec{\theta}}}^{\mathrm{max}}\). Under the assumption in Eq. (2), the likelihood function can be simplified to derive the maximum likelihood estimates \({\widehat{\sigma }}_{i}\ \left(i=EN,U\right)\) by setting \(\partial {L}_{i}/\partial {\sigma }_{i}=0\) and solving for \({\sigma }_{i}\). After substituting \({\widehat{\sigma }}_{EN}=\sqrt{{{{\varvec{r}}}_{EN}}^{\mathrm{T}}{{\varvec{r}}}_{EN}/2N}\) and \({\widehat{\sigma }}_{U}=\sqrt{{{{\varvec{r}}}_{U}}^{\mathrm{T}}{{\varvec{r}}}_{U}/N}\) into Eq. (2), the likelihood function can be written as follows (Dosso and Wilmut 2006; Dettmer et al. 2014):

$$\begin{array}{c}p\left({\varvec{d}}|{\varvec{\theta}}\right)={L}_{EN}\left({\varvec{\theta}}\right) {L}_{U}\left({\varvec{\theta}}\right)\propto {\text{exp}}\left(-\frac{2N}{2}{\mathrm{log}}_{\mathrm{e}}\left({{{\varvec{r}}}_{EN}}^{\mathrm{T}}{{\varvec{r}}}_{EN}\right)\right){\text{exp}}\left(-\frac{N}{2}{\mathrm{log}}_{\mathrm{e}}\left({{{\varvec{r}}}_{U}}^{\mathrm{T}}{{\varvec{r}}}_{U}\right)\right). \end{array}$$
(8)

In the auto setting phase, Eq. (8) is used as the likelihood function for sampling, while \({\widehat{\sigma }}_{i}\) is calculated. This procedure is equivalent to assuming that the estimation error equals the residual mean square, allowing an implicit value of \({\sigma }_{i}\) to be determined in real time. So that the settings do not depend only on the EEW and the temperature of each chain, the \(\Delta {{\varvec{\theta}}}^{\mathrm{max}}\) values of all eight chains were automatically adjusted to keep the acceptance rate between roughly 30 and 40% (Roberts and Rosenthal 2001). If the acceptance rate falls below 30%, \(\Delta {{\varvec{\theta}}}^{\mathrm{max}}\) is decreased by 10%; if it exceeds 45%, \(\Delta {{\varvec{\theta}}}^{\mathrm{max}}\) is increased by 5%. This adjustment is applied every 1000 steps. Note that the ratio of \(\Delta {{\varvec{\theta}}}^{\mathrm{max}}\) between the fault parameters is fixed by the initial setting, because \(\Delta {{\varvec{\theta}}}^{\mathrm{max}}\) is adjusted by multiplying the whole parameter vector by a common factor. To determine the end of the auto setting phase, the following rule was introduced: when the median of the VR samples in a batch of \(1\times {10}^{4}\) steps (i.e., "1 batch" in this study) exceeds 90%, or after 10 batches have been sampled, the auto setting phase is terminated and the next stage begins. We refer to the last batch of stage 1 in \({T}_{1}\) as the "seed batch" for the next stage (Fig. 1).
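The implicit-σ likelihood of Eq. (8) and the step-size adjustment rule can be sketched as follows (illustrative Python; function names ours):

```python
import numpy as np

def log_likelihood_implicit(r_en, r_u):
    """Log of Eq. (8), up to an additive constant: sigma_EN and sigma_U
    are maximized out analytically, so only the residuals are needed."""
    n = r_u.size  # number of stations N; r_en holds the 2N horizontal residuals
    return -n * np.log(r_en @ r_en) - 0.5 * n * np.log(r_u @ r_u)

def sigma_hat(r_en, r_u):
    """Maximum-likelihood sigma estimates substituted into Eq. (2)."""
    n = r_u.size
    return np.sqrt((r_en @ r_en) / (2.0 * n)), np.sqrt((r_u @ r_u) / n)

def adjust_step(step_max, acceptance_rate):
    """Step-size update applied every 1000 steps: shrink by 10% when the
    acceptance rate is below 30%, grow by 5% when it exceeds 45%."""
    if acceptance_rate < 0.30:
        return 0.90 * step_max
    if acceptance_rate > 0.45:
        return 1.05 * step_max
    return step_max
```

In `adjust_step`, a single scalar factor rescales the whole step vector, which is why the ratio between the per-parameter step sizes stays fixed at its initial value.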

Stage 2: Main sampling phase

In stage 2, stable sampling of the posterior PDF was performed based on the information from stage 1. To obtain a stable sampling result, \({\sigma }_{i}\ \left(i=EN,U\right)\) was fixed to the sample median of the seed batch, \({\widehat{\sigma }}_{i}^{\mathrm{medi}}\). The median and mode values of \({\varvec{\theta}}\) in the seed batch (\({{\varvec{\theta}}}^{\mathrm{medi}}\) and \({{\varvec{\theta}}}^{\mathrm{mode}}\)) were assigned to the eight chains as the initial fault parameters in stage 2 (Fig. 1). The optimization of \(\Delta {{\varvec{\theta}}}^{\mathrm{max}}\) continued from the results of stage 1 until the end of the first batch in stage 2, after which \(\Delta {{\varvec{\theta}}}^{\mathrm{max}}\) was fixed for the following batches. In general, because the initial part of a chain strongly depends on the initial values of the unknown parameters, these samples are discarded (i.e., "burn-in"). In our algorithm, we treated everything from stage 1 through the end of the first batch of stage 2 as burn-in. Sampling was terminated when 100 batches (\(1\times {10}^{6}\) steps) were obtained; thus, the 99 remaining batches of this stage were used to render the posterior PDFs (Fig. 1).

Computation time

Typically, MCMC consumes large computational resources as the number of samples increases; thus, it is not generally suitable for real-time analysis. In this study, all programs were coded in Fortran90, memory access was accelerated by using single precision where full precision was unnecessary, and the code was parallelized using OpenMP (open multiprocessing) on multiple CPUs. For parallel computing, the eight parallel chains were assigned one to each of eight threads, and non-interfering loop calculations were also parallelized. The computation time depends on the computer performance, the number of threads, the number of tempered chains, the number of MCMC samples, and the number of observation sites.

The computation time required for RUNE is approximately 30 s when a Xeon Gold 6126 (2.6 GHz) processor is used with eight parallel chains and \(1.1\times {10}^{6}\) samples (i.e., the maximum number of samples in stages 1 and 2). Because the number of observation sites used in this study was 100 or 200, with 8 threads the calculation was completed in less than 30 s: 15 s for 100 stations and 18 s for 200 stations (Additional file 1: Figure S1).

Results

To evaluate the performance of the proposed algorithm, we applied it to actual RTK-GNSS displacement data recorded during past large non-interplate earthquakes, namely the 2016 Kumamoto (Mw 7.0) and 2019 Yamagata-Oki (Mw 6.4) earthquakes. For these events, we used the actual RTK-GNSS displacement data estimated by the REGARD system. We also conducted 60 numerical simulations of Mw 6.7–7.8 earthquakes occurring in the Sea of Japan.

2016 Kumamoto earthquake (Mw 7.0)

The 2016 Kumamoto earthquake (Mw 7.0) occurred at 16:25:05 UTC on April 15, 2016. REGARD operated in real time during the earthquake and automatically estimated a single rectangular fault model. The estimated fault model was consistent with post-earthquake results based on interferometric synthetic aperture radar (InSAR) and GNSS data. Kawamoto et al. (2016), however, identified a problem in the inversion component based on this event. The current REGARD system adopts maximum likelihood estimation for the single rectangular fault model. Because estimating the model, including the fault geometry, is a nonlinear problem, the REGARD system must assume initial values for the unknown parameters, and the obtained result depends on these initial values. For the 2016 Kumamoto earthquake, REGARD assumed a nodal plane opposite to the actual orientation based on the focal mechanism database; due to this erroneous assumption, a long convergence time was required. By applying RUNE to this large inland event, we evaluated whether the magnitude and the fault location and geometry, which were difficult for REGARD to determine, could be estimated rapidly. We used 200 stations near the epicenter for the analysis. We estimated the displacement at each observation point at 5 s intervals from the actual 1 Hz RTK-GNSS time series (the difference between the 1 min average before the event and the average over a 20 s window after the travel time), and applied RUNE independently to each displacement estimate. This repeated displacement estimation is similar to the procedure of REGARD, and it is important for large earthquakes whose displacements evolve over time. Although the applications presented in this paper are relatively small events whose displacements do not evolve over time, we used the repeated estimation for comparison with REGARD.

Figure 2a, b, d shows the results for 60 and 290 s after the origin time (here, "time" indicates the time of the displacement data, not the time at which the result was obtained) (all marginal PDFs at 60 s: Additional file 1: Figure S2). Figure 2c compares the time histories of the moment magnitude estimated from three different sources. Our algorithm estimated a northwest-dipping strike-slip fault along the Futagawa fault 60 s after the origin time, consistent with the post-processed fault model (Yarai et al. 2016). In contrast, REGARD yielded a conjugate, northeast-dipping fault (Fig. 2b) because of its incorrect initial value assumption, and this situation persisted until 290 s after the origin time (Fig. 2c, d). After 290 s, REGARD finally yielded a reasonable fault model similar to the RUNE results. In the 2016 Kumamoto earthquake, the source time function was shorter than 30 s (Kubo et al. 2016), and the passage of the large seismic waves around the epicenter ended within 60 s; the overall permanent displacement pattern did not change after 60 s from the origin time. Thus, the conjugate fault estimated by REGARD likely reflects a strong dependence on the initial values. These results demonstrate an advantage of the developed algorithm, namely a weaker initial value dependency than maximum likelihood estimation for nonlinear problems.

Fig. 2
figure 2

Results of the 2016 Kumamoto earthquake. a Posterior PDFs at 60 s after the origin time. The ranges on the horizontal axis reflect the maximum search ranges. Inserted values in the upper right-hand corner denote the mean, median, and mode in the top, middle, and bottom positions, respectively. b, d Estimated coseismic fault model (mode value) at 60 and 290 s after the origin time, respectively. The black and white vectors denote the observations and calculations based on the coseismic fault model, respectively. The colors indicate the residual value between the calculation and observation of the vertical component. Focal mechanisms, in gray and black, represent the initial value for the analysis and the CMT solution issued by the JMA, respectively. The black rectangular fault model represents the coseismic fault model obtained by REGARD in real time. c Estimated time series of Mw at the 95% confidence interval. “EEW” shows the values from the Earthquake Early Warning systems issued by the JMA. The “maximum likelihood” is based on results using the actual REGARD record (Kawamoto et al. 2016)

2019 Yamagata-Oki earthquake (Mw 6.4)

The 2019 Yamagata-Oki earthquake (Mw 6.4) occurred at 14:22:20 UTC on June 18, 2019. This intraplate earthquake occurred off Yamagata Prefecture along the eastern margin of the Sea of Japan. The focal mechanism proposed by the JMA suggests reverse fault motion with a northeast–southwest extension axis (Fig. 3b). This earthquake triggered a tsunami observed by tide gauges on the coast of the Sea of Japan (~ 100 mm; Earthquake Research Committee 2019). The REGARD system automatically estimated the permanent displacement and a southeast-dipping fault model (Fig. 3b). However, the width of the estimated fault was small (1.8 km), and the ratio of fault length to width was quite large. Because the estimation result strongly depends on the initial values, it is difficult to determine how accurate this model is. By applying RUNE to this event, we assessed whether our approach could quantify the estimation uncertainties of the fault parameters. Considering the low signal-to-noise (S/N) ratio, we used 50 stations near the epicenter from the same dataset used for the REGARD estimation.

Fig. 3
figure 3

Results for the 2019 Yamagata-Oki earthquake. a Posterior PDFs. The ranges on the horizontal axis reflect the maximum search ranges. Inserted values in the upper right-hand corner denote the mean, median, and mode in the top, middle, and bottom positions, respectively. b Estimated coseismic fault model. The black and white vectors denote the observations and calculations based on the coseismic fault model, respectively. The colors indicate the residual value between the calculation and observation of the vertical component. Focal mechanisms, in gray and black, represent the initial value for the analysis and the CMT solution issued by the JMA, respectively. The red rectangular fault model represents the estimated coseismic fault model (median value). The black rectangular fault model represents the coseismic fault model estimated by the REGARD system. c Marginal PDFs of the strike, rake, and dip angles. The colors indicate the normalized frequency. d Normalized frequency distribution of the estimated coseismic fault. The colors indicate the normalized frequency of the coseismic fault plane at each point

Figure 3 shows the estimated results (all marginal PDFs: Additional file 1: Figure S3). The posterior PDFs have multiple peaks in the strike, dip, and rake angles, and large tails in the length, width, and slip amount. The median values suggest a southeast-dipping reverse fault. Figure 3d shows the normalized fault-location frequency, which is the distribution of each sample’s fault plane normalized by the maximum frequency. The distribution is located close to the centroid moment tensor solution estimated by the JMA and has a complex shape with nodes in the northwest–southeast and northeast–southwest directions. Figure 3c shows the marginal PDFs between the dip, rake, and strike angles. The multiple peaks in each posterior PDF correspond to four clusters: a northwest-dipping reverse fault, a southeast-dipping reverse fault, and two strike-slip faults perpendicular to them. The same behavior was observed even when the number of samples was increased.
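The normalized fault-location frequency plotted in Fig. 3d can be sketched from posterior samples. The following is a minimal illustration with synthetic samples: in the paper the frequency counts how often each grid point is covered by a sampled fault plane, whereas here, for brevity, only fault-center positions are histogrammed; all names and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples of fault-center positions (lon, lat);
# in RUNE these would come from the stage-2 MCMC chain.
samples_lon = rng.normal(139.5, 0.05, 10_000)
samples_lat = rng.normal(38.6, 0.05, 10_000)

# 2-D histogram over a map grid, normalized by its maximum so the
# peak cell equals 1 (the "normalized frequency" of Fig. 3d).
hist, lon_edges, lat_edges = np.histogram2d(samples_lon, samples_lat, bins=50)
normalized_frequency = hist / hist.max()
```

Contouring `normalized_frequency` over the grid then gives a map-view picture of where the sampled fault planes concentrate, independent of any single point estimate.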

These results suggest that some fault parameters, such as the depth, dip, fault length, and width, were not well constrained for this event. Using RUNE, we showed that the observed data did not have sufficient resolution to reject any of the four models, although the southeast-dipping reverse fault was the most probable among them. Thus, by acquiring not only the variance reduction (VR) but also the uncertainties in real time, it is possible to discuss the reliability of the estimated results from a more diverse perspective.

Simulated intraplate earthquakes in the Sea of Japan (Mw 6.7–7.8)

In this section, the detectability of intraplate earthquakes was evaluated by numerical simulations using RUNE. The target area is the eastern margin of the Sea of Japan, where numerous non-interplate tsunamigenic earthquakes have occurred, such as the 1940 Shakotan-Oki (Mw 7.5), 1964 Niigata (Mw 7.5), 1983 Middle Sea of Japan (Mw 7.9), and 1993 Hokkaido Nansei-Oki (Mw 7.8) earthquakes (Tanioka et al. 1995). A single rectangular fault model can successfully model a non-interplate earthquake whose geometry is difficult to assume in advance.

To obtain the input coseismic fault models, we adopted 60 fault models proposed by the Japanese government (MLIT, CAO, and MEXT 2014: Additional file 1: Table S1, Figure S4a). Some of the 60 assumed scenarios are composed of multiple rectangular faults (the estimation was performed as a single rectangular fault). The range of the magnitude is Mw 6.7–7.8, larger than the 2019 Yamagata-Oki earthquake that occurred in the same area ("2019 Yamagata-Oki earthquake (Mw 6.4)" Section). The assumed fault models have the same reverse fault slip mechanism as the 2019 Yamagata-Oki earthquake. The input data of RUNE include the displacement data and EEW. We calculated the expected coseismic permanent displacement for 50 GEONET stations surrounding the focal area based on Okada (1992), adding the expected observation error via a Gaussian distribution with standard deviations of 2 cm and 5 cm for the horizontal and vertical components, respectively. For the EEW information, we used the average of the constituent fault centers as the latitude and longitude, a uniform 10 km as the depth, and the value closest to the input in 0.5-unit intervals as the magnitude. Then, we applied RUNE to the 60 cases independently.
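The construction of the synthetic inputs can be sketched as follows. The Okada (1992) forward calculation itself is omitted; the sketch covers only the noise addition and the snapping of the EEW magnitude to 0.5-unit steps. Function names and the use of `numpy` are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(42)

def add_observation_noise(de, dn, du, sigma_h=0.02, sigma_v=0.05):
    """Add Gaussian observation noise in metres: 2 cm (1-sigma) on the
    horizontal components, 5 cm on the vertical component."""
    de = np.asarray(de, dtype=float) + rng.normal(0.0, sigma_h, size=np.shape(de))
    dn = np.asarray(dn, dtype=float) + rng.normal(0.0, sigma_h, size=np.shape(dn))
    du = np.asarray(du, dtype=float) + rng.normal(0.0, sigma_v, size=np.shape(du))
    return de, dn, du

def eew_magnitude(mw_input):
    """Snap the scenario Mw to the nearest 0.5 step, as assumed for the
    synthetic EEW magnitude."""
    return round(mw_input * 2.0) / 2.0
```

For example, a scenario of Mw 7.34 would enter RUNE with an EEW magnitude of 7.5, while the 50-station displacement vectors carry the component-wise Gaussian noise.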

Figure 4 shows the estimated results for the 60 cases. For better visualization, Fig. 4a, b shows odd and even case IDs, respectively. The red rectangular faults and blue contours indicate the median and the normalized frequency distribution, respectively. In almost all cases, the estimations were located around the assumed fault models. For inputs composed of multiple rectangular faults, the coseismic fault was estimated offshore as a single rectangular fault rather than as the assumed faults (F24 in Fig. 4b). However, the normalized frequency distribution varied depending on the arrangement of the observation stations. The uncertainty in the spread of the fault plane was larger for faults located offshore.

Fig. 4
figure 4

Results for the simulated earthquakes. a Results of odd IDs. b Results of even IDs. The red rectangular fault model represents the estimated coseismic fault model (median). The blue contours indicate the normalized frequency distribution of the estimated coseismic fault. The contour of 0.9 is the outermost one and is indicated by a bold line

Figure 5 shows the estimated Mw as a box plot, whose horizontal axis is the distance to the nearest observation station. The larger the distance to the nearest observation station, the larger the estimated range of Mw. The green line in Fig. 5 shows the envelopes of Mw required to cause a horizontal displacement of 5 cm (~ the maximum displacement observed in the 2019 Yamagata-Oki earthquake) at the nearest observation station for six types of reverse fault mechanisms. In other words, these envelopes indicate that the larger the distance from the epicenter, the larger the Mw required to observe a horizontal displacement of 5 cm. Since the simulated noise is uniform across cases, a case closer to the envelope line implies a smaller S/N ratio. Therefore, the smaller the S/N ratio of the observation data, the larger the estimated range of Mw.

Fig. 5
figure 5

Relationship between the distance from the epicenter to the nearest observation station and estimated Mw. Blue dots indicate the assumed Mw. Each box plot indicates the result of each fault ID. The yellow box indicates the result of the 2019 Yamagata-Oki earthquake. The green line indicates the theoretical envelope of Mw required to cause 5 cm horizontal displacement at the nearest station. The insert indicates the positional relationship between the station and the fault. We used the scaling law of Utsu (2001) to assume the size and calculated theoretical displacement according to Okada (1992). The small step in the green line near zero on the horizontal axis corresponds to a change in the displacement pattern at the ground surface, because the assumed fault has a finite size according to the scaling law
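The envelope in Fig. 5 combines a magnitude–length scaling with the Okada (1992) dislocation model. The dislocation part is beyond a short sketch, but the scaling and moment bookkeeping can be illustrated as follows. The rigidity value and the scaling coefficients (log10 L = 0.5 Mw − 1.8, one commonly quoted form attributed to Utsu 2001) are assumptions and may differ from those used in the paper.

```python
import numpy as np

MU = 3.0e10  # crustal rigidity (Pa), a typical assumed value

def fault_length_km(mw):
    """Magnitude-length scaling, log10 L(km) = 0.5*Mw - 1.8 (one commonly
    quoted form; the exact coefficients in the paper may differ)."""
    return 10.0 ** (0.5 * mw - 1.8)

def moment_magnitude(area_m2, slip_m, mu=MU):
    """Mw from the seismic moment M0 = mu*A*D (Hanks-Kanamori convention)."""
    return (np.log10(mu * area_m2 * slip_m) - 9.1) / 1.5
```

Sweeping Mw, building a fault of the scaled size at each epicentral distance, and checking where the nearest-station horizontal displacement first reaches 5 cm reproduces the envelope construction described in the caption.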

Table 2 shows the \({\sigma }_{i} \left(i=EN,U\right)\) values of the likelihood function fixed in RUNE stage 2 in each case. The \({\sigma }_{i}\) values are close to the noise of the simulated observation data: the standard deviations of the Gaussian noise are 2.0 and 5.0 cm for the horizontal and vertical components, respectively. Since the same Green’s function (Okada 1992) was used to simulate the data and to estimate the fault models, there were no modeling errors. This similarity indicates that the dynamic adjustment of hyperparameters in RUNE worked well. Table 2 also shows the median value of VR in each case. Although some cases have high VR values exceeding 95%, the estimated faults may not always reproduce the assumed fault model. For example, the F01 result, whose VR is 97.72%, fails to recover the part of the fault plane extending to the north, where there is no station (Fig. 4a). This result suggests that a model with a higher VR is not always a more accurate model.
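For reference, the VR values in Table 2 follow the usual variance-reduction definition, which can be sketched as below (assumed here to match the definition used by REGARD/RUNE, with all displacement components stacked into one vector):

```python
import numpy as np

def variance_reduction(obs, calc):
    """VR (%) = (1 - sum((obs - calc)^2) / sum(obs^2)) * 100.
    A perfect fit gives 100 %; a zero model gives 0 %."""
    obs = np.asarray(obs, dtype=float)
    calc = np.asarray(calc, dtype=float)
    return (1.0 - np.sum((obs - calc) ** 2) / np.sum(obs ** 2)) * 100.0
```

Because VR measures only the misfit at the stations, a fault segment extending into an unmonitored area (as in F01) can be missed without lowering VR, which is exactly why VR alone cannot certify accuracy.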

Table 2 VR and standard deviation of likelihood function in 60 simulations. VR values indicate the mode values

Discussion

Practical use of the estimated uncertainties for real-time purposes

In "Results" Section, the proposed method was applied to both past and synthetic intraplate events. The method provides a reliable coseismic fault model with the uncertainty of each parameter in real time. In this section, we discuss how the obtained uncertainties of the fault model can be used for real-time purposes.

As mentioned in "Introduction" Section, the GSI and Tohoku University have developed the REGARD system for coseismic fault estimation in real time using GNSS data. One of the purposes of the REGARD system is to estimate an accurate moment magnitude, because GNSS provides unsaturated magnitude values for large events. This information is used by the JMA as supporting information when issuing tsunami warnings/advisories. The moment magnitude is the basic starting information to decide whether to issue a tsunami warning/advisory. Real-time GNSS data can provide a relatively accurate moment magnitude value despite the instability of the other parameters. The proposed method can also provide the PDF of the moment magnitude (Figs. 2, 3, Additional file 1: Figure S5). To avoid under/overestimation of the tsunami warning/advisory, information on the uncertainty of the moment magnitude is required.

Currently, the JMA only uses the moment magnitude information from the REGARD system, but GNSS can provide not only the moment magnitude value but also the finite fault model. However, onshore GNSS data cannot strictly distinguish between the fault slip amount and the fault area. Figure 6 shows the correlation between the slip amount and fault area for all four events, with lines indicating the stress drop values. All events show a clear trade-off between the slip amount and fault area, whereas the magnitude is relatively well constrained. For example, the moment magnitude was stably estimated for the 2019 Yamagata-Oki event despite a relatively low VR value compared to the other events. The 2016 Kumamoto earthquake was also characterized by a trade-off between the VR value and moment magnitude; the range, however, was smaller than that for the other events. For the 2016 Kumamoto earthquake, the station coverage was sufficient to estimate the fault model compared to the other, offshore events. Fault area information is necessary to estimate an accurate initial sea surface distribution (Ohta et al. 2012; Tsushima and Ohta 2014; Tsushima et al. 2014). For practical real-time monitoring, comparison with past large earthquakes in such figures to understand the trade-offs would be useful for validating the potential uncertainties in the fault area.

Fig. 6
figure 6

Marginal PDFs between the estimated fault area and slip amount, a without 60 simulations except for simulated F16 earthquake in the Sea of Japan and b with 60 simulations. The contours denote the normalized frequency of the marginal PDF between 0.1 and 0.9. The thick black line denotes the calculated moment magnitude. The thin dashed lines denote the stress drop values calculated using Eq. (4). Gray hatched areas denote the non-searchable areas constrained by the stress drop value
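The trade-off shown in Fig. 6 can be illustrated numerically: two fault models with the same seismic moment, and hence the same Mw, can have very different stress drops. The circular-crack constant used below is an assumption for illustration; Eq. (4) of the paper is not reproduced here and may use different constants.

```python
import numpy as np

MU = 3.0e10  # rigidity (Pa), assumed

def mw_from_area_slip(area_m2, slip_m, mu=MU):
    """Mw via M0 = mu*A*D."""
    return (np.log10(mu * area_m2 * slip_m) - 9.1) / 1.5

def stress_drop_pa(area_m2, slip_m, mu=MU):
    """Circular-crack approximation, dsigma = (7*pi^1.5/16)*mu*D/sqrt(A)."""
    return (7.0 * np.pi ** 1.5 / 16.0) * mu * slip_m / np.sqrt(area_m2)

# Two fault models with the same seismic moment (hence the same Mw):
mw_a = mw_from_area_slip(1.0e9, 2.0)   # 1000 km^2 area, 2.0 m slip
mw_b = mw_from_area_slip(4.0e9, 0.5)   # 4000 km^2 area, 0.5 m slip
# Their stress drops differ by a factor of 8, yet onshore displacement data
# alone may fit both comparably well - the trade-off seen in Fig. 6.
```

This is why the stress-drop lines in Fig. 6 act as a useful physical constraint: implausibly high or low stress drops delimit the searchable region even when the data cannot separate area from slip.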

As mentioned in "2019 Yamagata-Oki earthquake (Mw 6.4)" Section, the posterior PDFs of the fault parameters also enable us to discuss the reliability of the estimated results. The VR value is required for validating the estimation reliability. However, the results of the numerical simulations in "Simulated intraplate earthquakes in the Sea of Japan (Mw 6.7–7.8)" Section suggest that a high VR value does not always guarantee accuracy. Therefore, discussing the results together with their uncertainties provides evidence that a single point estimate, which depends on the initial value, and its VR alone cannot.

The results of REGARD, including the single rectangular fault model, were also adopted as sources for estimating tsunami inundation and the resulting damage. This system was developed based on the results of “The research group of real-time estimation for tsunami inundation” (Ohta et al. 2018). It has been adopted as part of the Disaster Information Systems (DIS) operated by Japan’s Cabinet Office (Musa et al. 2018; Ohta et al. 2018) and will provide information for the initial governmental response after tsunamigenic earthquakes. The quantitative uncertainties of the fault models are directly related to the uncertainties of the vertical seafloor deformation and the tsunami hazard (Additional file 1: Figure S6). The method developed in this study for the coseismic fault model therefore has the potential to enhance tsunami prediction (i.e., by adding the uncertainty information).

In this study, we focused only on the estimation of the single rectangular fault model for moderate-sized earthquakes (~ Mw 7.5). Understanding moderate to large non-interplate earthquakes, such as inland crustal and intraplate earthquakes, is still necessary for accurate real-time prediction. However, this simple model cannot explain more complex rupture processes, including inhomogeneous slip distributions. Future work should develop and implement a method to estimate the fault geometry and slip distribution simultaneously, as proposed by Minson et al. (2014a).

Origin of quantified uncertainties

In "Practical use of the estimated uncertainties for real-time purposes" Section, we discussed using the obtained uncertainties of the fault model with our method. The uncertainty of each parameter includes both the observation and model errors. The adjusted hyperparameters (\({\sigma }_{i} \left(i=EN,U\right)\) and \(\Delta {{\varvec{\theta}}}^{\mathrm{max}}\)), a priori information, and the number of stations may affect uncertainties.

First, we discuss the effect of the standard deviation of the likelihood function \({\sigma }_{i}\). To obtain the standard deviation in real time, we adopted Eq. (8), in which the \({\sigma }_{i}\) values strongly depend on the stability and size of the displacements. Thus, \({\sigma }_{i}\) cannot be identical to the typical RTK-GNSS observation error; instead, the difference between the analytical solution of Okada (1992) and the actual observation is used as \({\sigma }_{i}\) (Eq. (8)). Figure 5 shows the comparison between the estimated Mw of the 2019 Yamagata-Oki earthquake ("2019 Yamagata-Oki earthquake (Mw 6.4)" Section) and the results of the numerical simulations ("Simulated intraplate earthquakes in the Sea of Japan (Mw 6.7–7.8)" Section). The range of Mw for the 2019 Yamagata-Oki earthquake is larger, although the fixed \({\sigma }_{i}\) of the likelihood function is smaller ([EN, U] = [0.58 cm, 1.12 cm], Table 2), because the S/N ratio of that earthquake was smaller and/or there was non-Gaussian local noise at each station. Furthermore, kinematic time series after large earthquakes are affected by the dynamic rupture process and seismic wave passage. Adjusting \({\sigma }_{i}\) with Eq. (8) can avoid overfitting such data.
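Eq. (8) is not reproduced in this section, but the idea of a residual-based sigma can be sketched. The RMS form below is an assumption for illustration; the paper's definition may differ (e.g., per component or median-based).

```python
import numpy as np

def residual_sigma(obs, calc):
    """A residual-based likelihood sigma: the RMS misfit between the observed
    displacements and the Okada (1992) prediction. Illustrative only; the
    paper's Eq. (8) may define sigma_i differently."""
    r = np.asarray(obs, dtype=float) - np.asarray(calc, dtype=float)
    return float(np.sqrt(np.mean(r ** 2)))
```

Under this kind of definition, data contaminated by dynamic shaking or local noise inflate sigma, which widens the likelihood and prevents the sampler from overfitting those epochs.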

Second, we discuss the effect of the a priori information. RUNE has prior PDFs on the latitude, longitude, and depth, based on the EEW value ("Sampling flow for real-time use" Section). A normal distribution whose standard deviation depends on the magnitude of the earthquake is given for the epicenter (Table 1). This constraint exploits the reliability of the epicenter estimated from seismic waveform data when the earthquake is small (i.e., when the S/N ratio of GNSS is small or the spatial coverage of the GNSS sites is poor). In such an estimation, the GNSS data do not have the resolution to converge the estimation to the assumed fault position beyond the a priori position constraint. Figure 7 shows the estimated results for the simulated earthquake F16 in the Sea of Japan for different EEW positions. If the hypocenter location given by the EEW differs significantly from the actual one, the estimated fault location likewise deviates slightly from the actual one, although the VR value does not change significantly. Although MCMC has an advantage similar to that of a grid search, the MCMC search depends on the likelihood function and the constraints. Therefore, it is difficult to quantify such parameter variations from a single estimation or from one with a limited number of samples.

Fig. 7
figure 7

Results for the simulated earthquake F16 in different EEW positions. a, c, and e Posterior PDFs. The ranges on the horizontal axis reflect the maximum search ranges. Inserted values in the upper right-hand corner denote the mean, median, and mode in the top, middle, and bottom positions, respectively. Solid red and black dashed lines denote the given and initial fault parameters, respectively. b, d, and f Estimated coseismic fault model. The black and white vectors denote the observations and calculations based on the coseismic fault model, respectively. The colors indicate the residual value between the calculation and observation of the vertical component. Focal mechanisms, in gray, represent the initial values for the analysis. The red and black rectangular fault models represent the estimated (median value) and given coseismic fault models, respectively. The blue contours indicate the normalized frequency distribution of the estimated coseismic fault

In RUNE, the prior PDF was designed by applying it to the actual data of the 2019 Yamagata-Oki earthquake, which was a small earthquake. One way to improve the search efficiency is the parallel tempering approach, which is already implemented in RUNE. To further increase the search efficiency, the parallel chains can be started from different initial values (latitude, longitude), and the number of parallel chains can be increased. Furthermore, although EEW data were used as a priori information in this study, a CMT solution based on seismic data can be used without any change to the code. Moreover, although past estimation results with high VR were not applied to the time series data (the 2016 Kumamoto earthquake), they can be used as a priori information for stability.
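The parallel tempering exchange mentioned above follows the standard Metropolis swap rule, sketched below. RUNE's actual temperature ladder and swap schedule are not specified here; only the generic rule is shown.

```python
import math

def swap_accept_prob(loglike_cold, loglike_hot, beta_cold, beta_hot):
    """Metropolis acceptance probability for exchanging the states of two
    tempered chains with inverse temperatures beta_cold > beta_hot:
    min(1, exp((beta_cold - beta_hot) * (loglike_hot - loglike_cold)))."""
    log_ratio = (beta_cold - beta_hot) * (loglike_hot - loglike_cold)
    return math.exp(min(0.0, log_ratio))

# A hot chain (small beta) explores broadly; proposing swaps every few steps
# lets good states it finds migrate down to the beta = 1 chain, which is the
# chain that actually samples the target posterior.
```

Because a hot chain crosses between the conjugate-fault modes more easily, swaps help the cold chain visit all four clusters seen in the 2019 Yamagata-Oki posterior rather than being trapped in one.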

Next, we discuss the effect of the number of stations. REGARD consists of three subsystems; RUNE is intended to replace the fault model estimation subsystem, and the selection of the observation points to be used is not included in the current algorithm. However, owing to the small initial-value dependency of MCMC and the current policy of minimizing the a priori constraints of RUNE, the spatial arrangement of the used observation points affects the search behavior. In applications to cases where the S/N ratio is small, if observation points unnecessarily far from the fault position are included, the fault length may increase unrealistically to explain them (Fig. 4b and Table 2, fault ID: 48). In addition, when using Okada’s model (1992), the homogeneous and isotropic elastic half-space assumption becomes problematic for GNSS sites very far from the actual focal area. The relationship between the observation point selection and the search behavior requires a future systematic study.

We can also optimize RUNE for objectives other than those mentioned above, because the settings affect the estimation results. For example, we can fix the standard deviations of the likelihood function in stage 2 with an additional parameter, i.e., \({\sigma }_{i}=\mathrm{max}\left({\widehat{\sigma }}_{i}^{\mathrm{medi}},{\mathrm{Obs}.\mathrm{ err}}_{i}\right)\). This parameter depends on the observation error and can prevent overfitting. Depending on the problem, it may be possible to use this parameter to define the observation error effectively and properly. If the observation error is estimated on a much longer time scale (1 day to 1 year), our approach can also be applied to long-term post-seismic events. Furthermore, other a priori information can easily be utilized as prior PDFs (e.g., the CMT solution or the distribution of aftershocks). The algorithm itself can be applied to post hoc analysis without any changes to the code.
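The proposed floor can be written directly as code. The numerical values below are illustrative only; the 2 cm/5 cm nominal errors are the noise levels used in the simulations.

```python
def likelihood_sigma(sigma_median, obs_err):
    """Stage-2 sigma with an observation-error floor:
    sigma_i = max(sigma_i^medi, Obs.err_i), which prevents the likelihood
    from becoming tighter than the known observation noise (overfitting)."""
    return max(sigma_median, obs_err)

# Horizontal component: residual-based estimate below the 2 cm nominal
# error is floored at 0.02 m; a vertical estimate above 5 cm is kept as-is.
```

The floor matters mainly for high-S/N events, where residuals can shrink well below the instrument noise and would otherwise overweight the data.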

Finally, we discuss the contribution of the Green’s function. In this study, we adopted the conventional elastic half-space analytic solution (Okada 1992) as the Green’s function. The uncertainties of unknown parameters should contain both the observation and modeling errors. However, in this study, we chose this simple analytic solution as the Green’s function because of its relatively low computational costs compared with more realistic solutions, such as the layered spherical earth (Pollitz, 1996), three-dimensional finite element (FE) method (e.g., Kyriakopoulos et al. 2013), and elastic/viscoelastic FE method (e.g., Ichimura et al. 2016). Considering future improvements in computing ability and other factors, these approaches are promising for obtaining more reliable results in real time.

Conclusions

Uncertainties in earthquake models are significant indicators of model reliability. One way to estimate uncertainties in nonlinear problems is to use a full Bayesian inversion approach such as MCMC. However, MCMC is computationally expensive, and its hyperparameters (the initial model sample, the standard deviations of the likelihood function, and the maximum step size) must be set in advance via trial and error. To overcome these problems, a new method was developed to estimate the coseismic fault model in real time, together with the uncertainties of the obtained fault model, using a full Bayesian inversion approach. Efficient sampling techniques were adopted using a parallel tempering method and a multistage approach. The first stage was used only to automatically estimate the optimal hyperparameter values for the second stage. In the second stage, the algorithm focused on sampling for the posterior PDFs. Using a standard 1U server, the calculation time was less than 30 s for 1 × 106 samples, which is suitable for use with a real-time system such as REGARD.

The reliability of the method was evaluated using data from past earthquakes (the 2016 Kumamoto and 2019 Yamagata-Oki earthquakes) and 60 simulated earthquakes in the Sea of Japan. The results showed the advantage of the new method over the existing one regarding initial-value dependence in nonlinear problems. In applications to data with a small S/N ratio, the estimated results suggested the possibility of several conjugate fault models. There was a trade-off between the fault area and slip amount, especially for earthquakes that occurred offshore, which suggests a lack of sensitivity for offshore earthquakes when using only onshore GNSS data. These results imply that quantifying the uncertainty enables us to discuss the reliability of the fault model estimation in real time.

To apply the developed method to real earthquakes, discussion with decision makers is necessary, because the required information varies depending on current conditions. Awareness of the limitations of the estimated results is also required. Finally, the new method can be used as a tool to determine which fault parameters are not constrained based on the current network and how to optimize the new site for a more robust model estimation.