
Markov automata with multiple objectives

Published in: Formal Methods in System Design

Abstract

Markov automata combine probabilistic branching, exponentially distributed delays and nondeterminism. This compositional variant of continuous-time Markov decision processes is used in reliability engineering, performance evaluation and stochastic scheduling. Their verification has so far focused on single objectives such as (timed) reachability and expected costs. In practice, objectives are often mutually dependent, and the aim is to reveal trade-offs. We present algorithms to analyze several objectives simultaneously and to approximate Pareto curves. This includes, e.g., several (timed) reachability objectives, or various expected cost objectives. We also consider combinations thereof, such as on-time-within-budget objectives: which policies guarantee reaching a goal state within a deadline with at least probability p while keeping the allowed average costs below a threshold? We adopt existing approaches for classical Markov decision processes. The main challenge is to treat policies exploiting state residence times, even for untimed objectives. Experimental results show the feasibility and scalability of our approach.



Notes

  1. Multiple outgoing Markovian transitions can be reduced to a single Markovian transition by taking a weighted sum; an example is given after these notes.

  2. Our construction is roughly inspired by a construction in [10, Sect. 6], where schedulers for MDPs with stochastic memory-updates are considered. Lifting the approach of [10] to Markov automata is not obvious.

  3. In the figure, \(A^{-}\) partly overlaps \(A\), i.e., the green area also belongs to \(A\).

  4. An \(\eta \)-approximation of \(A\subseteq \mathbb {R}^d\) is given by \(A^{-}, A^{+}\subseteq \mathbb {R}^d\) with \(A^{-}\subseteq A\subseteq A^{+}\) and for all \(\mathbf{p }\in A^{+}\) there exists a \(\mathbf{q }\in A^{-}\) such that the distance between \(\mathbf{p }\) and \(\mathbf{q }\) is at most \(\eta \).

  5. We slightly extend the PRISM language in order to describe MAs.

  6. We considered PRISM 4.6 obtained from www.prismmodelchecker.org.

  7. We considered IMCA 1.6 obtained from https://github.com/buschko/imca.

  8. We adapt [30, Lemma G.2] to our notations from “Appendix C.4”.

  9. A buffer underrun occurs when the next package needs to be processed while the buffer is empty.
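
As an illustration of the reduction mentioned in Note 1, consider the following made-up example (not taken from the paper): a state \(s\) with two outgoing Markovian transitions of rate \(2\) to \(u\) and rate \(3\) to \(v\) behaves like a single Markovian transition with exit rate \({\text {E}(s)} = 2 + 3 = 5\) and branching probabilities

$$\begin{aligned} \mathbf{P} (s, \bot , u) = \frac{2}{5} \quad \text {and}\quad \mathbf{P} (s, \bot , v) = \frac{3}{5}. \end{aligned}$$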

References

  1. Ash RB, Doléans-Dade C (2000) Probability and measure theory. Academic Press, New York


  2. Ashok P, Butkova Y, Hermanns H, Kretínský J (2018) Continuous-time Markov decisions based on partial exploration. In: ATVA, LNCS, vol 11138. Springer, pp 317–334

  3. Ashok P, Chatterjee K, Daca P, Kretínský J, Meggendorfer T (2017) Value iteration for long-run average reward in Markov decision processes. In: CAV (1), LNCS, vol 10426. Springer, pp 201–221

  4. Baier C, Bertrand N, Dubslaff C, Gburek D, Sankur O (2018) Stochastic shortest paths and weight-bounded properties in Markov decision processes. In: LICS. ACM, pp 86–94

  5. Baier C, Dubslaff C, Klüppelholz S (2014) Trade-off analysis meets probabilistic model checking. In: CSL-LICS. ACM, pp 1:1–1:10

  6. Baier C, Klein J, Leuschner L, Parker D, Wunderlich S (2017) Ensuring the reliability of your model checker: interval iteration for Markov decision processes. In: CAV (1), LNCS, vol 10426. Springer, pp 160–180

  7. Basset N, Kwiatkowska MZ, Topcu U, Wiltsche C (2015) Strategy synthesis for stochastic games with multiple long-run objectives. In: Proceedings of TACAS, LNCS, vol 9035. Springer, pp 256–271

  8. Boudali H, Crouzen P, Stoelinga M (2010) A rigorous, compositional, and extensible framework for dynamic fault tree analysis. IEEE Trans Dependable Secur Comput 7(2):128–143


  9. Bozzano M, Cimatti A, Katoen JP, Nguyen VY, Noll T, Roveri M (2011) Safety, dependability and performance analysis of extended AADL models. Comput J 54(5):754–775


  10. Brázdil T, Brozek V, Chatterjee K, Forejt V, Kucera A (2014) Markov decision processes with multiple long-run average objectives. LMCS 10(1):1156


  11. Brázdil T, Chatterjee K, Forejt V, Kucera A (2017) Trading performance for stability in Markov decision processes. J Comput Syst Sci 84:144–170


  12. Bruno JL, Downey PJ, Frederickson GN (1981) Sequencing tasks with exponential service times to minimize the expected flow time or makespan. J ACM 28(1):100–113


  13. Bruyère V, Filiot E, Randour M, Raskin JF (2014) Meet your expectations with guarantees: beyond worst-case synthesis in quantitative games. In: Proceedings of STACS, LIPIcs, vol 25. Schloss Dagstuhl - Leibniz-Zentrum fuer Informatik, pp 199–213

  14. Butkova Y, Fox G (2019) Optimal time-bounded reachability analysis for concurrent systems. In: TACAS (2), LNCS, vol 11428. Springer, pp 191–208

  15. Butkova Y, Hatefi H, Hermanns H, Krcál J (2015) Optimal continuous time Markov decisions. In: ATVA, LNCS, vol 9364. Springer, pp 166–182

  16. Butkova Y, Wimmer R, Hermanns H (2017) Long-run rewards for Markov automata. In: Proceedings of TACAS, LNCS. Springer https://link.springer.com/chapter/10.1007%2F978-3-662-54580-5_11

  17. Chatterjee K, Henzinger M (2011) Faster and dynamic algorithms for maximal end-component decomposition and related graph problems in probabilistic verification. In: Proceedings of SODA, pp 1318–1336

  18. Chen T, Forejt V, Kwiatkowska MZ, Simaitis A, Wiltsche C (2013) On stochastic games with multiple objectives. In: Proceedings of MFCS, LNCS, vol 8087. Springer, pp 266–277

  19. Coste N, Hermanns H, Lantreibecq E, Serwe W (2009) Towards performance prediction of compositional models in industrial GALS designs. In: Proceedings of CAV, LNCS, vol 5643. Springer, pp 204–218

  20. David A, Jensen PG, Larsen KG, Legay A, Lime D, Sørensen MG, Taankvist JH (2014) On time with minimal expected cost! In: Proceedings of ATVA, LNCS, vol 8837. Springer, pp 129–145

  21. de Alfaro L (1997) Formal verification of probabilistic systems. Ph.D. thesis, Stanford University

  22. Dehnert C, Junges S, Katoen JP, Volk M (2017) A Storm is coming: a modern probabilistic model checker. In: Proceedings of CAV

  23. Delgrange F, Katoen J, Quatmann T, Randour M (2020) Simple strategies in multi-objective MDPs. In: TACAS (1), LNCS, vol 12078. Springer, pp 346–364

  24. Deng Y, Hennessy M (2013) On the semantics of Markov automata. Inf Comput 222:139–168


  25. Eisentraut C, Hermanns H, Katoen JP, Zhang L (2013) A semantics for every GSPN. In: Petri Nets, LNCS, vol 7927. Springer, pp 90–109

  26. Eisentraut C, Hermanns H, Zhang L (2010) On probabilistic automata in continuous time. In: Proceedings of LICS. IEEE CS, pp 342–351

  27. Etessami K, Kwiatkowska MZ, Vardi MY, Yannakakis M (2008) Multi-objective model checking of Markov decision processes. LMCS 4(4):1–21


  28. Forejt V, Kwiatkowska M, Parker D (2012) Pareto curves for probabilistic model checking. In: Proceedings of ATVA, LNCS, vol 7561. Springer, pp 317–332

  29. Forejt V, Kwiatkowska MZ, Norman G, Parker D, Qu H (2011) Quantitative multi-objective verification for probabilistic systems. In: Proceedings of TACAS, LNCS, vol 6605. Springer, pp 112–127

  30. Guck D, Hatefi H, Hermanns H, Katoen JP, Timmer M (2014) Analysis of timed and long-run objectives for Markov automata. LMCS 10(3):943


  31. Guck D, Timmer M, Hatefi H, Ruijters E, Stoelinga M (2014) Modelling and analysis of Markov reward automata. In: Proceedings of ATVA, LNCS, vol 8837. Springer, pp 168–184

  32. Haddad S, Monmege B (2014) Reachability in MDPs: refining convergence of value iteration. In: RP, LNCS, vol 8762. Springer, pp 125–137

  33. Hartmanns A, Junges S, Katoen J, Quatmann T (2020) Multi-cost bounded tradeoff analysis in MDP. J Autom Reason 64(7):1483–1522


  34. Hatefi H, Braitling B, Wimmer R, Fioriti LMF, Hermanns H, Becker B (2015) Cost vs. time in stochastic games and Markov automata. In: Proceedings of SETTA, LNCS, vol 9409. Springer, pp 19–34

  35. Hatefi H, Hermanns H (2012) Model checking algorithms for Markov automata. ECEASST, vol 53. http://journal.ub.tu-berlin.de/eceasst/article/view/783

  36. Hensel C, Junges S, Katoen J, Quatmann T, Volk M (2020) The probabilistic model checker Storm. CoRR abs/2002.07080

  37. Junges S, Jansen N, Dehnert C, Topcu U, Katoen JP (2016) Safety-constrained reinforcement learning for MDPs. In: Proceedings of TACAS, LNCS, vol 9636. Springer, pp 130–146

  38. Katoen JP, Wu H (2016) Probabilistic model checking for uncertain scenario-aware data flow. ACM Trans Embedded Comput Syst 22(1):15:1–15:27


  39. Kwiatkowska M, Norman G, Parker D (2011) PRISM 4.0: verification of probabilistic real-time systems. In: Proceedings of CAV, LNCS, vol 6806. Springer, pp 585–591

  40. Neuhäußer MR (2010) Model checking nondeterministic and randomly timed systems. Ph.D. thesis, RWTH Aachen University

  41. Neuhäußer MR, Stoelinga M, Katoen JP (2009) Delayed nondeterminism in continuous-time Markov decision processes. In: Proceedings of FOSSACS, LNCS, vol 5504. Springer, pp 364–379

  42. Pnueli A, Zuck L (1986) Verification of multiprocess probabilistic protocols. Distrib Comput 1(1):53–72


  43. Puterman ML (1994) Markov decision processes: discrete stochastic dynamic programming. Wiley, New York


  44. Quatmann T, Junges S, Katoen J (2017) Markov automata with multiple objectives. In: CAV (1), LNCS, vol 10426. Springer, pp 140–159

  45. Quatmann T, Junges S, Katoen J (2020) Markov automata with multiple objectives: supplemental material. Zenodo. https://doi.org/10.5281/zenodo.4298642


  46. Randour M, Raskin JF, Sankur O (2015) Variations on the stochastic shortest path problem. In: Proceedings of VMCAI, LNCS, vol 8931. Springer, pp 1–18

  47. Roijers DM, Vamplew P, Whiteson S, Dazeley R (2013) A survey of multi-objective sequential decision-making. J Artif Intell Res 48:67–113


  48. Srinivasan MM (1991) Nondeterministic polling systems. Manag Sci 37(6):667–681


  49. Teichteil-Königsbuch F (2012) Path-constrained Markov decision processes: bridging the gap between probabilistic model-checking and decision-theoretic planning. In: Proceedings of ECAI, frontiers in AI and applications, vol 242. IOS Press, pp 744–749

  50. Timmer M, Katoen JP, van de Pol J, Stoelinga M (2012) Efficient modelling and generation of Markov automata. In: Proceedings of CONCUR, LNCS, vol 7454. Springer, pp 364–379

  51. Volk M, Junges S, Katoen J (2018) Fast dynamic fault tree analysis by model checking techniques. IEEE Trans Ind Inform 14(1):370–379



Acknowledgements

We thank the anonymous reviewers for their detailed feedback on an earlier version of this draft. This work has been supported by the DFG RTG 2236 “UnRAVeL”. S. Junges was supported in part by NSF grants 1545126 (VeHICaL) and 1646208, by the DARPA Assured Autonomy program and the DARPA SDCPS Program (Contract FA8750-20-C-0156), by Berkeley Deep Drive, and by Toyota under the iCyPhy center.


Corresponding author

Correspondence to Sebastian Junges.


Appendices

A Proofs about sets of achievable points

Let \(\mathcal {M}= (S, Act , \rightarrow , {s_{0}}, (\rho _{1}, \!\dots \!, \rho _{\ell }) )\) be an MA and \(\sigma _1, \sigma _2 \in \text {GM}^{}\) be two schedulers for \(\mathcal {M}\). Further let \(w_1 \in [0,1]\) and \(w_2 = 1-w_1 \in [0,1]\). The proof of Proposition 1 considers the scheduler \(\sigma ^w \in \text {GM}^{}\), where for a path \(\pi = s_{0} \xrightarrow {\kappa _{0}} \dots {\xrightarrow {\kappa _{n-1}} s_{n}}\in { FPaths ^{}}\) and action \(\alpha \in Act \) we have

$$\begin{aligned} \sigma ^w(\pi , \alpha ) = \frac{\sum _{i=1}^2 \left( w_i \cdot \sigma _i(\pi , \alpha ) \cdot \prod _{j=0}^{n-1} \sigma _i( pref (\pi , j), \alpha (\kappa _{j})) \right) }{\sum _{i=1}^2 \left( w_i \cdot \prod _{j=0}^{n-1} \sigma _i( pref (\pi , j), \alpha (\kappa _{j})) \right) } \end{aligned}$$

We now show the following lemma.

Lemma 8

For \(\mathcal {M}\), \(\sigma _1, \sigma _2, w_1, w_2\), and \(\sigma ^w\) as above and an arbitrary measurable set \(\varPi \subseteq { IPaths ^{}}{}\) we have

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma ^w}(\varPi ) = w_1 \cdot \text {Pr}^{\mathcal {M}}_{\sigma _1}(\varPi ) + w_2 \cdot \text {Pr}^{\mathcal {M}}_{\sigma _2}(\varPi ). \end{aligned}$$

To show the Lemma, we fix a time-abstract path \(\hat{\pi }= s_0 \xrightarrow {\alpha _{0}} \dots \xrightarrow {\alpha _{n-1}} s_n\) of \(\mathcal {M}\) and show that the claim holds for the cylinder set \( Cyl (\varPi )\) of some measurable \(\varPi \subseteq \{ \pi \in { FPaths ^{}}{} \mid \text {ta}(\pi ) = \hat{\pi }\}\). The lemma also follows for arbitrary measurable sets as these can be described via unions and complements of such cylinder sets.

We define the scheduler \(\sigma _{\hat{\pi }}\) where for \(\pi \in { FPaths ^{}}{}\) and \(\alpha \in Act \) we set

$$\begin{aligned} \sigma _{\hat{\pi }}(\pi , \alpha ) = {\left\{ \begin{array}{ll} 1 &{} \text {if } \exists \, j< n:\, \text {ta}(\pi ) = pref (\hat{\pi }, j) \text { and } \alpha = \alpha _j \\ 0 &{} \text {if } \exists \, j < n:\, \text {ta}(\pi ) = pref (\hat{\pi }, j) \text { and } \alpha \ne \alpha _j \\ \frac{1}{| Act ( last (\pi ))|} &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

On a path whose time abstraction is a proper prefix of \(\hat{\pi }\), \(\sigma _{\hat{\pi }}\) will choose exactly the action given in \(\hat{\pi }\). In other cases, the choice is arbitrary (for simplicity, we picked a uniform distribution over available actions). We first show two auxiliary lemmas.

Lemma 9

For a scheduler \(\sigma \in \text {GM}^{}{}\) and \(\hat{\pi }\), \(\varPi \subseteq \{ \pi \in { FPaths ^{}}{} \mid \text {ta}(\pi ) = \hat{\pi }\}\), and \(\sigma _{\hat{\pi }}\) as above we have

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\varPi ) = \int _{\pi \in \varPi } \Big (\prod _{j=0}^{n-1} \sigma ( pref (\pi , j), \alpha _j)\Big ) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma _{\hat{\pi }}}(\pi ). \end{aligned}$$

Proof

The proof is by induction over the length n of \(\hat{\pi }= s_0 \xrightarrow {\alpha _{0}} \dots \xrightarrow {\alpha _{n-1}} s_n\). If \(n=0\) we have either \(\varPi = \{s_0\}\) or \(\varPi = \emptyset \) and thus either \( Cyl (\varPi ) = { IPaths ^{}}{}\) or \( Cyl (\varPi ) = \emptyset \). The lemma follows immediately in both cases. Now assume that Lemma 9 holds for \(\varPi ' = \{ pref (\pi , n-1) \mid \pi \in \varPi \}\), i.e., for paths of length \(n-1\). Notice that for \(\pi ' \in \varPi '\) we have \( last (\pi ') = s_{n-1}\).

\(\underline{Case\, s_{n-1} \in \text {PS}:}\)

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\varPi )&= \int _{\pi ' \in \varPi '} \sigma (\pi ',\alpha _{n-1}) \cdot \mathbf{P} (s, \alpha _{n-1}, s') \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\pi ') \\&= \int _{\pi ' \in \varPi '} \sigma (\pi ',\alpha _{n-1}) \cdot \mathbf{P} (s, \alpha _{n-1}, s') \cdot \Big (\prod _{j=0}^{n-2} \sigma ( pref (\pi ', j), \alpha _j)\Big ) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma _{\hat{\pi }}}(\pi ') \\&= \int _{\pi ' \in \varPi '} \Big (\prod _{j=0}^{n-1} \sigma ( pref (\pi ', j), \alpha _j)\Big ) \cdot \mathbf{P} (s, \alpha _{n-1}, s') \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma _{\hat{\pi }}}(\pi ') \\&= \int _{\pi ' \in \varPi '} \Big (\prod _{j=0}^{n-1} \sigma ( pref (\pi ', j), \alpha _j)\Big ) \cdot \underbrace{\sigma _{\hat{\pi }}(\pi ',\alpha _{n-1})}_{=1} \cdot \mathbf{P} (s, \alpha _{n-1}, s') \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma _{\hat{\pi }}}(\pi ') \\&= \int _{\pi \in \varPi } \Big (\prod _{j=0}^{n-1} \sigma ( pref (\pi , j), \alpha _j)\Big ) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma _{\hat{\pi }}}(\pi ). \end{aligned}$$

In the last step we use that for \(\pi ' \in \varPi '\) we have \(\pi = \pi ' \xrightarrow {\alpha _{n-1}} s_n \in \varPi \).

\(\underline{Case \, s_{n-1} \in \text {MS}:}\) For \(\pi ' \in \varPi '\) let \(T_{\pi '} = \{ (t, \alpha _{n-1}, s_n) \mid \pi ' \xrightarrow {t} s_n \in \varPi \}\). We note that \(\alpha _{n-1} = \bot \), \(\sigma '(\pi ',\bot ) = 1\), and that the probability of the transition step \(\text {Pr}^{ Steps }_{\sigma , \pi }(T_{\pi '})\) does not depend on \(\sigma \) since \(s_{n-1} \in \text {MS}\). We get:

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\varPi )&= \int _{\pi ' \in \varPi '} \text {Pr}^{ Steps }_{\sigma , \pi '}(T_{\pi '}) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\pi ') \\&= \int _{\pi ' \in \varPi '} \text {Pr}^{ Steps }_{\sigma , \pi '}(T_{\pi '}) \cdot \Big (\prod _{j=0}^{n-2} \sigma ( pref (\pi ', j), \alpha _j)\Big ) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma _{\hat{\pi }}}(\pi ')\\&= \int _{\pi ' \in \varPi '} \Big (\prod _{j=0}^{n-1} \sigma ( pref (\pi ', j), \alpha _j)\Big ) \cdot \text {Pr}^{ Steps }_{\sigma _{\hat{\pi }}, \pi }(T_{\pi '}) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma _{\hat{\pi }}}(\pi ')\\&= \int _{\pi \in \varPi } \Big (\prod _{j=0}^{n-1} \sigma ( pref (\pi , j), \alpha _j)\Big ) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma _{\hat{\pi }}}(\pi ). \end{aligned}$$

\(\square \)

Lemma 10

For \(\sigma _1, \sigma _2, \sigma ^w\) as above and \(\pi = s_{0} \xrightarrow {\kappa _{0}} \dots {\xrightarrow {\kappa _{n-1}} s_{n}}\in { FPaths ^{}}{}\) we have

$$\begin{aligned} \prod _{j=0}^{n-1} \sigma ^w( pref (\pi , j), \alpha (\kappa _{j})) = \sum _{i=1}^2 \left( w_i \cdot \prod _{j=0}^{n-1} \sigma _i( pref (\pi , j), \alpha (\kappa _{j})) \right) . \end{aligned}$$

Proof

$$\begin{aligned} \prod _{j=0}^{n-1} \sigma ^w( pref (\pi , j), \alpha (\kappa _{j}))&= \prod _{j=0}^{n-1} \frac{\sum _{i=1}^2 \left( w_i \cdot \prod _{k=0}^{j} \sigma _i( pref (\pi , k), \alpha (\kappa _{k})) \right) }{\sum _{i=1}^2 \left( w_i \cdot \prod _{k=0}^{j-1} \sigma _i( pref (\pi , k), \alpha (\kappa _{k})) \right) }\\&= \frac{\sum _{i=1}^2 \left( w_i \cdot \prod _{k=0}^{n-1} \sigma _i( pref (\pi , k), \alpha (\kappa _{k})) \right) }{\sum _{i=1}^2 \left( w_i \cdot \prod _{k=0}^{0-1} \sigma _i( pref (\pi , k), \alpha (\kappa _{k})) \right) }\\&= \sum _{i=1}^2 \left( w_i \cdot \prod _{j=0}^{n-1} \sigma _i( pref (\pi , j), \alpha (\kappa _{j})) \right) \end{aligned}$$

\(\square \)
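
As a concrete sanity check of this identity, the following Python sketch (a simplified illustration, not part of the proof: paths are reduced to their time-abstract action histories, and the action set, the two schedulers and the weight are made-up) computes \(\sigma ^w\) as defined above and compares both sides of Lemma 10 numerically.

```python
import random

ACTIONS = ["a", "b"]

def sigma1(history, action):
    # toy scheduler: always picks action "a"
    return 1.0 if action == "a" else 0.0

def sigma2(history, action):
    # toy scheduler: picks uniformly among both actions
    return 0.5

def path_weight(sigma, history):
    """Product over all proper prefixes of `history` of the probability
    that `sigma` chooses the action that was actually taken."""
    w = 1.0
    for j, act in enumerate(history):
        w *= sigma(history[:j], act)
    return w

def sigma_w(w1, s1, s2, history, action):
    """The weighted scheduler sigma^w from the proof of Proposition 1."""
    w2 = 1.0 - w1
    num = (w1 * s1(history, action) * path_weight(s1, history)
           + w2 * s2(history, action) * path_weight(s2, history))
    den = w1 * path_weight(s1, history) + w2 * path_weight(s2, history)
    return num / den

# numerical check of the identity of Lemma 10 on a random action history
w1 = 0.3
history = tuple(random.choice(ACTIONS) for _ in range(5))
lhs = path_weight(lambda h, a: sigma_w(w1, sigma1, sigma2, h, a), history)
rhs = w1 * path_weight(sigma1, history) + (1 - w1) * path_weight(sigma2, history)
print(abs(lhs - rhs) < 1e-12)  # True
```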

Proof

(of Lemma 8) Using the auxiliary Lemmas 9 and 10, we can prove Lemma 8 as follows:

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma ^w}(\varPi ) \overset{\text {Lem.}\,9}{=}&\int _{\pi \in \varPi } \Big (\prod _{j=0}^{n-1} \sigma ^w( pref (\pi , j), \alpha _j)\Big ) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma _{\hat{\pi }}}(\pi )\\ \overset{\text {Lem.}\,10}{=}&\int _{\pi \in \varPi } \sum _{i=1}^2 \left( w_i \cdot \prod _{j=0}^{n-1} \sigma _i( pref (\pi , j), \alpha _j) \right) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma _{\hat{\pi }}}(\pi )\\ {=}&\sum _{i=1}^2 \left( w_i \cdot \int _{\pi \in \varPi } \prod _{j=0}^{n-1} \sigma _i( pref (\pi , j), \alpha _j) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma _{\hat{\pi }}}(\pi )\right) \\ \overset{\text {Lem.}\,9}{=}&\sum _{i=1}^2 \left( w_i \cdot \text {Pr}^{\mathcal {M}}_{\sigma _i}(\varPi ) \right) . \end{aligned}$$

\(\square \)

B Proofs for expected reward

1.1 B.1 Proof of Lemma 2

Lemma 2

For MA \(\mathcal {M}= (S, Act , \rightarrow , {s_{0}}, (\rho _{1}, \!\dots \!, \rho _{\ell }) )\) with \(G\subseteq S\), \(\sigma \in \text {GM}^{}\), and reward function \(\rho _{}\) it holds that

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {ER}^{\mathcal {M}}_{\sigma }(\rho _{}, {\varPi _{G}^{n}}) = \mathrm {ER}^{\mathcal {M}}_{\sigma }(\rho _{}, G_{}). \end{aligned}$$

Furthermore, any reward function \(\rho _{}^{\mathcal {D}}\) for \({\mathcal {M}_\mathcal {D}}\) satisfies

$$\begin{aligned} \lim _{n\rightarrow \infty } \mathrm {ER}^{{\mathcal {M}_\mathcal {D}}}_{{\text {ta}(\sigma )}}(\rho _{}^{\mathcal {D}}, {\varPi _{G}^{n}}) = \mathrm {ER}^{{\mathcal {M}_\mathcal {D}}}_{{\text {ta}(\sigma )}}(\rho _{}^{\mathcal {D}}, G_{}). \end{aligned}$$

Proof

We show the first claim. The second claim follows analogously. For each \(n \ge 0\), consider the function \(f_n :{ IPaths ^{\mathcal {M}}} \rightarrow \mathbb {R}_{\ge 0}\) given by

for every path \(\pi = s_{0} \xrightarrow {\kappa _{0}} s_{1} \xrightarrow {\kappa _{1}} \dots \in { IPaths ^{\mathcal {M}}}\). Intuitively, \(f_n(\pi )\) is the reward collected on \(\pi \) within the first n steps and only up to the first visit of \(G\). This allows us to express the expected reward collected along the paths of \({\varPi _{G}^{n}}\) as

It holds that \(\lim _{n\rightarrow \infty } f_n(\pi )\) coincides with the reward of \(\pi \) up to \(G\), which is a direct consequence of the definition of the reward of \(\pi \) up to \(G\) (cf. Sect. 2.2.3). Furthermore, note that the sequence of functions \(f_0, f_1, \dots \) is non-decreasing, i.e., we have \(f_n(\pi ) \le f_{n+1}(\pi )\) for all \(n\ge 0\) and \(\pi \in { IPaths ^{\mathcal {M}}}\). Applying the monotone convergence theorem [1] to this sequence yields the first claim.

\(\square \)

The next step is to show that the expected reward collected along the paths of \({\varPi _{G}^{n}}\) coincides for \(\mathcal {M}\) under \(\sigma \) and \({\mathcal {M}_\mathcal {D}}\) under \({\text {ta}(\sigma )}\).

1.2 B.2 Proof of Lemma 3

Lemma 3

Let \(\mathcal {M}= (S, Act , \rightarrow , {s_{0}}, (\rho _{1}, \!\dots \!, \rho _{\ell }) )\) be an MA with \(\sigma \in \text {GM}^{}\). Further, let \(\rho _{}\) be some reward function of \(\mathcal {M}\) and let \(\rho _{}^{\mathcal {D}}\) be its counterpart for \({\mathcal {M}_\mathcal {D}}\). For all \(G\subseteq S\) and \(n\ge 0\) it holds that

$$\begin{aligned} \mathrm {ER}^{\mathcal {M}}_{\sigma }(\rho _{}, {\varPi _{G}^{n}}) = \mathrm {ER}^{{\mathcal {M}_\mathcal {D}}}_{{\text {ta}(\sigma )}}(\rho _{}^{\mathcal {D}}, {\varPi _{G}^{n}}) . \end{aligned}$$

We detail the terms (1) and (2) from the proof of Lemma 3 separately.

Term (1): Let \({\varLambda ^{\le n}_{G}}= \{\hat{\pi }\in {\varPi _{G}^{n+1}} \mid |\hat{\pi }| \le n \}\) be the paths in \({\varPi _{G}^{n+1}}\) of length at most n. We have \({\varLambda ^{\le n}_{G}}\subseteq {\varPi _{G}^{n}}\) and every path in \({\varLambda ^{\le n}_{G}}\) visits a state in \(G\). Correspondingly, \({\varLambda ^{= n}_{\lnot G}}= {\varPi _{G}^{n}}{\setminus } {\varLambda ^{\le n}_{G}}\) is the set of time-abstract paths of length n that do not visit a state in \(G\). Hence, the paths in \({\varPi _{G}^{n+1}}\) with length \(n+1\) have a prefix in \({\varLambda ^{= n}_{\lnot G}}\).

The set \({\varPi _{G}^{n+1}}\) is partitioned such that

The reward obtained within the first n steps is independent of the \((n+1)\)-th transition. To show this formally, we fix a path \(\hat{\pi }' \in {\varLambda ^{= n}_{\lnot G}}\) with \( last (\hat{\pi }') = s\) and derive

(4)

With the above-mentioned partition of the set \({\varPi _{G}^{n+1}}\), it follows that the expected reward obtained within the first n steps is given by

(5)

Term (2): For the expected reward obtained in step \(n+1\), consider a path \(\hat{\pi }= \hat{\pi }' \xrightarrow {\alpha _{}} s' \in {\varPi _{G}^{n+1}}\) such that \( |\hat{\pi }'| = n\) and \( last (\hat{\pi }') = s\).

  • If \(s\in \text {MS}\), we have \(\hat{\pi }= \hat{\pi }' \xrightarrow {\bot } s'\). It follows that

    $$\begin{aligned}&\int _{\pi = \pi '\xrightarrow {t} s' \in \langle {\hat{\pi }} \rangle } \rho _{}(s) \cdot t+ \rho _{}(s, \bot ) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\pi ) \nonumber \\&\quad = \int _{\begin{array}{c} \pi = \pi '\xrightarrow {t} s' \in \langle {\hat{\pi }} \rangle \end{array}} \rho _{}(s) \cdot t\,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\pi ) + \int _{\pi \in \langle {\hat{\pi }} \rangle } \rho _{}(s, \bot ) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\pi ) \nonumber \\&\quad = \rho _{}(s) \cdot \int _{\pi ' \in \langle {\hat{\pi }'} \rangle } \int _{0}^{\infty } t\cdot {\text {E}(s)} \cdot e^{-{\text {E}(s)} t} \cdot \mathbf{P} (s, \bot , s') \,\text {d}t\,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\pi ') \nonumber \\&\qquad + \rho _{}(s, \bot ) \cdot \text {Pr}^{\mathcal {M}}_{\sigma }(\langle {\hat{\pi }} \rangle ) \nonumber \\&\quad = \frac{\rho _{}(s)}{{\text {E}(s)}} \cdot \text {Pr}^{\mathcal {M}}_{\sigma }(\langle {\hat{\pi }} \rangle ) + \rho _{}(s, \bot ) \cdot \text {Pr}^{\mathcal {M}}_{\sigma }(\langle {\hat{\pi }} \rangle ) \nonumber \\&\quad = \rho _{}^{\mathcal {D}}(s, \bot ) \cdot \text {Pr}^{\mathcal {M}}_{\sigma }(\langle {\hat{\pi }} \rangle ) \overset{Lem.\,1}{=} \rho _{}^{\mathcal {D}}(s, \bot ) \cdot \text {Pr}^{{\mathcal {M}_\mathcal {D}}}_{{\text {ta}(\sigma )}}(\hat{\pi }). \end{aligned}$$
    (6)
  • If \(s\in \text {PS}\), then \( \displaystyle \int _{\pi = \pi '\xrightarrow {\alpha } s' \in \langle {\hat{\pi }} \rangle } \rho _{}(s, \alpha ) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\pi ) = \rho _{}^{\mathcal {D}}(s, \alpha ) \cdot \text {Pr}^{{\mathcal {M}_\mathcal {D}}}_{{\text {ta}(\sigma )}}(\hat{\pi })\) follows similarly.
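
The reward conversion used above can be written out as a small sketch (assumptions made only for this sketch: the MA is given by plain Python dictionaries, and an entry, possibly \(0\), exists for every enabled state-action pair): for a Markovian state, the expected sojourn time \(1/{\text {E}(s)}\) turns the reward rate \(\rho _{}(s)\) into a one-shot reward, while probabilistic states keep their transition rewards (cf. Eq. (6)).

```python
def untimed_rewards(rho_state, rho_action, E, MS):
    """Reward function rho^D of the untimed model M_D, following Eq. (6).

    rho_state  : dict mapping state s to its reward rate rho(s)
    rho_action : dict mapping (s, action) to the transition reward rho(s, action);
                 assumed to contain an entry (possibly 0) for every enabled pair
    E          : dict mapping each Markovian state s to its exit rate E(s)
    MS         : set of Markovian states
    """
    rho_D = {}
    for (s, act), r in rho_action.items():
        if s in MS:
            # expected sojourn time 1/E(s) converts the reward rate into a
            # one-shot reward collected when the transition is taken
            rho_D[(s, act)] = rho_state.get(s, 0.0) / E[s] + r
        else:
            # probabilistic states are left instantaneously, so only the
            # transition reward remains
            rho_D[(s, act)] = r
    return rho_D

print(untimed_rewards({"s": 4.0}, {("s", None): 1.0}, {"s": 2.0}, {"s"}))
# {('s', None): 3.0}
```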

C Proofs for timed reachability

1.1 C.1 Proof of Lemma 4

Lemma 4

Let \(\mathcal {M}\) be an MA with scheduler \(\sigma \in \text {GM}^{}\), digitization \({\mathcal {M}_{\delta }}\), and digital path \(\bar{\pi }\in { FPaths ^{{\mathcal {M}_{\delta }}}}\). It holds that

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }( [{\bar{\pi }}]_{\textit{cyl}} ) = \text {Pr}^{{\mathcal {M}_{\delta }}}_{{\text {di}(\sigma )}}(\bar{\pi }). \end{aligned}$$

Proof

The proof is by induction over the length n of \(\bar{\pi }\). Let \(\mathcal {M}= (S, Act , \rightarrow , {s_{0}}, (\rho _{1}, \!\dots \!, \rho _{\ell }) )\) and \({\mathcal {M}_{\delta }}= (S, Act , \mathbf{P} _{\delta }, {s_{0}}, (\rho _{1}^\delta , \dots , \rho _{\ell }^\delta ))\). If \(n=0\), then \(\bar{\pi }= {s_{0}}\) and \([{\bar{\pi }}]_{\textit{cyl}} = { IPaths ^{\mathcal {M}}}\). Hence, \(\text {Pr}^{\mathcal {M}}_{\sigma }([{{s_{0}}}]_{\textit{cyl}}) = 1 = \text {Pr}^{{\mathcal {M}_{\delta }}}_{{\text {di}(\sigma )}}({s_{0}})\). In the induction step it is assumed that the lemma holds for a fixed path \(\bar{\pi }\in { FPaths ^{{\mathcal {M}_{\delta }}}}\) with \(|\bar{\pi }| = n\) and \( last (\bar{\pi }) = s\). Consider a path \(\bar{\pi }\xrightarrow {\alpha _{}} s' \in { FPaths ^{{\mathcal {M}_{\delta }}}}\).

If \(\text {Pr}^{\mathcal {M}}_{\sigma } ([{\bar{\pi }}]_{\textit{cyl}}) = \text {Pr}^{{\mathcal {M}_{\delta }}}_{{\text {di}(\sigma )}}(\bar{\pi }) = 0\), then \(\text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }\xrightarrow {\alpha _{}} s'}]_{\textit{cyl}}) = \text {Pr}^{{\mathcal {M}_{\delta }}}_{{\text {di}(\sigma )}}(\bar{\pi }\xrightarrow {\alpha _{}} s') = 0\) because \([{\bar{\pi }\xrightarrow {\alpha _{}} s'}]_{\textit{cyl}} \subseteq [{\bar{\pi }}]_{\textit{cyl}}\) and \( Cyl (\{\bar{\pi }\xrightarrow {\alpha _{}} s'\}) \subseteq Cyl (\{\bar{\pi }\})\).

Now assume \(\text {Pr}^{\mathcal {M}}_{\sigma } ([{\bar{\pi }}]_{\textit{cyl}}) > 0\). We distinguish the following cases.

\(\underline{Case s\in \text {PS}:}\) It follows that \([{\bar{\pi }\xrightarrow {\alpha _{}} s'}]_{\textit{cyl}} = Cyl ([{\bar{\pi }\xrightarrow {\alpha _{}} s'}])\) since \(\bar{\pi }\xrightarrow {\alpha _{}} s'\) ends with a probabilistic transition. Hence,

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }\xrightarrow {\alpha _{}} s'}]_{\textit{cyl}})&= \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }\xrightarrow {\alpha _{}} s'}])\\&= \int _{\pi \in [{\bar{\pi }}]} \sigma (\pi ,\alpha ) \cdot \mathbf{P} (s, \alpha , s') \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\pi ) \\&= \int _{\pi \in [{\bar{\pi }}]} \sigma (\pi ,\alpha ) \cdot \mathbf{P} (s, \alpha , s') \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\{\pi \} \cap [{\bar{\pi }}]) \\&= \int _{\pi \in [{\bar{\pi }}]} \sigma (\pi ,\alpha ) \cdot \mathbf{P} (s, \alpha , s') \,\text {d}\big [\text {Pr}^{\mathcal {M}}_{\sigma }(\pi \mid [{\bar{\pi }}]) \cdot \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }}])\big ] \\&= \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }}]) \cdot \mathbf{P} (s, \alpha , s') \cdot \int _{\pi \in [{\bar{\pi }}]} \sigma (\pi ,\alpha ) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\pi \mid [{\bar{\pi }}]) \\&= \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }}]) \cdot \mathbf{P} (s, \alpha , s') \cdot {\text {di}(\sigma )}(\bar{\pi },\alpha ) \\&\overset{{\textit{IH}}}{=}\text {Pr}^{{\mathcal {M}_{\delta }}}_{{\text {di}(\sigma )}}(\bar{\pi }) \cdot \mathbf{P} (s, \alpha , s') \cdot {\text {di}(\sigma )}(\bar{\pi },\alpha ) \\&= \text {Pr}^{{\mathcal {M}_{\delta }}}_{{\text {di}(\sigma )}}(\bar{\pi }\xrightarrow {\alpha _{}} s'). \end{aligned}$$

\(\underline{Case s\in \text {MS}:}\) As \(s\in \text {MS}\) we have \(\alpha = \bot \) and it follows

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }\xrightarrow {\bot } s'}]_{\textit{cyl}})&= \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }}]_{\textit{cyl}} \cap [{\bar{\pi }\xrightarrow {\bot } s'}]_{\textit{cyl}}) \nonumber \\&= \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }}]_{\textit{cyl}}) \cdot \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }\xrightarrow {\bot } s'}]_{\textit{cyl}} \mid [{\bar{\pi }}]_{\textit{cyl}} ). \end{aligned}$$
(7)

Assume that a path \(\pi \in [{\bar{\pi }}]_{\textit{cyl}}\) has been observed, i.e., \( pref ({\text {di}(\pi )}, m) = \bar{\pi }\) holds for some \(m\ge 0\). The term \(\text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }\xrightarrow {\bot } s'}]_{\textit{cyl}} \mid [{\bar{\pi }}]_{\textit{cyl}} )\) coincides with the probability that also \( pref ({\text {di}(\pi )}, m+1) = \bar{\pi }\xrightarrow {\bot } s'\) holds.

We have either

  • \(s\ne s'\) which means that the transition from \(s\) to \(s'\) has to be taken during a period of \(\delta \) time units or

  • \(s= s'\) where we additionally have to consider the case that no transition is taken at \(s\) for \(\delta \) time units.

It follows that

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }\xrightarrow {\bot } s'}]_{\textit{cyl}} \mid [{\bar{\pi }}]_{\textit{cyl}} )&= {\left\{ \begin{array}{ll} \mathbf{P} (s, \bot , s') (1-e^{-{\text {E}(s)} \delta }) &{} \text {if } s\ne s' \\ \mathbf{P} (s, \bot , s') (1-e^{-{\text {E}(s)} \delta }) + e^{-{\text {E}(s)} \delta } &{} \text {if } s= s' \end{array}\right. }\nonumber \\&= \mathbf{P} _{\delta }(s, \bot , s'). \end{aligned}$$
(8)

We conclude that

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }\xrightarrow {\bot } s'}]_{\textit{cyl}}) \ \&\overset{(7),\, (8)}{=} \ \ \text {Pr}^{\mathcal {M}}_{\sigma }([{\bar{\pi }}]_{\textit{cyl}}) \cdot \mathbf{P} _{\delta }(s, \bot , s') \\&\overset{{\textit{IH}}}{=}\ \ \text {Pr}^{{\mathcal {M}_{\delta }}}_{{\text {di}(\sigma )}}(\bar{\pi }) \cdot \mathbf{P} _{\delta }(s, \bot , s') = \text {Pr}^{{\mathcal {M}_{\delta }}}_{{\text {di}(\sigma )}}(\bar{\pi }\xrightarrow {\bot } s'). \end{aligned}$$

\(\square \)
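
For illustration, the following Python sketch builds the digitized transition probabilities \(\mathbf{P} _{\delta }\) used above (cf. Eq. (8)); the dictionary encoding of the MA and the representation of the action \(\bot \) by None are assumptions made only for this sketch, and probabilistic states are simply copied, as in the case \(s\in \text {PS}\) of the proof.

```python
import math

BOT = None  # encoding of the action "bot" of Markovian transitions

def digitize(P, E, MS, delta):
    """Transition probabilities of the digitization M_delta, following Eq. (8).

    P     : dict mapping (s, action, s') -> probability (action is BOT for
            Markovian transitions)
    E     : dict mapping each Markovian state s to its exit rate E(s)
    MS    : set of Markovian states
    delta : digitization constant
    """
    P_delta = {}
    for (s, act, s_next), prob in P.items():
        if s in MS:
            # the Markovian transition is taken within delta time units
            P_delta[(s, act, s_next)] = prob * (1.0 - math.exp(-E[s] * delta))
        else:
            # probabilistic states are not affected by digitization
            P_delta[(s, act, s_next)] = prob
    for s in MS:
        # probability of staying at s for delta time units (digitized self-loop)
        key = (s, BOT, s)
        P_delta[key] = P_delta.get(key, 0.0) + math.exp(-E[s] * delta)
    return P_delta

# toy example: Markovian state "s" with rate 2 branching to "u" and "v"
P = {("s", BOT, "u"): 0.25, ("s", BOT, "v"): 0.75}
print(digitize(P, E={"s": 2.0}, MS={"s"}, delta=0.1))
# the three resulting probabilities sum up to 1
```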

1.2 C.2 Proof of Proposition 4

Proposition 4

Let \(\mathcal {M}\) be an MA with \(G\subseteq S\), \(\sigma \in \text {GM}^{}\), and digitization \({\mathcal {M}_{\delta }}\). Further, let \(J\subseteq \mathbb {N}\) be a set of consecutive natural numbers. It holds that

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }([{\lozenge ^{J}_{\text {ds}} G}]) = \text {Pr}^{{\mathcal {M}_{\delta }}}_{{\text {di}(\sigma )}}(\lozenge ^{J}_{\text {ds}} G). \end{aligned}$$

Proof

Consider the set \(\varPi _G^J\subseteq { FPaths ^{{\mathcal {M}_{\delta }}}}\) of paths that (i) visit \(G\) after a number of digitization steps that lies in \(J\) and (ii) do not have a proper prefix that satisfies (i). Every path in \(\lozenge ^{J}_{\text {ds}} G\) has a unique prefix in \(\varPi _G^J\), yielding

For the corresponding paths of \(\mathcal {M}\) we obtain

The proposition follows with Lemma 4 since

$$\begin{aligned} \text {Pr}^{{\mathcal {M}_{\delta }}}_{{\text {di}(\sigma )}}(\lozenge ^{J}_{\text {ds}} G) = \sum _{\bar{\pi }\in \varPi _G^J} \text {Pr}^{{\mathcal {M}_{\delta }}}_{{\text {di}(\sigma )}}(\bar{\pi }) \overset{Lem.\,4}{=} \sum _{\bar{\pi }\in \varPi _G^J} \text {Pr}^{\mathcal {M}}_{\sigma }( [{\bar{\pi }}]_{\textit{cyl}}) = \text {Pr}^{\mathcal {M}}_{\sigma }([{\lozenge ^{J}_{\text {ds}} G}]). \end{aligned}$$

\(\square \)

1.3 C.3 Proofs of Lemmas 6 and 7

Lemma 7

Let \(\mathcal {M}\) be an MA with \(\sigma \in \text {GM}^{}\) and maximum rate \(\lambda = \max \{{\text {E}(s)} \mid s\in \text {MS}\}\). For each \(\delta \in \mathbb {R}_{> 0}\), \(k\in \mathbb {N}\), and \(t\in \mathbb {R}_{\ge 0}\) it holds that

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{k\delta + t}]^{{\le }{ k}}) \ge \text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{k\delta }]^{{\le }{ k}}) \cdot e^{-\lambda t}\ . \end{aligned}$$

Proof

First, we show that the set \(\#^{}[{k\delta + t}]^{{\le }{ k}}\) corresponds to the paths of \(\#^{}[{k\delta }]^{{\le }{ k}}\) with the additional requirement that no transition is taken between the time points \(k\delta \) and \(k\delta + t\), i.e.,

$$\begin{aligned} \#^{}[{k\delta + t}]^{{\le }{ k}} = \{\pi \in \#^{}[{k\delta }]^{{\le }{ k}} \mid \text {there is no prefix } \pi ' \text { of } \pi \text { with } k\delta < T (\pi ') \le k\delta + t\}. \end{aligned}$$
  • \(\subseteq \)”: If \(\pi \in \#^{}[{k\delta + t}]^{{\le }{ k}}\), then \(\pi \in \#^{}[{k\delta }]^{{\le }{ k}}\) follows immediately. Furthermore, assume towards a contradiction that there is a prefix \(\pi '\) of \(\pi \) with \(k\delta < T (\pi ') \le k\delta + t\). Then, \(k< \frac{ T (\pi ')}{\delta } \le {|\pi '|_{\text {ds}}} \) (cf. Lemma 5). As \( T (\pi ') \le k\delta + t\), this means that \({| pref _{ T }(\pi , k\delta + t)|_{\text {ds}}} \ge {|\pi '|_{\text {ds}}} > k\) which contradicts \(\pi \in \#^{}[{k\delta + t}]^{{\le }{ k}}\).

  • \(\supseteq \)”: For \(\pi \in \#^{}[{k\delta }]^{{\le }{ k}}\) with no prefix \(\pi '\) such that \(k\delta < T (\pi ') \le k\delta + t\), it holds that \( pref _{ T }(\pi , k\delta + t) = pref _{ T }(\pi , k\delta )\). Hence, \({| pref _{ T }(\pi , k\delta + t)|_{\text {ds}}} = {| pref _{ T }(\pi , k\delta )|_{\text {ds}}} \le k\) and it follows that \(\pi \in \#^{}[{k\delta + t}]^{{\le }{ k}}\).

The probability for no transition to be taken between \(k\delta \) and \(k\delta + t\) only depends on the current state at time point \(k\delta \). More precisely, for some state \(s\in \text {MS}\) assume the set of paths \(\{\pi \in \#^{}[{k\delta }]^{{\le }{ k}} \mid last ( pref _{ T }(\pi , k\delta )) = s\}\). The probability that a path in this set takes no transition between time points \(k\delta \) and \(k\delta + t\) is given by \(e^{-{\text {E}(s)} t}\). With \(\lambda \ge {\text {E}(s)}\) for all \(s\in \text {MS}\) it follows that

$$\begin{aligned}&\text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{k\delta + t}]^{{\le }{ k}}) \\&\quad = \text {Pr}^{\mathcal {M}}_{\sigma }( \{\pi \in \#^{}[{k\delta }]^{{\le }{ k}} \mid \text {there is no prefix } \pi ' \text { of } \pi \text { with } k\delta < T (\pi ') \le k\delta + t\} ) \\&\quad = \sum _{s\in \text {MS}} \text {Pr}^{\mathcal {M}}_{\sigma }( \{\pi \in \#^{}[{k\delta }]^{{\le }{ k}} \mid last ( pref _{ T }(\pi , k\delta )) = s\} ) \cdot e^{-{\text {E}(s)} t}\\&\quad \ge \sum _{s\in \text {MS}} \text {Pr}^{\mathcal {M}}_{\sigma }( \{\pi \in \#^{}[{k\delta }]^{{\le }{ k}} \mid last ( pref _{ T }(\pi , k\delta )) = s\} ) \cdot e^{-\lambda t}\\&\quad = \text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{k\delta }]^{{\le }{ k}}) \cdot e^{-\lambda t}\ . \end{aligned}$$

\(\square \)

Lemma 6

Let \(\mathcal {M}\) be an MA with \(\sigma \in \text {GM}^{}\) and maximum rate \(\lambda = \max \{{\text {E}(s)} \mid s\in \text {MS}\}\). Further, let \(\delta \in \mathbb {R}_{> 0}\) and \(k\in \mathbb {N}\). It holds that

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{k\delta }]^{{>}{k}}) \le 1- (1+ \lambda \delta )^{k} \cdot e^{- \lambda \delta k} \end{aligned}$$

Proof

Let \(\mathcal {M}= (S, Act , \rightarrow , {s_{0}}, (\rho _{1}, \!\dots \!, \rho _{\ell }) )\). By induction over \(k\) we show that

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{k\delta }]^{{\le }{ k}}) \ge (1+ \lambda \delta )^{k} \cdot e^{- \lambda \delta k}. \end{aligned}$$

The claim follows as \(\#^{}[{k\delta }]^{{>}{k}} = { IPaths ^{\mathcal {M}}} {\setminus } \#^{}[{k\delta }]^{{\le }{k}}\).

For \(k=0\), we have \(\pi \in \#^{}[{0 \cdot \delta }]^{{\le }{ 0}}\) iff \(\pi \) takes no Markovian transition at time point zero. As this happens with probability one, it follows that

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{0 \cdot \delta }]^{{\le }{ 0}}) = 1 = (1+ \lambda \delta )^0 \cdot e^{-\lambda \delta \cdot 0 }\ . \end{aligned}$$

We assume in the induction step that the proposition holds for some fixed \(k\). We distinguish between two cases for the initial state \({s_{0}}\) of \(\mathcal {M}\).

\(\underline{Case \,{s_{0}}\in \text {MS}}:\) We partition the set \(\#^{}[{k\delta + \delta }]^{{\le }{ k+ 1}}\) with

$$\begin{aligned} \varLambda ^{\ge \delta }&= \{ {s_{0}}\xrightarrow {t} s_{1} \xrightarrow {\kappa _{1}} \dots \in \#^{}[{k\delta + \delta }]^{{\le }{ k+ 1}} \mid t\ge \delta \} \text { and } \\ \varLambda ^{< \delta }&= \{ {s_{0}}\xrightarrow {t} s_{1} \xrightarrow {\kappa _{1}} \dots \in \#^{}[{k\delta + \delta }]^{{\le }{ k+ 1}} \mid t< \delta \}. \end{aligned}$$

Hence, \(\varLambda ^{\ge \delta }\) contains the paths where we wait at least \(\delta \) time units at \({s_{0}}\) and \(\varLambda ^{< \delta }\) contains the paths where the first transition is taken within \(t< \delta \) time units. It follows that \(\text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{k\delta + \delta }]^{{\le }{ k+ 1}} ) = \text {Pr}^{\mathcal {M}}_{\sigma }(\varLambda ^{\ge \delta }) + \text {Pr}^{\mathcal {M}}_{\sigma }(\varLambda ^{< \delta })\). We consider the probabilities for \(\varLambda ^{\ge \delta }\) and \(\varLambda ^{< \delta }\) separately.

  • \(\text {Pr}^{\mathcal {M}}_{\sigma }(\varLambda ^{\ge \delta })\): For a path \({s_{0}}\xrightarrow {t+\delta } s_{1} \xrightarrow {\kappa _{1}} \dots \in \varLambda ^{\ge \delta }\), after the first \(\delta \) time units there are at most \(k\) digitization steps within the next \(k\delta \) time units, i.e.,

    $$\begin{aligned} {s_{0}}\xrightarrow {t+ \delta } s_{1} \xrightarrow {\kappa _{1}} \dots \in \varLambda ^{\ge \delta }\iff {s_{0}}\xrightarrow {t} s_{1} \xrightarrow {\kappa _{1}} \dots \in \#^{}[{k\delta }]^{{\le }{ k}}. \end{aligned}$$

    The probability for \(\varLambda ^{\ge \delta }\) can therefore be derived from the probability to wait at \({s_{0}}\) for at least \(\delta \) time units and the probability for \(\#^{}[{k\delta }]^{{\le }{ k}}\). In order to apply this, we need to modify the considered scheduler as it might depend on the sojourn time in \({s_{0}}\). Let \(\sigma _\delta \) be the scheduler for \(\mathcal {M}\) that mimics \(\sigma \) on paths where the first transition is delayed by \(\delta \), i.e., \(\sigma _\delta \) satisfies

    $$\begin{aligned} \sigma _\delta ({s_{0}}\xrightarrow {t} \dots {\xrightarrow {\kappa _{n-1}} s_{n}},\alpha ) = \sigma ({s_{0}}\xrightarrow {t+ \delta } \dots {\xrightarrow {\kappa _{n-1}} s_{n}},\alpha ). \end{aligned}$$

    for all \({s_{0}}\xrightarrow {t} \dots {\xrightarrow {\kappa _{n-1}} s_{n}} \in { FPaths ^{\mathcal {M}}}\) and \(\alpha \in Act \). It holds that

    $$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }( \varLambda ^{\ge \delta })&= e^{-{\text {E}({s_{0}})} \delta } \cdot \text {Pr}^{\mathcal {M}}_{\sigma _\delta }( \#^{}[{k\delta }]^{{\le }{ k}}) \nonumber \\&\overset{{\textit{IH}}}{\ge }e^{-{\text {E}({s_{0}})} \delta } \cdot (1+ \lambda \delta )^{k} \cdot e^{- \lambda \delta k} \nonumber \\&= e^{-{\text {E}({s_{0}})} \delta } \cdot (1+ \lambda \delta )^{k} \cdot e^{- \lambda \delta k} \cdot e^{-\lambda \delta } \cdot e^{\lambda \delta } \nonumber \\&= (1+ \lambda \delta )^{k} \cdot e^{-\lambda \delta (k+1)} \cdot e^{(\lambda -{\text {E}({s_{0}})}) \delta }\ . \end{aligned}$$
    (9)
  • \(\text {Pr}^{\mathcal {M}}_{\sigma }(\varLambda ^{< \delta })\): For a path \({s_{0}}\xrightarrow {t} s_{1} \xrightarrow {\kappa _{1}} \dots \in \varLambda ^{< \delta }\), the first digitization step happens at less than \(\delta \) time units, i.e., \(0 \le t<\delta \). It follows that there are at most \(k\) digitization steps in the remaining \( k\delta + \delta - t\) time units, i.e.,

    $$\begin{aligned} {s_{0}}\xrightarrow {t} s_{1} \xrightarrow {\kappa _{1}} s_{2} \xrightarrow {\kappa _{2}} \dots \in \varLambda ^{< \delta }\iff s_{1} \xrightarrow {\kappa _{1}} s_{2} \xrightarrow {\kappa _{2}} \dots \in \#^{s_1}[{ k\delta + \delta - t}]^{{\le }{ k}}, \end{aligned}$$

    where \(\#^{s_1}[{ k\delta + \delta - t}]^{{\le }{ k}}\) refers to the paths \(\#^{}[{ k\delta + \delta - t}]^{{\le }{ k}}\) of \({\mathcal {M}^{s_1}} = (S, Act ,\rightarrow ,s_1, (\rho _{1}, \dots , \rho _{\ell }))\), the MA obtained from \(\mathcal {M}\) by changing the initial state to \(s_1\). Hence, the probability for \(\varLambda ^{< \delta }\) can be derived from the probability to take a transition from \({s_{0}}\) to some state \(s\) within \( t< \delta \) time units and the probability for \(\#^{s}[{ k\delta + \delta - t}]^{{\le }{ k}} \). Again, we need to adapt the considered scheduler. Let \(\pi \in { FPaths ^{\mathcal {M}}}\) with \( last (\pi ) = s\). The scheduler \({\sigma [{\pi }]}\) for \({\mathcal {M}^{s}}\) mimics the scheduler \(\sigma \) for \(\mathcal {M}\), where \(\pi \) is prepended to the given path, i.e., we set

    $$\begin{aligned} {\sigma [{\pi }]}(s\xrightarrow {\kappa _{j}} \dots {\xrightarrow {\kappa _{n-1}} s_{n}},\alpha ) = \sigma (\pi \xrightarrow {\kappa _{j}} \dots {\xrightarrow {\kappa _{n-1}} s_{n}},\alpha ) \end{aligned}$$

    for all \(s\xrightarrow {\kappa _{j}} \dots {\xrightarrow {\kappa _{n-1}} s_{n}} \in { FPaths ^{{\mathcal {M}^{s}}}}\) and \(\alpha \in Act \). With Lemma 7 it follows that

    $$\begin{aligned}&\text {Pr}^{\mathcal {M}}_{\sigma }( \varLambda ^{< \delta })\nonumber \\&\quad = \int _{0}^{\delta } {\text {E}({s_{0}})} \cdot e^{-{\text {E}({s_{0}})} t} \cdot \left( \sum _{s\in S} \mathbf{P} ({s_{0}}, \bot , s) \cdot \text {Pr}^{{\mathcal {M}^{s}}}_{{\sigma [{\pi }]}}( \#^{s}[{ k\delta + \delta - t}]^{{\le }{ k}} ) \right) \,\text {d}t\nonumber \\&\quad \ge \int _{0}^{\delta } {\text {E}({s_{0}})} \cdot e^{-{\text {E}({s_{0}})} t} \cdot \left( \sum _{s\in S} \mathbf{P} ({s_{0}}, \bot , s) \cdot \text {Pr}^{{\mathcal {M}^{s}}}_{{\sigma [{\pi }]}}( \#^{s}[{ k\delta }]^{{\le }{ k}} ) \cdot e^{-\lambda (\delta - t)} \right) \,\text {d}t\nonumber \\&\quad \overset{{\textit{IH}}}{\ge }\int _{0}^{\delta } {\text {E}({s_{0}})} \cdot e^{-{\text {E}({s_{0}})} t} \cdot \left( \sum _{s\in S} \mathbf{P} ({s_{0}}, \bot , s) \cdot (1+\lambda \delta )^{k} \cdot e^{- \lambda \delta k} \cdot e^{-\lambda (\delta - t)} \right) \,\text {d}t\nonumber \\&\quad = (1+\lambda \delta )^{k} \cdot e^{- \lambda \delta k} \cdot {\text {E}({s_{0}})} \cdot \int _{0}^{\delta } e^{-{\text {E}({s_{0}})} t} \cdot e^{-\lambda (\delta - t)} \cdot \left( \sum _{s\in S} \mathbf{P} ({s_{0}}, \bot , s) \right) \,\text {d}t\nonumber \\&\quad = (1+\lambda \delta )^{k} \cdot e^{- \lambda \delta k} \cdot {\text {E}({s_{0}})} \cdot \int _{0}^{\delta } e^{-{\text {E}({s_{0}})} t} \cdot e^{-\lambda \delta } \cdot e^{\lambda t} \,\text {d}t\nonumber \\&\quad = (1+\lambda \delta )^{k} \cdot e^{-\lambda \delta (k+1) } \cdot {\text {E}({s_{0}})} \cdot \int _{0}^{\delta } e^{(\lambda -{\text {E}({s_{0}})}) t} \,\text {d}t\ . \end{aligned}$$
    (10)

Combining the results for \(\varLambda ^{\ge \delta }\) and \(\varLambda ^{< \delta }\) (i.e., Equations 9 and 10), we obtain

$$\begin{aligned}&\text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{k\delta + \delta }]^{{\le }{ k+ 1}}) \\&\quad = \text {Pr}^{\mathcal {M}}_{\sigma }(\varLambda ^{\ge \delta }) + \text {Pr}^{\mathcal {M}}_{\sigma }(\varLambda ^{< \delta }) \\&\quad \ge (1+ \lambda \delta )^{k} \cdot e^{-\lambda \delta (k+1) } \cdot \Big ( e^{(\lambda -{\text {E}({s_{0}})}) \delta } + {\text {E}({s_{0}})} \cdot \int _{0}^{\delta } e^{(\lambda -{\text {E}({s_{0}})}) t} \,\text {d}t\Big )\\&\quad \overset{*}{\ge } (1+ \lambda \delta )^{k} \cdot e^{-\lambda \delta (k+1) } \cdot \left( 1 + \lambda \delta \right) = (1+ \lambda \delta )^{k+1} \cdot e^{-\lambda \delta (k+1) }\ , \end{aligned}$$

where the inequality marked with \(*\) is due to \(e^{(\lambda -{\text {E}({s_{0}})}) \delta } \ge 1 + (\lambda -{\text {E}({s_{0}})}) \delta \) and \({\text {E}({s_{0}})} \cdot \int _{0}^{\delta } e^{(\lambda -{\text {E}({s_{0}})}) t} \,\text {d}t\ge {\text {E}({s_{0}})} \cdot \delta \) (using \(\lambda \ge {\text {E}({s_{0}})}\)), whose sum is at least \(1 + \lambda \delta \).

\(\underline{Case \,{s_{0}}\in \text {PS}:}\) Since \(\mathcal {M}\) is non-zeno, a state \(s\in \text {MS}\) is reached from \({s_{0}}\) within zero time almost surely (i.e., with probability one). From the previous case, it already follows that the Proposition holds for \({\mathcal {M}^{s}}\) with \(s\in \text {MS}\) and the set \(\#^{s}[{ k\delta + \delta }]^{{\le }{ k+1}}\). With \(\varPi _\text {MS}= \{s_{0} \xrightarrow {\kappa _{0}} \dots {\xrightarrow {\kappa _{n-1}} s_{n}}\in { FPaths ^{\mathcal {M}}} \mid s_n \in \text {MS}\text { and } \forall i < n :s_i \in \text {PS}\}\) we obtain

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{k\delta + \delta }]^{{\le }{ k+ 1}})&= \int _{\begin{array}{c} \pi \in \varPi _\text {MS}\\ last (\pi ) = s \end{array}} \text {Pr}^{{\mathcal {M}^{s}}}_{{\sigma [{\pi }]}}(\#^{s}[{ k\delta + \delta }]^{{\le }{ k+1}}) \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\pi )\\&\ge \int _{\begin{array}{c} \pi \in \varPi _\text {MS}\\ last (\pi ) = s \end{array}} (1+\lambda \delta )^{k+1} \cdot e^{-\lambda \delta (k+1) } \,\text {d}\text {Pr}^{\mathcal {M}}_{\sigma }(\pi )\\&= (1+\lambda \delta )^{k+1} \cdot e^{-\lambda \delta (k+1) } \cdot \text {Pr}^{\mathcal {M}}_{\sigma }(\varPi _\text {MS}) \\&= (1+\lambda \delta )^{k+1} \cdot e^{-\lambda \delta (k+1) } \ . \end{aligned}$$

\(\square \)

1.4 C.4 Proof of Proposition 5

Proposition 5

For MA \(\mathcal {M}\), scheduler \(\sigma \in \text {GM}^{}\), goal states \(G\subseteq S\), digitization constant \(\delta \in \mathbb {R}_{> 0}\) and time interval \(I\)

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{I_{}} G_{}) \in \text {Pr}^{\mathcal {M}}_{\sigma }([{\lozenge ^{I}_{\text {ds}} G}]) + \Big [{-}\varepsilon ^{\downarrow }_{}(I),\, \varepsilon ^{\uparrow }_{}(I)\Big ]. \end{aligned}$$

We show Eq. 3, that is,

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }([{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}] {\setminus } \lozenge ^{I} G) \le \varepsilon ^{\downarrow }_{}(I) \text { and } \text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{I} G{\setminus } [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}]) \le \varepsilon ^{\uparrow }_{}(I) \end{aligned}$$

for the remaining forms of the time interval \(I\).

\(\underline{Case \, I= [0, \infty ):}\) In this case we have \({\text {di}(I)}= \mathbb {N}\). It follows that

$$\begin{aligned}{}[{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}] = \lozenge ^{I} G= \{s_{0} \xrightarrow {\kappa _{0}} s_{1} \xrightarrow {\kappa _{1}} \dots \in { IPaths ^{\mathcal {M}}} \mid s_i \in G\text { for some } i \ge 0 \}. \end{aligned}$$

Hence,

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }([{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}] {\setminus } \lozenge ^{I} G) = \text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{I} G{\setminus } [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}]) = \text {Pr}^{\mathcal {M}}_{\sigma }(\emptyset ) = 0 = \varepsilon ^{\downarrow }_{}(I) = \varepsilon ^{\uparrow }_{}(I). \end{aligned}$$

\(\underline{Case \, I= [a,\infty ) \text { for } a= \text {di}_a\delta :}\) We have \({\text {di}(I)}= \{\text {di}_a+1, \text {di}_a+2, \dots \}\).

  • We show that \([{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}] {\setminus } \lozenge ^{I} G\subseteq \#^{}[{a}]^{{>}{\text {di}_a}}\). With Lemma 6 we obtain

    $$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }([{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}] {\setminus } \lozenge ^{I} G) \le \text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{a}]^{{>}{\text {di}_a}}) \le 1 - (1+ \lambda \delta )^{\text {di}_a} \cdot e^{- \lambda a} = \varepsilon ^{\downarrow }_{}(I). \end{aligned}$$

    Consider a path \(\pi \in [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}] {\setminus } \lozenge ^{I} G\). As \(\pi \notin \lozenge ^{I} G\), it follows that \(\pi \) has to reach (and leave) \(G\) within less than \(a\) time units. Let \(\bar{\pi }\) be the largest prefix of \({\text {di}(\pi )}\) that satisfies \( last (\bar{\pi }) \in G\). Our observations yield that \(\pi \) leaves \( last (\bar{\pi })\) before time point \(a\). Hence, \(\bar{\pi }\) is a prefix of \(\text {di}( pref _{ T }(\pi , a))\). Moreover, \({|\bar{\pi }|_{\text {ds}}} \in {\text {di}(I)}\) as \({\text {di}(\pi )}\in \lozenge ^{{\text {di}(I)}}_{\text {ds}} G\). It follows that \({| pref _{ T }(\pi , a)|_{\text {ds}}} \ge {|\bar{\pi }|_{\text {ds}}} > \text {di}_a\) which implies \(\pi \in \#^{}[{a}]^{{>}{\text {di}_a}}\).

  • Now consider a path \(\pi \in \lozenge ^{I} G{\setminus } [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}]\). \(\pi \) visits \(G\) at least once since \(\pi \in \lozenge ^{I} G\). Moreover, \({\text {di}(\pi )}\) does not visit \(G\) after \(\text {di}_a\) digitization steps due to \(\pi \notin [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}]\). This means \(\pi \) visits \(G\) only finitely often. Let \(\pi ' = s_{0} \xrightarrow {\kappa _{0}} \dots {\xrightarrow {\kappa _{n-1}} s_{n}}\) be the largest prefix of \(\pi \) such that \(s_n \in G\). Notice that \({|\pi '|_{\text {ds}}} \le \text {di}_a\) holds. Let \(\pi ' \xrightarrow {\kappa _{}} s\) be the prefix of \(\pi \) of length \(|\pi '|+1\). We show by contradiction that \(a\le T (\pi ' \xrightarrow {\kappa _{}} s) < a+ \delta \) holds:

    • If \( T (\pi ' \xrightarrow {\kappa _{}} s) < a\), then \( last (\pi ') \in G\) is left before time point \(a\) which contradicts \(\pi \in \lozenge ^{I} G\).

    • Further, assume that \( T (\pi ' \xrightarrow {\kappa _{}} s) \ge a+ \delta \). With Lemma 5 we obtain

      $$\begin{aligned} t(\kappa _{})&\ge a+ \delta - T (\pi ')\\&\ge a+ \delta - {|\pi '|_{\text {ds}}} \cdot \delta \\&\ge (\text {di}_a+ 1 - \underbrace{{|\pi '|_{\text {ds}}}}_{\le \text {di}_a}) \cdot \delta > 0\ . \end{aligned}$$

      Hence, \(\pi \) stays at \( last (\pi ')\) for at least \((\text {di}_a+ 1 - {|\pi '|_{\text {ds}}}) \cdot \delta \) time units which means that \(\text {di}(\pi ') \big ({\xrightarrow {\bot }} last (\pi ')\big )^{\text {di}_a+1 - {|\pi '|_{\text {ds}}}} = \bar{\pi }\) is a prefix of \({\text {di}(\pi )}\). Since \({|\bar{\pi }|_{\text {ds}}} = \text {di}_a+1 \in {\text {di}(I)}\), this contradicts \(\pi \notin [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}]\).

    We infer that \(\pi \) takes at least one transition in the time interval \([a, a+ \delta )\). The probability for this can be upper bounded by \(1-e^{-\lambda \delta }\), i.e.,

    $$\begin{aligned}&\text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{I} G{\setminus } [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}]) \\&\quad \le \text {Pr}^{\mathcal {M}}_{\sigma }(\{ \pi \in { IPaths ^{\mathcal {M}}} \mid \pi \text { takes a transition in time interval } [a, a+ \delta ) \}) \\&\quad \le 1-e^{-\lambda \delta } = \varepsilon ^{\uparrow }_{}(I). \end{aligned}$$

\(\underline{Case \, I= [a,b] \text { for } a= \text {di}_a\delta \text { and } b= \text {di}_b\delta :}\) We have \({\text {di}(I)}= \{\text {di}_a+1, \text {di}_a+2, \dots , \text {di}_b\}\).

  • As in the case “\(I= [a, \infty )\)”, we show that \([{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}] {\setminus } \lozenge ^{I} G\subseteq \#^{}[{a}]^{{>}{\text {di}_a}}\). With Lemma 6 we obtain

    $$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }([{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}] {\setminus } \lozenge ^{I} G) \le \text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{a}]^{{>}{\text {di}_a}}) \le 1 - (1+ \lambda \delta )^{\text {di}_a} \cdot e^{- \lambda a} = \varepsilon ^{\downarrow }_{}(I). \end{aligned}$$

    Let \(\pi \in [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}] {\setminus } \lozenge ^{I} G\) and let \(\bar{\pi }\) be the largest prefix of \({\text {di}(\pi )}\) with \( last (\bar{\pi }) \in G\) and \({|\bar{\pi }|_{\text {ds}}} \in {\text {di}(I)}\). Such a prefix exists due to \(\pi \in [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}]\). \(\pi \) reaches \( last (\bar{\pi })\) with at most \(\text {di}_b\) digitization steps and therefore within at most \(b\) time units (cf. Lemma 5). As \(\pi \notin \lozenge ^{I} G\), we conclude that \(\pi \) has to reach (and leave) \( last (\bar{\pi })\) within less than \(a\) time units. It follows that \({| pref _{ T }(\pi , a)|_{\text {ds}}} \ge {|\bar{\pi }|_{\text {ds}}} > \text {di}_a\) which implies \(\pi \in \#^{}[{a}]^{{>}{\text {di}_a}}\).

  • Next, let \(\pi \in \lozenge ^{I} G{\setminus } [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}]\) and let \(\pi ' = s_{0} \xrightarrow {\kappa _{0}} \dots {\xrightarrow {\kappa _{n-1}} s_{n}}\) be the largest prefix of \(\pi \) such that \(s_n \in G\) and \( T (\pi ') \le b\). Such a prefix exists due to \(\pi \in \lozenge ^{I} G\). We distinguish two cases.

    • If \({|\pi '|_{\text {ds}}} > \text {di}_b\), then \(\pi \in \#^{}[{b}]^{{>}{\text {di}_b}}\) since \( {| pref _{ T }(\pi , b)|_{\text {ds}}} \ge {|\pi '|_{\text {ds}}} > \text {di}_b\).

    • If \({|\pi '|_{\text {ds}}} \le \text {di}_b\), then \({|\pi '|_{\text {ds}}} \le \text {di}_a\) holds due to \(\pi \notin [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}]\). Similar to the case “\(I= [a,\infty )\)” we can show that \(\pi \) takes at least one transition in time interval \([a, a+ \delta )\).

    It follows that

    $$\begin{aligned}&\lozenge ^{I} G{\setminus } [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}] \\&\quad \subseteq \#^{}[{b}]^{{>}{\text {di}_b}} \cup \{ \pi \in { IPaths ^{\mathcal {M}}} \mid \pi \text { takes a transition in time interval } [a, a+ \delta ) \} \end{aligned}$$

    Hence,

    $$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{I} G{\setminus } [{\lozenge ^{{\text {di}(I)}}_{\text {ds}} G}]) \le 1 - (1+ \lambda \delta )^{\text {di}_b} \cdot e^{- \lambda b} + 1 - e^{-\lambda \delta } = \varepsilon ^{\uparrow }_{}(I). \end{aligned}$$
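
The error bounds derived in the cases above can be evaluated directly. The following Python sketch is a minimal illustration under the assumption that \(a\) and \(b\) are integer multiples of \(\delta \), with \(b = \infty \) encoding an unbounded interval; it computes \(\varepsilon ^{\downarrow }_{}(I)\) and \(\varepsilon ^{\uparrow }_{}(I)\) for the interval forms considered here.

```python
import math

def digitization_errors(a, b, delta, lam):
    """Error bounds (eps_down(I), eps_up(I)) for I = [a, b] as in Prop. 5.

    a, b  : interval bounds with a = di_a * delta and b = di_b * delta;
            use b = math.inf for an unbounded interval
    delta : digitization constant
    lam   : maximal exit rate of the MA
    """
    di_a = round(a / delta)
    eps_down = 1.0 - (1.0 + lam * delta) ** di_a * math.exp(-lam * a)
    if a == 0 and b == math.inf:          # I = [0, infinity): no error
        return 0.0, 0.0
    if b == math.inf:                     # I = [a, infinity)
        return eps_down, 1.0 - math.exp(-lam * delta)
    di_b = round(b / delta)               # I = [a, b]
    eps_up = (1.0 - (1.0 + lam * delta) ** di_b * math.exp(-lam * b)
              + 1.0 - math.exp(-lam * delta))
    return eps_down, eps_up

# e.g. I = [0, 3], maximal rate 4, digitization constant 0.01
print(digitization_errors(0.0, 3.0, 0.01, 4.0))
```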

D Comparison to single-objective analysis

We remark that the proof in [30, Theorem 5.3] cannot be adapted to show our result. The main reason is that the proof relies on an auxiliary lemma (cf. Footnote 8) which claims that

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{[0,b]} G\mid \#^{}[{\delta }]^{{<}{2}}) \le \text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{[0,b]} G) \end{aligned}$$
(11)

holds for all schedulers \(\sigma \in \text {GM}^{\mathcal {M}}\). We show that this claim does not hold. The intuition is as follows. Assume we observe that at most one Markovian transition is taken in \(\mathcal {M}\) within the first \(\delta \) time units (i.e., we observe a path in \(\#^{}[{\delta }]^{{<}{2}}\)). The lemma claims that under this observation the probability to reach \(G\) within \(b\) time units does not increase. We give a counterexample to illustrate that there are schedulers for which this is not true. Consider the MA \(\mathcal {M}\) from Fig. 12 and let \(\sigma \) be the scheduler for \(\mathcal {M}\) satisfying

$$\begin{aligned} \sigma (s_0 \xrightarrow {t_1} s_1 \xrightarrow {t_2} s_2,\alpha ) = {\left\{ \begin{array}{ll} 1 &{} \text {if } t_1 + t_2 > \delta \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Hence, \(\sigma \) chooses \(\alpha \) iff there are less than two digitization steps within the first \(\delta \) time units. It follows that the probability to reach \(G= \{s_3\}\) on a path in \(\#^{}[{\delta }]^{{\ge }{2}}\) is zero. We conclude that

$$\begin{aligned} \text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{[0,b]} \{s_3\})&= \text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{[0,b]} \{s_3\} \cap \#^{}[{\delta }]^{{<}{2}}) + \underbrace{\text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{[0,b]} \{s_3\} \cap \#^{}[{\delta }]^{{\ge }{2}})}_{=0} \\&= \text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{[0,b]} \{s_3\} \mid \#^{}[{\delta }]^{{<}{2}}) \cdot \underbrace{\text {Pr}^{\mathcal {M}}_{\sigma }(\#^{}[{\delta }]^{{<}{2}})}_{<1} \\&< \text {Pr}^{\mathcal {M}}_{\sigma }(\lozenge ^{[0,b]} \{s_3\} \mid \#^{}[{\delta }]^{{<}{2}}) \end{aligned}$$

which contradicts Eq. 11.
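To make the counterexample concrete, the following Monte-Carlo sketch (our addition) estimates both sides of Eq. (11) for the scheduler \(\sigma \) above. Since Fig. 12 is not reproduced here, the sketch assumes a minimal instantiation consistent with the text: \(s_0\) and \(s_1\) each have a single Markovian transition with rate \(\lambda \), and in \(s_2\) the (instantaneous) action \(\alpha \) leads to the goal state \(s_3\); all numerical values are hypothetical.

```python
import random

# Monte-Carlo sketch of the counterexample to Eq. (11) under an assumed instantiation
# of Fig. 12: s_0 --(rate lam)--> s_1 --(rate lam)--> s_2, and the instantaneous
# action alpha in s_2 leads to s_3. The scheduler sigma picks alpha iff t1 + t2 > delta.
lam, delta, b = 1.0, 0.5, 2.0      # hypothetical parameters
runs = 1_000_000
random.seed(0)

reach = few = 0
for _ in range(runs):
    t1 = random.expovariate(lam)   # sojourn time in s_0
    t2 = random.expovariate(lam)   # sojourn time in s_1
    few_steps = t1 + t2 > delta    # event "#[delta] < 2": at most one Markovian jump in [0, delta]
    few += few_steps
    reach += few_steps and (t1 + t2 <= b)  # sigma picks alpha (reaching s_3 at time t1 + t2) only on such paths

print("Pr(reach s_3 within b)              ~", reach / runs)
print("Pr(reach s_3 within b | #[delta]<2) ~", reach / few)   # strictly larger, contradicting Eq. (11)
```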

Fig. 12 MA \(\mathcal {M}\) (cf. “Appendix D”)

Table 2 Additional model details

E Further details for the experiments

E.1 Benchmark details

We provide additional information regarding our experiments on multi-objective MAs. Table 2 lists details of the considered MAs. In the following, we describe the considered case studies and objectives.

Job scheduling The job scheduling case study originates from [12] and was already discussed in Sect. 1. We consider N jobs that are executed on K identical processors; each of the N jobs is assigned a different rate between 1 and 3. A small simulation sketch of this model is given after the objective combinations below. We consider the following objectives.

  • \(\mathbb {E}_1\): Minimize the expected time until all jobs are completed.

  • \(\mathbb {E}_2\): Minimize the expected time until \(\lceil \frac{N}{2}\rceil \) jobs are completed.

  • \(\mathbb {E}_3\): Minimize the expected waiting time of the jobs.

  • \(\mathbb {P}\): Minimize the probability that the job with the lowest rate is completed before the job with the highest rate.

  • \(\mathbb {P}_1^\le \): Maximize the probability that all jobs are completed within \(\frac{N}{2K}\) time units.

  • \(\mathbb {P}_2^\le \): Maximize the probability that \(\lceil \frac{N}{2}\rceil \) jobs are completed within \(\frac{N}{4K}\) time units.

The objectives have been combined as follows (\({\mathbb {O}_{}}^i\) refers to the objectives considered in Column i of Table 1):

$$\begin{aligned} {\mathbb {O}_{}}^1 = (\mathbb {E}_1, \mathbb {E}_2, \mathbb {E}_3) \quad {\mathbb {O}_{}}^2 = (\mathbb {E}_1, \mathbb {P}^\le _2) \quad {\mathbb {O}_{}}^3 = (\mathbb {P}, \mathbb {E}_1, \mathbb {E}_2, \mathbb {E}_3) \quad {\mathbb {O}_{}}^4 = (\mathbb {P}, \mathbb {E}_3, \mathbb {P}^\le _1, \mathbb {P}^\le _2) \end{aligned}$$
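The following Monte-Carlo sketch (our addition, not part of our tool) illustrates the job scheduling model by estimating \(\mathbb {E}_1\) and \(\mathbb {P}_1^\le \) under one fixed, non-preemptive policy that always runs the jobs with the highest rates first; the concrete rate assignment and the policy are illustrative assumptions.

```python
import random

# Monte-Carlo sketch: expected completion time and Pr[all jobs done within N/(2K)]
# under the fixed policy "run the highest-rate jobs first" (illustrative only).
def simulate(N, K, runs=10_000, seed=0):
    random.seed(seed)
    rates = [1 + 2 * i / (N - 1) for i in range(N)]  # N different rates between 1 and 3 (assumed spacing)
    bound = N / (2 * K)                              # time bound of objective P_1^<=
    makespans, within = [], 0
    for _ in range(runs):
        pending = sorted(rates, reverse=True)        # highest rates first
        running, t = [], 0.0
        while pending or running:
            while pending and len(running) < K:      # keep all K processors busy
                running.append(pending.pop(0))
            # the next completion among the running jobs occurs after Exp(sum of rates);
            # the finishing job wins the race, i.e., is chosen proportionally to its rate
            t += random.expovariate(sum(running))
            finished = random.choices(range(len(running)), weights=running)[0]
            running.pop(finished)
        makespans.append(t)
        within += t <= bound
    return sum(makespans) / runs, within / runs

exp_completion, prob_in_time = simulate(N=12, K=3)
print("estimated E[time until all jobs completed]     =", exp_completion)
print("estimated Pr[all jobs completed within N/(2K)] =", prob_in_time)
```

Our analysis does not rely on simulation and optimizes over all policies; the sketch merely illustrates the quantities behind \(\mathbb {E}_1\) and \(\mathbb {P}_1^\le \) for a single policy.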

Polling The polling system is based on [48, 50]. It considers two stations, each having a separate queue storing up to K jobs of N different types. Jobs arrive at Station i (for \(i \in \{1,2\}\)) with rate \(\lambda _i\) as long as the queue of the station is not full. A server polls the two stations and processes the jobs by (nondeterministically) taking a job from a non-empty queue. The time for processing a job is given by a rate which depends on the type of the job. Removing a job from a queue is unreliable, i.e., there is a \(10\,\%\) chance that an already processed job stays in the queue. A simulation sketch under a fixed policy is given after the objective combinations below. For \(i \in \{1,2\}\) we consider the following objectives:

  • \(\mathbb {E}_i\): Maximize the expected number of processed jobs of Station i until its queue is full.

  • \(\mathbb {E}_{2+i}\): Minimize the expected sum of all waiting times of the jobs arriving at Station i until the queue of Station i is full.

  • \(\mathbb {P}^\le _i\): Minimize the probability that the queue of Station i is full within two time units.

The objectives have been combined as follows (\({\mathbb {O}_{}}^i\) refers to the objectives considered in Column i of Table 1):

$$\begin{aligned} {\mathbb {O}_{}}^1 = (\mathbb {E}_1, \mathbb {E}_2) \quad {\mathbb {O}_{}}^2 = (\mathbb {E}_1, \mathbb {E}_2, \mathbb {E}_3, \mathbb {E}_4) \quad {\mathbb {O}_{}}^3 = (\mathbb {P}^\le _1, \mathbb {P}^\le _2) \quad {\mathbb {O}_{}}^4 = (\mathbb {E}_1, \mathbb {E}_2, \mathbb {P}^\le _1, \mathbb {P}^\le _2) \end{aligned}$$
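The following sketch (our addition) estimates \(\mathbb {P}^\le _1\) for one fixed policy, namely always serving Station 1 first, under strong simplifications: all job types share a single processing rate and all rates are hypothetical. It is meant only to illustrate the dynamics of the polling model, not our analysis.

```python
import random

# Monte-Carlo sketch of objective P_1^<= under the fixed policy "always serve
# Station 1 first"; all job types share one processing rate mu (simplification),
# and lam1, lam2, mu, the horizon and K are hypothetical. Illustrative only.
def prob_queue1_full(K, lam1=3.0, lam2=2.0, mu=4.0, horizon=2.0, runs=100_000, seed=0):
    random.seed(seed)
    hits = 0
    for _ in range(runs):
        t, q1, q2, serving = 0.0, 0, 0, None
        while t < horizon and q1 < K:
            if serving is None and (q1 > 0 or q2 > 0):
                serving = 1 if q1 > 0 else 2              # policy: prefer Station 1
            events = []
            if q1 < K: events.append(("arrive1", lam1))   # arrivals only while the queue is not full
            if q2 < K: events.append(("arrive2", lam2))
            if serving is not None: events.append(("done", mu))
            t += random.expovariate(sum(r for _, r in events))
            if t >= horizon:
                break
            event = random.choices([e for e, _ in events], weights=[r for _, r in events])[0]
            if event == "arrive1": q1 += 1
            elif event == "arrive2": q2 += 1
            else:                                         # processing finished
                if random.random() < 0.9:                 # 10% chance the processed job stays queued
                    if serving == 1: q1 -= 1
                    else: q2 -= 1
                serving = None
        hits += q1 >= K
    return hits / runs

print("estimated Pr[queue of Station 1 full within 2 time units] =", prob_queue1_full(K=3))
```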

Stream This case study considers a client of a video streaming platform. The client consecutively receives N data packages and stores them in a buffer. The buffered packages are processed during the playback of the video. The time it takes to receive (or to process) a single package is modeled by an exponentially distributed delay. Whenever a package is received and the video is not playing, the client nondeterministically chooses whether it starts the playback or whether it keeps on buffering. The latter choice is not reliable, i.e., there is a \(1\,\%\) chance that the playback is started anyway. In case of a buffer underrun (Footnote 9), the playback is paused and the client waits for new packages to arrive. A small simulation sketch of this model is given after the objective combinations below. We analyzed the following objectives:

  • \(\mathbb {E}_1\): Minimize the expected buffering time until the playback is finished.

  • \(\mathbb {E}_2\): Minimize the expected number of buffer underruns during the playback.

  • \(\mathbb {E}_3\): Minimize the expected time to start the playback.

  • \(\mathbb {P}^\le _1\): Minimize the probability for a buffer underrun within 2 time units.

  • \(\mathbb {P}^\le _2\): Maximize the probability that the playback starts within 0.5 time units.

The objectives have been combined as follows (\({\mathbb {O}_{}}^i\) refers to the objectives considered in Column i of Table 1):

$$\begin{aligned} {\mathbb {O}_{}}^1 = (\mathbb {E}_1, \mathbb {E}_2) \quad {\mathbb {O}_{}}^2 = (\mathbb {E}_3, \mathbb {P}_1^\le ) \quad {\mathbb {O}_{}}^3 = (\mathbb {P}^\le _1, \mathbb {P}^\le _2) \quad {\mathbb {O}_{}}^4 = (\mathbb {E}_1, \mathbb {E}_3, \mathbb {P}^\le _1) \end{aligned}$$
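The following sketch (our addition) resolves the client's nondeterminism by a simple threshold policy, namely starting the playback once \(B\) packages are buffered, and estimates \(\mathbb {E}_1\) and \(\mathbb {E}_2\) for it. The receive and process rates as well as \(B\) are hypothetical, and we assume that after an underrun the playback resumes with the next arriving package; none of this is prescribed by the model description above.

```python
import random

# Monte-Carlo sketch of the stream client under a threshold policy (illustrative only):
# buffer B packages before starting playback (with the 1% chance of starting anyway);
# after an underrun, playback resumes with the next arriving package.
def simulate_stream(N, B, rate_in=2.0, rate_out=1.0, runs=10_000, seed=0):
    random.seed(seed)
    avg_buffering = avg_underruns = 0.0
    for _ in range(runs):
        received = processed = buffer = underruns = 0
        playing = started = False
        buffering = 0.0
        while processed < N:
            if not playing:
                if received == N:            # nothing left to buffer: play out the rest
                    playing = True
                    continue
                buffering += random.expovariate(rate_in)   # wait for the next package
                received += 1; buffer += 1
                if started or buffer >= B or random.random() < 0.01:
                    playing = started = True
            elif buffer == 0:                # buffer underrun: pause the playback
                underruns += 1
                playing = False
            elif received < N:               # race: next arrival vs. next processed package
                if random.random() < rate_in / (rate_in + rate_out):
                    received += 1; buffer += 1
                else:
                    processed += 1; buffer -= 1
            else:                            # all packages received: only processing left
                processed += 1; buffer -= 1
        avg_buffering += buffering / runs
        avg_underruns += underruns / runs
    return avg_buffering, avg_underruns

print(simulate_stream(N=30, B=4))  # (estimated E_1, estimated E_2) under this policy
```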

Mutex This case study considers a randomized mutual exclusion protocol based on [42, 50]. Three processes nondeterministically choose a job for which they need to enter the critical section. The amount of time a process spends in its critical section is given by a rate which depends on the chosen job. There are N different types of jobs. For each \(i \in \{1,2,3\}\), the following objectives are considered:

  • \(\mathbb {P}^\le _i\): Maximize the probability that Process i enters its critical section within 0.5 time units.

  • \(\mathbb {P}^\le _{3+i}\): Maximize the probability that Process i enters its critical section within 1 time unit.

The objectives have been combined as follows (\({\mathbb {O}_{}}^i\) refers to the objectives considered in Column i of Table 1):

$$\begin{aligned} {\mathbb {O}_{}}^1 = (\mathbb {P}^\le _1, \mathbb {P}^\le _2,\mathbb {P}^\le _3) \quad {\mathbb {O}_{}}^2 = (\mathbb {P}^\le _4, \mathbb {P}^\le _5,\mathbb {P}^\le _6) \end{aligned}$$

E.2 Comparison with PRISM

The detailed results of our experiments with PRISM on the multi-objective MDP benchmarks from [28] are given in Table 3; all run-times are in seconds. We depict the different benchmark instances with the number of states of the MDP (Column #states) and the considered combination of objectives (\(\lozenge \) represents an (untimed) probabilistic objective, \(\text {ER}\) an expected reward objective, and \(\le \) a step-bounded reward objective). Column pts lists the number of vertices of the obtained under-approximation. Column iter lists the time required for the iterative exploration of the set of achievable points as described in [28]. Column verif gives the verification time, i.e., the time for these iterations plus the conducted preprocessing steps. Column total indicates the total runtime of the tool, which includes model building and verification.

During our experiments we observed that PRISM does not detect that both objectives considered for the scheduler instances yield infinite rewards under every possible resolution of nondeterminism. As a result, the value-iteration-based procedure does not converge and PRISM reports that the maximal number of iterations is exceeded. Storm detects this issue and emits a corresponding warning to the user.

We further note that PRISM cannot compute Pareto curves for more than two objectives. However, it can answer achievability and numerical queries as introduced in [28] also for three or more objectives.

E.3 Comparison with IMCA

The verification times of Storm and IMCA on single-objective MAs are given in Table 4; all run-times are in seconds. We depict the different benchmark instances with the number of states of the MA (Column #states) and the considered objective (as discussed in Appendix E.1). Besides the run-times of IMCA, we depict the run-times of our implementation (effectively performing multi-objective model checking with only one objective) in Column Storm (multi). Column Storm (single) shows the run-times obtained when Storm is invoked with its standard (single-objective) model checking methods; the latter use the more recent Unif+ algorithm [15].
