Strong-order conditions of Runge-Kutta method for stochastic optimal control problems
Introduction
Stochastic optimal control theory has posed challenging problems for many years in science, engineering, finance and economics. Since analytical solutions of stochastic optimal control problems are rarely available, recent work has concentrated on numerical solutions, and there have been many investigations of stochastic Runge-Kutta methods [2], [3], [4], [13], [14]. The Runge-Kutta method is one of the best-known numerical schemes for deterministic optimal control problems [1], [6], [8], [10], [11]. A Runge-Kutta scheme was applied to stochastic optimal control problems for the first time in [16].
We let $W(t)$ be a one-dimensional Brownian motion on the filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_t\}_{t \in [0,T]}, P)$, where $[0,T]$ is a fixed finite interval. On this probability space, the space $L^2_{\mathcal{F}}([0,T];\mathbb{R})$ of real-valued square-integrable $\{\mathcal{F}_t\}$-adapted processes is defined over $[0,T]$.
We consider a controlled stochastic differential equation (SDE)
$$dy(t) = f(t, y(t), u(t))\,dt + h(t, y(t))\,dW(t), \qquad y(0) = y_0, \qquad (1)$$
where f and h are continuously differentiable functions with respect to (y, u) and y, respectively, and their derivatives are uniformly bounded. Under these assumptions, Eqn. (1) has a unique solution [9]. Also, $u(t)$ is a control process taking values in $U_{ad}$, a closed convex subset of the control space $U$.
Here, we note that the diffusion term does not contain the control process. However, in the general case the control process could appear in the diffusion term.
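To make the setting concrete, the state equation can be simulated path by path once a control is fixed. The sketch below uses a plain Euler-Maruyama step, not the paper's Runge-Kutta scheme, for hypothetical linear dynamics with controlled drift $f = u\,y$ and control-free diffusion $h = 0.2\,y$; all names and coefficients are illustrative assumptions.

```python
import numpy as np

def simulate_controlled_sde(f, h, y0, u, T, N, rng):
    """Simulate one path of dy = f(t, y, u(t)) dt + h(t, y) dW by
    Euler-Maruyama. The control enters the drift only; the diffusion
    h is control-free, as in the setting above."""
    dt = T / N
    y = np.empty(N + 1)
    y[0] = y0
    t = 0.0
    for n in range(N):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        y[n + 1] = y[n] + f(t, y[n], u(t)) * dt + h(t, y[n]) * dW
        t += dt
    return y

# Illustrative linear dynamics: controlled drift u*y, uncontrolled diffusion 0.2*y
rng = np.random.default_rng(0)
path = simulate_controlled_sde(lambda t, y, u: u * y,   # f(t, y, u)
                               lambda t, y: 0.2 * y,    # h(t, y)
                               y0=1.0, u=lambda t: 0.5,
                               T=1.0, N=100, rng=rng)
```

Replacing the single Euler step inside the loop with a stochastic Runge-Kutta stage computation would give the scheme analyzed in the paper.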
Moreover, the objective of the optimal control problem is to minimize the cost functional
$$J(u) = E\left[\phi(y(T)) + \int_0^T g(t, y(t), u(t))\,dt\right],$$
where ϕ and g are continuously differentiable functions. A control process that solves this problem is called an optimal control.
In [16], we obtained the Runge-Kutta scheme for stochastic optimal control problems by using the discretize-then-optimize approach. Once a Runge-Kutta discretization of the state variable y is given, the Runge-Kutta discretization of the adjoint pair is obtained by means of the Lagrangian method. The beauty of this method is that the Runge-Kutta discretization of q is derived directly.
Furthermore, the accuracy of the Runge-Kutta scheme is measured by either the strong-order or the weak-order convergence criterion. In this work, our aim is to derive strong-order conditions of the Runge-Kutta scheme for stochastic optimal control problems.
We let $Y_N$ be a numerical approximation to $X(T)$ after N steps with constant step size $\Delta = T/N$; then $Y$ is said to converge strongly to $X$ with order $\gamma > 0$ if there exists a constant $C > 0$, which does not depend on $\Delta$, and a $\Delta_0 > 0$ such that
$$E\,|X(T) - Y_N| \le C \Delta^{\gamma} \quad \text{for all } \Delta \in (0, \Delta_0).$$
Here, by assuming exact initial values, the Stratonovich-Taylor expansions of the exact solution and of the solution produced by the Runge-Kutta scheme are compared to find the order of accuracy. In the Runge-Kutta scheme for stochastic optimal control problems, the Runge-Kutta coefficients of the adjoint process were obtained in terms of the Runge-Kutta coefficients of the state process in [16]. This yields order conditions additional to those of the classical Runge-Kutta method for SDEs [2], [3] for strong-order accuracy. In this work, such order conditions are derived explicitly.
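The strong order γ can also be estimated empirically by regressing log E|X(T) − Y_N| against log Δ over several step sizes. The sketch below does this for Euler-Maruyama (strong order 0.5) applied to geometric Brownian motion, whose exact solution is known in closed form; it illustrates only the measurement procedure, not the order-1.5 Runge-Kutta scheme of the paper, and all parameter values are assumptions.

```python
import numpy as np

def strong_order_estimate(mu=0.1, sigma=0.8, y0=1.0, T=1.0,
                          dts=(2.0**-4, 2.0**-5, 2.0**-6, 2.0**-7),
                          n_paths=1000, seed=42):
    """Empirical strong order of Euler-Maruyama on geometric Brownian
    motion dY = mu*Y dt + sigma*Y dW, whose exact Ito solution is
    Y(T) = y0 * exp((mu - sigma^2/2) T + sigma W(T))."""
    rng = np.random.default_rng(seed)
    errs = []
    for dt in dts:
        n = int(round(T / dt))
        dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
        y = np.full(n_paths, y0)
        for k in range(n):                       # Euler-Maruyama steps
            y = y + mu * y * dt + sigma * y * dW[:, k]
        exact = y0 * np.exp((mu - 0.5 * sigma**2) * T + sigma * dW.sum(axis=1))
        errs.append(np.mean(np.abs(y - exact)))  # strong error E|X(T) - Y_N|
    # the slope of log(error) against log(dt) estimates the strong order
    slope = np.polyfit(np.log(dts), np.log(errs), 1)[0]
    return slope

slope = strong_order_estimate()   # for Euler-Maruyama, roughly 0.5
```

The same regression applied to an order-1.5 scheme would produce a slope close to 1.5, which is how the numerical experiment of Section 4 confirms the derived conditions.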
The paper is organized as follows. In Section 2, we give the optimality conditions and the Runge-Kutta discretization. In Section 3, we state the Runge-Kutta scheme and obtain the strong order-1.5 conditions. In Section 4, we present a numerical experiment to confirm the convergence order. Finally, we conclude and give an outlook on future studies.
Stochastic optimal control problem and discretization
We now state our optimal control problem. We assume that f, h, ϕ and g are continuously differentiable functions and that the problem has a unique solution [17].
Strong-order conditions of the Runge-Kutta scheme for the stochastic optimal control problems
Now, we have a discrete state equation and a discrete cost functional. In the following proposition (Proposition 3.1 of [16]), the discrete first-order optimality conditions of the discretized problem are obtained.
Numerical application
In this section, we choose a numerical example whose exact solution is known, so that convergence rates can be computed explicitly. To solve the optimization problem, we employ the gradient-descent method with a stopping criterion, using 1000 Monte-Carlo sample paths.
Example. We consider the following Black-Scholes type of optimal control problem [7].
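A minimal version of such an experiment can combine Monte-Carlo cost evaluation with finite-difference gradient descent and a stopping criterion on the cost decrease. The sketch below assumes a hypothetical scalar problem, minimizing $E[(y(T) - \text{target})^2]$ for $dy = u\,y\,dt + \sigma y\,dW$ with a constant control; none of the dynamics, cost or numbers are taken from the paper's example.

```python
import numpy as np

def cost(u, y0=1.0, sigma=0.2, target=2.0, T=1.0, n_steps=50,
         n_paths=1000, seed=7):
    """Monte-Carlo estimate of J(u) = E[(y(T) - target)^2] for the
    scalar SDE dy = u*y dt + sigma*y dW with constant control u.
    A fixed seed gives common random numbers across evaluations,
    so J is a smooth deterministic function of u."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    y = np.full(n_paths, y0)
    for k in range(n_steps):                 # Euler-Maruyama
        y = y + u * y * dt + sigma * y * dW[:, k]
    return np.mean((y - target) ** 2)

def gradient_descent(u0=0.0, lr=0.1, tol=1e-6, max_iter=200, eps=1e-4):
    """Finite-difference gradient descent with stopping criterion
    |J(u_{k+1}) - J(u_k)| < tol on the cost decrease."""
    u, J = u0, cost(u0)
    for _ in range(max_iter):
        grad = (cost(u + eps) - cost(u - eps)) / (2.0 * eps)
        u_new = u - lr * grad
        J_new = cost(u_new)
        converged = abs(J_new - J) < tol
        u, J = u_new, J_new
        if converged:
            break
    return u, J

u_opt, J_opt = gradient_descent()
```

In the paper's setting, the finite-difference gradient would be replaced by the adjoint-based gradient from the discrete optimality conditions, which is far cheaper when the control is a full process rather than a single constant.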
Conclusion and outlook
In this study, we have derived strong-order conditions of the Runge-Kutta scheme for stochastic optimal control problems. To this end, we have compared the Stratonovich-Taylor expansion of the exact solution with that of the approximation defined by the Runge-Kutta scheme. This is the first time that a stochastic Runge-Kutta scheme applied to optimal control of SDEs has been analyzed for the strong order-1.5 conditions. Compared to the Runge-Kutta schemes of
References (17)
- et al., High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations, Appl. Numer. Math. (1996)
- et al., Computation of order conditions for symplectic partitioned Runge-Kutta schemes with application to optimal control, Numer. Math. (2006)
- Runge-Kutta Methods for Stochastic Differential Equations (1999)
- et al., Order conditions of stochastic Runge-Kutta methods by B-series, SIAM J. Numer. Anal. (2000)
- Numerical Methods for Ordinary Differential Equations (2003)
- et al., Second-order Runge-Kutta approximations in control constrained optimal control, SIAM J. Numer. Anal. (2001)
- et al., An effective gradient projection method for stochastic optimal control, Int. J. Numer. Anal. Model. (2013)
- Runge-Kutta methods in optimal control and the transformed adjoint system, Numer. Math. (2000)