Stochastic Control Theory
The purpose of this book is to provide an introduction to stochastic control theory
via the method of dynamic programming. The dynamic programming principle,
originated by R. Bellman in the 1950s, is known as the two-stage optimization
procedure. When we control the behavior of a stochastic dynamical system in order
to optimize some payoff or cost function that depends on the control inputs to
the system, the dynamic programming principle provides a powerful tool for analyzing
such problems. Exploiting the dependence of the value function (optimal payoff) on
its terminal cost function, we will construct a nonlinear semigroup which allows
one to formulate the dynamic programming principle and whose generator provides
the Hamilton–Jacobi–Bellman equation. Here we are mainly concerned with finite-time-horizon
stochastic control. We also apply the semigroup approach to control-stopping problems
and stochastic differential games, and provide examples
from the area of financial market models.
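As a rough illustration of the kind of equation the semigroup's generator yields (a standard sketch, not taken from the book itself), suppose the controlled state follows a diffusion dX_t = b(X_t, u_t) dt + \sigma(X_t, u_t) dW_t with running payoff f, terminal payoff g, and controls u_t taking values in a set U; all of this notation is assumed for the example. The value function V(t, x) then formally satisfies a Hamilton–Jacobi–Bellman equation of the form

\[
\partial_t V(t,x)
  + \sup_{u \in U} \Big\{ b(x,u)\cdot \nabla_x V(t,x)
  + \tfrac{1}{2}\,\mathrm{tr}\!\big(\sigma\sigma^{\top}(x,u)\,\nabla_x^2 V(t,x)\big)
  + f(x,u) \Big\} = 0,
\qquad V(T,x) = g(x),
\]

where the supremum over u reflects the two-stage optimization: optimize the immediate step given that the remaining horizon is already optimized through V.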