%0 Conference Proceedings
%F qest13a
%A Akshay, S.
%A Bertrand, N.
%A Haddad, S.
%A Hélouët, L.
%T The steady-state control problem for Markov decision processes
%B 10th International Conference on Quantitative Evaluation of SysTems (QEST'13)
%V 8054
%P 290-304
%S LNCS
%C Buenos Aires, Argentina
%X This paper addresses a control problem for probabilistic models in the setting of Markov decision processes (MDP). We are interested in the steady-state control problem, which asks, given an ergodic MDP M and a distribution \delta, whether there exists a (history-dependent randomized) policy \pi ensuring that the steady-state distribution of M under \pi is exactly \delta. We first show that stationary randomized policies suffice to achieve a given steady-state distribution. We then infer that the steady-state control problem is decidable for MDP, and can be represented as a linear program, which is solvable in PTIME. This decidability result extends to labeled MDP (LMDP), where the objective is a steady-state distribution on labels carried by the states, and we provide a PSPACE algorithm. We also show that a related steady-state language inclusion problem is decidable in EXPTIME for LMDP. Finally, we prove that if we consider MDP under partial observation (POMDP), the steady-state control problem becomes undecidable.
%U http://www.irisa.fr/sumo/Publis/PDF/qest13a.pdf
%8 August
%D 2013
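The abstract's first result states that a stationary randomized policy suffices to realize a steady-state distribution: fixing such a policy collapses the MDP into an ordinary Markov chain whose stationary distribution is the long-run state distribution. The sketch below illustrates this on a toy two-state MDP; all states, actions, transition probabilities, and the uniform policy are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch: a stationary randomized policy on a small ergodic MDP
# induces a Markov chain, whose steady-state distribution we approximate by
# power iteration. All numbers below are made up for demonstration.

# transitions[s][a][t] = probability of moving from state s to state t
# when action a is played in s.
transitions = {
    0: {"a": [0.9, 0.1], "b": [0.2, 0.8]},
    1: {"a": [0.5, 0.5], "b": [0.1, 0.9]},
}

# Stationary randomized policy: policy[s][a] = probability of playing a in s.
policy = {
    0: {"a": 0.5, "b": 0.5},
    1: {"a": 0.5, "b": 0.5},
}

def induced_chain(transitions, policy):
    """Collapse the MDP under the fixed policy into a plain Markov chain."""
    n = len(transitions)
    return [
        [sum(policy[s][a] * transitions[s][a][t] for a in policy[s])
         for t in range(n)]
        for s in range(n)
    ]

def steady_state(chain, iters=10_000):
    """Approximate the stationary distribution by power iteration."""
    n = len(chain)
    dist = [1.0 / n] * n
    for _ in range(iters):
        dist = [sum(dist[s] * chain[s][t] for s in range(n))
                for t in range(n)]
    return dist

chain = induced_chain(transitions, policy)
delta = steady_state(chain)
print(delta)  # steady-state distribution achieved by this policy: [0.4, 0.6]
```

Deciding whether a *given* target distribution is achievable is, per the abstract, a linear-programming feasibility question over such policies; the sketch only shows the forward direction, from a fixed policy to its steady state.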