Chapter 1
Markov Decision Processes
1.1. Introduction
This book addresses a class of decision problems commonly called sequential decision problems under uncertainty. The first feature of such problems lies in the relation between current and future decisions: they consist not of one decision problem but of several, arranged in a sequence. At each step of this sequence, the agent (actor or decision-maker) must choose the current action while taking into account its effect on the solution of future problems. This sequential feature is also typical of planning problems in artificial intelligence and is often linked to shortest path methods in graph theory. The second characteristic of the problems discussed in these pages is the uncertainty in the consequences of the available decisions (actions): the agent does not know in advance, in a deterministic form, the effects of its decisions. Such problems therefore fall within the scope of the various theories of decision under uncertainty, which suggest different formalization and resolution approaches. Among these approaches, the standard theory of expected utility maximization deserves specific mention.
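To make the principle of expected utility maximization concrete, here is a minimal sketch (the actions, probabilities, and utilities are invented for illustration, not taken from the book): an agent weighs each action by the probability-weighted utility of its outcomes and picks the action with the highest expected utility.

```python
# Toy example of expected utility maximization (all numbers are illustrative).
# Each action maps to a list of (probability, utility) pairs over its outcomes.
actions = {
    "safe":  [(1.0, 5.0)],                # certain, modest payoff
    "risky": [(0.5, 12.0), (0.5, 0.0)],   # uncertain, higher payoff on average
}

def expected_utility(outcomes):
    """Probability-weighted sum of utilities: E[U] = sum_i p_i * u_i."""
    return sum(p * u for p, u in outcomes)

# The standard theory prescribes the action maximizing expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
```

Here the "risky" action is prescribed, since its expected utility (6.0) exceeds that of the "safe" action (5.0).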
Consequently, problems of sequential decision under uncertainty combine the two problematics of sequential decision and decision under uncertainty. Markov decision processes (MDPs) provide a general mathematical formalism for representing shortest path problems in stochastic environments. This formalism ...
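As a rough sketch of this idea (the states, transition probabilities, and solution method here are my own toy choices, not the book's), a shortest path problem in a stochastic environment can be written as a small MDP and solved by value iteration, yielding the minimal expected number of steps to the goal from each state:

```python
# Toy stochastic shortest-path MDP (illustrative numbers only).
# States 0 and 1 are non-goal; state 2 is the absorbing goal.
# transitions[state][action] = list of (probability, next_state);
# every step incurs a cost of 1, so the value of a state is the
# minimal expected number of steps to reach the goal.
transitions = {
    0: {"a": [(0.8, 1), (0.2, 0)], "b": [(1.0, 0)]},
    1: {"a": [(0.9, 2), (0.1, 0)], "b": [(1.0, 1)]},
}
GOAL = 2

def value_iteration(eps=1e-9):
    """Iterate the Bellman optimality update until values stabilize."""
    V = {0: 0.0, 1: 0.0, GOAL: 0.0}  # goal has cost-to-go 0
    while True:
        delta = 0.0
        for s, acts in transitions.items():
            # For each action: step cost 1 plus expected cost-to-go.
            best = min(
                1.0 + sum(p * V[s2] for p, s2 in dist)
                for dist in acts.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = value_iteration()
```

In this toy instance the fixed point is V[1] = 1.25 and V[0] = 2.5: the self-looping action "b" is never worth taking, and the expected path lengths follow from the transition probabilities of action "a".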