
Chen YAN

Near-optimal policies for restless bandits

Thursday, 15 December 2022

Abstract:
Bandits are one of the most basic examples of decision-making under uncertainty. A Markovian restless bandit can be seen as the following sequential allocation problem: at each decision epoch, one or several arms are activated (pulled); every arm generates an instantaneous reward that depends on its state and on whether it was activated; the state of each arm then changes in a Markovian fashion, according to an underlying transition matrix. Both the rewards and the transition matrices are known, and the new state is revealed to the decision maker before its next decision. The word "restless" emphasizes that arms that are not activated can also change state, which generalizes the simpler rested bandits. In principle, the above problem can be solved by dynamic programming, since it is a Markov decision process. The challenge is the curse of dimensionality: the number of possible states and actions grows exponentially with the number of arms. Consequently, the focus is on designing policies that reconcile computational efficiency with close-to-optimal performance.
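To make the model concrete, here is a minimal simulation sketch (not taken from the thesis): it instantiates N arms with one transition matrix and one reward vector per action, activates M arms per epoch with a simple myopic heuristic, and lets every arm move at every step, which is what makes the bandit restless. All names and sizes (N, S, M, T, myopic_policy) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not from the thesis): N arms, S states per arm,
# M arms activated at each decision epoch, horizon T.
N, S, M, T = 5, 3, 2, 100

def random_stochastic(shape):
    """Random row-stochastic matrices."""
    m = rng.random(shape)
    return m / m.sum(axis=-1, keepdims=True)

# One (P, R) pair per action: index 0 = passive, 1 = active.
P = np.stack([random_stochastic((N, S, S)),
              random_stochastic((N, S, S))])   # P[a, n, s, s']
R = rng.random((2, N, S))                      # R[a, n, s]

def myopic_policy(state):
    """Activate the M arms with the largest one-step reward gain (a naive heuristic)."""
    gain = R[1, np.arange(N), state] - R[0, np.arange(N), state]
    return np.argsort(gain)[-M:]

state = rng.integers(S, size=N)
total_reward = 0.0
for t in range(T):
    active = myopic_policy(state)
    action = np.zeros(N, dtype=int)
    action[active] = 1
    total_reward += R[action, np.arange(N), state].sum()
    # Every arm moves, even the passive ones -- this is the "restless" part.
    state = np.array([rng.choice(S, p=P[action[n], n, state[n]]) for n in range(N)])

print(f"average reward per epoch: {total_reward / T:.3f}")
```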
 
In this thesis, we construct computationally efficient policies with provable performance bounds, which may differ depending on certain properties of the problem. We first investigate the classical Whittle index policy (WIP) on infinite-horizon problems and prove that, when it is asymptotically optimal under the global attractor assumption, it almost always converges to the optimal value exponentially fast. Applying WIP requires the additional technical assumption of indexability. To get around this, we next study the LP-index policy, which is well-defined for any problem and shares the same exponential convergence speed as WIP under similar assumptions.
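As an illustration of how a Whittle index can be computed in practice, the sketch below bisects on the passivity subsidy of a single arm, solving the subsidized single-arm problem by value iteration under a discounted criterion for simplicity (the thesis works with infinite-horizon problems; this is only a generic reconstruction, not the thesis's method). It assumes the arm is indexable, and the bracket, tolerance and function names are illustrative.

```python
import numpy as np

def whittle_index(P0, P1, R0, R1, s, gamma=0.95, tol=1e-6):
    """
    Whittle index of state s for a single arm, found by bisection on the
    passivity subsidy lam: the index is the lam at which the arm is indifferent
    between the active and the passive action in state s (indexability assumed).
    The subsidized single-arm MDP is solved by discounted value iteration.
    """
    def advantage(lam):
        S = len(R0)
        V = np.zeros(S)
        for _ in range(10_000):
            Qp = R0 + lam + gamma * P0 @ V   # passive action, with subsidy lam
            Qa = R1 + gamma * P1 @ V         # active action
            V_new = np.maximum(Qp, Qa)
            if np.max(np.abs(V_new - V)) < tol:
                V = V_new
                break
            V = V_new
        Qp = R0 + lam + gamma * P0 @ V
        Qa = R1 + gamma * P1 @ V
        return Qa[s] - Qp[s]                 # > 0: active preferred, < 0: passive preferred

    lo, hi = -10.0, 10.0                     # bracket assumed wide enough for the example
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if advantage(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# The Whittle index policy then activates, at each epoch, the M arms whose
# current states carry the largest precomputed indices.
```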
 
In the infinite-horizon setting, the global attractor assumption is always needed for asymptotic optimality. We next study the problem under a finite horizon, so that this assumption is no longer a concern. Instead, LP-compatibility and non-degeneracy are required for asymptotic optimality and a faster convergence rate. We construct the finite-horizon LP-index policy, as well as the LP-update policy, which amounts to re-solving the LP-index policy as the process evolves. This LP-update policy is then generalized to the broader framework of weakly coupled MDPs, together with a generalization of the non-degeneracy condition. When it is satisfied on the weakly coupled MDPs, this condition allows a more efficient implementation of the LP-update policy as well as a faster convergence rate.
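To give an idea of the linear program underlying LP-based policies of this kind, the sketch below writes a finite-horizon LP relaxation for homogeneous arms: the variables are expected state-action occupation fractions, subject to flow-conservation and per-epoch activation-budget constraints. This is a generic reconstruction under my own assumptions, not the thesis's formulation; names such as finite_horizon_lp, mu0 and alpha are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def finite_horizon_lp(P, R, mu0, alpha, T):
    """
    Finite-horizon LP relaxation of a restless bandit with homogeneous arms.
    Variables y[t, s, a]: expected fraction of arms in state s taking action a
    at epoch t.  P[a] is the S x S transition matrix of action a, R[a] its
    reward vector, mu0 the initial state distribution, alpha the fraction of
    arms activated per epoch.
    """
    S = len(mu0)
    n_var = T * S * 2
    idx = lambda t, s, a: (t * S + s) * 2 + a

    # Objective: maximize total expected reward (linprog minimizes, hence the sign).
    c = np.zeros(n_var)
    for t in range(T):
        for s in range(S):
            for a in range(2):
                c[idx(t, s, a)] = -R[a][s]

    A_eq, b_eq = [], []
    # Flow constraints: occupation at epoch t is the push-forward of epoch t-1.
    for t in range(T):
        for s in range(S):
            row = np.zeros(n_var)
            for a in range(2):
                row[idx(t, s, a)] = 1.0
            if t == 0:
                b_eq.append(mu0[s])
            else:
                for sp in range(S):
                    for a in range(2):
                        row[idx(t - 1, sp, a)] -= P[a][sp, s]
                b_eq.append(0.0)
            A_eq.append(row)
    # Budget constraints: a fraction alpha of the arms is active at every epoch.
    for t in range(T):
        row = np.zeros(n_var)
        for s in range(S):
            row[idx(t, s, 1)] = 1.0
        A_eq.append(row)
        b_eq.append(alpha)

    res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=[(0, None)] * n_var, method="highs")
    return -res.fun, res.x.reshape(T, S, 2)
```

An LP-update-style policy would, roughly, re-solve such a relaxation at every epoch starting from the current empirical state distribution and use the first-step occupation measures to decide which arms to activate; the sketch above only computes the relaxation's value and occupation measures.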

Date and Location

Thursday, 15 December 2022 at 2:00 pm
Amphitheatre 1, Tour IRMA, 51 rue des mathématiques, 38610 Gières

Supervisors

Nicolas GAST
Bruno GAUJAL

Jury Composition

David Alan GOLDBERG
Associate Professor, Cornell University (Reviewer)
Bruno SCHERRER
Research Scientist, Inria Nancy (Reviewer)
Jérôme MALICK
Senior Researcher, CNRS (Examiner)
Nguyễn KIM THANG
Professor, Université Grenoble Alpes (Examiner)
Benjamin LEGROS
Associate Professor, EM Normandie (Examiner)

Published on 9 December 2022

Updated on 9 December 2022