Analysis of Thompson sampling for stochastic sleeping bandits
A. Chatterjee, S. Jain, R. Vaish, Y. Narahari
Published by AUAI Press, Corvallis
2017
Abstract
We study a variant of the stochastic multiarmed bandit problem where the set of available arms varies arbitrarily with time (also known as the sleeping bandit problem). We focus on the Thompson Sampling algorithm and consider a regret notion defined with respect to the best available arm. Our main result is an O(log T) regret bound for Thompson Sampling, which generalizes a similar bound known for this algorithm from the classical bandit setting. Our bound also matches (up to constants) the best-known lower bound for the sleeping bandit problem. We show via simulations that Thompson Sampling outperforms the UCB-style AUER algorithm for the sleeping bandit problem.
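To illustrate the algorithm and regret notion described in the abstract, the following Python sketch shows Beta-Bernoulli Thompson Sampling restricted at each round to the currently available arms, with regret measured against the best available arm. This is an illustrative assumption, not the paper's implementation; the arm means and availability pattern are made up for the example.

import random

def thompson_sampling_sleeping(true_means, availability, horizon, rng=random.Random(0)):
    """Illustrative sketch of Thompson Sampling for sleeping bandits.

    true_means   : list of Bernoulli reward means, one per arm (unknown to the learner)
    availability : callable t -> set of arm indices available at round t
    horizon      : number of rounds T
    Returns cumulative regret against the best *available* arm of each round.
    """
    n_arms = len(true_means)
    successes = [1] * n_arms   # Beta(1, 1) priors on each arm's mean
    failures = [1] * n_arms
    regret = 0.0

    for t in range(horizon):
        avail = availability(t)
        # Sample a mean estimate for each available arm and play the argmax.
        samples = {a: rng.betavariate(successes[a], failures[a]) for a in avail}
        arm = max(samples, key=samples.get)

        # Observe a Bernoulli reward and update the posterior of the played arm.
        reward = 1 if rng.random() < true_means[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward

        # Regret is defined with respect to the best arm available this round.
        regret += max(true_means[a] for a in avail) - true_means[arm]
    return regret

# Toy run: 3 arms; arm 2 sleeps on even-numbered rounds.
means = [0.3, 0.5, 0.7]
avail_fn = lambda t: {0, 1, 2} if t % 2 else {0, 1}
print(thompson_sampling_sleeping(means, avail_fn, horizon=10000))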
About the journal
Journal: Uncertainty in Artificial Intelligence - Proceedings of the 33rd Conference, UAI 2017
Publisher: AUAI Press, Corvallis