Pierre Menard
IMT

Model-free Posterior Sampling via Learning Rate Randomization

Oct 27, 2023
Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Pierre Perrault, Michal Valko, Pierre Menard

Demonstration-Regularized RL

Oct 26, 2023
Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Alexey Naumov, Pierre Perrault, Michal Valko, Pierre Menard

Sharp Deviations Bounds for Dirichlet Weighted Sums with Application to analysis of Bayesian algorithms

Apr 06, 2023
Denis Belomestny, Pierre Menard, Alexey Naumov, Daniil Tiapkin, Michal Valko

Fast Rates for Maximum Entropy Exploration

Mar 14, 2023
Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Pierre Perrault, Yunhao Tang, Michal Valko, Pierre Menard

Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees

Sep 28, 2022
Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Mark Rowland, Michal Valko, Pierre Menard

From Dirichlet to Rubin: Optimistic Exploration in RL without Bonuses

May 16, 2022
Daniil Tiapkin, Denis Belomestny, Eric Moulines, Alexey Naumov, Sergey Samsonov, Yunhao Tang, Michal Valko, Pierre Menard

UCB Momentum Q-learning: Correcting the bias without forgetting

Mar 01, 2021
Pierre Menard, Omar Darwiche Domingues, Xuedong Shang, Michal Valko

The Influence of Shape Constraints on the Thresholding Bandit Problem

Jun 17, 2020
James Cheshire, Pierre Menard, Alexandra Carpentier

Thresholding Bandit for Dose-ranging: The Impact of Monotonicity

Jul 24, 2018
Aurélien Garivier, Pierre Ménard, Laurent Rossi

KL-UCB-switch: optimal regret bounds for stochastic bandits from both a distribution-dependent and a distribution-free viewpoints

May 14, 2018
Aurélien Garivier, Hédi Hadiji, Pierre Menard, Gilles Stoltz
