A Bandit Approach to Multiple Testing with False Discovery Control

Sep 06, 2018

Kevin Jamieson, Lalit Jain

We propose an adaptive sampling approach for multiple testing which aims to maximize statistical power while ensuring anytime false discovery control. We consider $n$ distributions whose means are partitioned by whether they are below or equal to a baseline (nulls), versus above the baseline (actual positives). In addition, each distribution can be sequentially and repeatedly sampled. Inspired by the multi-armed bandit literature, we provide an algorithm that takes as few samples as possible to exceed a target true positive proportion (i.e., the proportion of actual positives discovered) while giving anytime control of the false discovery proportion (the proportion of discoveries that are nulls). Our sample complexity results match known information-theoretic lower bounds, and through simulations we show a substantial performance improvement over uniform sampling and an adaptive elimination-style algorithm. Given the simplicity and sample efficiency of the approach, the method has promise for wide adoption in the biological sciences, clinical testing for drug discovery, and online A/B/n testing problems.
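
The flavor of the abstract's setup can be illustrated with a toy sampler: pull the most promising undecided arm, and declare a discovery once an anytime lower confidence bound on its mean clears the baseline. This is a minimal sketch under simplifying assumptions (unit-variance Gaussian observations, a loose union-bound confidence radius), not the paper's algorithm or its FDR guarantee.

```python
import math
import random

def adaptive_discovery(means, mu0, budget, delta=0.05, seed=0):
    """Toy adaptive sampler: repeatedly pull the undecided arm with the
    highest upper confidence bound, and mark an arm as a discovery once
    its anytime lower confidence bound exceeds the baseline mu0.
    Illustrative sketch only -- not the algorithm from the paper."""
    rng = random.Random(seed)
    n = len(means)
    counts = [0] * n
    sums = [0.0] * n
    discovered = set()

    def radius(t):
        # Anytime confidence radius from a loose union bound over
        # arms and sample counts.
        return math.sqrt(2 * math.log(4 * n * t * t / delta) / t)

    def ucb(i):
        if counts[i] == 0:
            return float("inf")  # unsampled arms are maximally promising
        return sums[i] / counts[i] + radius(counts[i])

    for _ in range(budget):
        candidates = [i for i in range(n) if i not in discovered]
        if not candidates:
            break
        i = max(candidates, key=ucb)
        counts[i] += 1
        sums[i] += rng.gauss(means[i], 1.0)  # unit-variance Gaussian reward
        # Declare a discovery when the lower bound clears the baseline.
        if sums[i] / counts[i] - radius(counts[i]) > mu0:
            discovered.add(i)
    return discovered
```

For example, with eight null arms at the baseline and two arms above it, the sampler concentrates its pulls on the promising arms and returns their indices as discoveries.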

Non-stochastic Best Arm Identification and Hyperparameter Optimization

Feb 27, 2015

Kevin Jamieson, Ameet Talwalkar

Pure-Exploration for Infinite-Armed Bandits with General Arm Reservoirs

Nov 15, 2018

Maryam Aziz, Kevin Jamieson, Javed Aslam

Open Loop Hyperparameter Optimization and Determinantal Point Processes

Feb 14, 2018

Jesse Dodge, Kevin Jamieson, Noah A. Smith

The Simulator: Understanding Adaptive Sampling in the Moderate-Confidence Regime

Feb 16, 2017

Max Simchowitz, Kevin Jamieson, Benjamin Recht

Finite Sample Prediction and Recovery Bounds for Ordinal Embedding

Jun 22, 2016

Lalit Jain, Kevin Jamieson, Robert Nowak

On the Detection of Mixture Distributions with applications to the Most Biased Coin Problem

Mar 25, 2016

Kevin Jamieson, Daniel Haas, Ben Recht

lil' UCB : An Optimal Exploration Algorithm for Multi-Armed Bandits

Dec 27, 2013

Kevin Jamieson, Matthew Malloy, Robert Nowak, Sébastien Bubeck

On Finding the Largest Mean Among Many

Jun 17, 2013

Kevin Jamieson, Matthew Malloy, Robert Nowak, Sebastien Bubeck

Query Complexity of Derivative-Free Optimization

Sep 11, 2012

Kevin G. Jamieson, Robert D. Nowak, Benjamin Recht

This paper provides lower bounds on the convergence rate of Derivative-Free Optimization (DFO) with noisy function evaluations, exposing a fundamental and unavoidable gap between the performance of algorithms with access to gradients and those with access only to function evaluations. However, there are situations in which DFO is unavoidable, and for such situations we propose a new DFO algorithm that is proved to be near optimal for the class of strongly convex objective functions. A distinctive feature of the algorithm is that it uses only Boolean-valued function comparisons, rather than function evaluations. This makes the algorithm useful in an even wider range of applications, such as optimization based on paired comparisons from human subjects. We also show that regardless of whether DFO is based on noisy function evaluations or Boolean-valued function comparisons, the convergence rate is the same.
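
The idea of optimizing from Boolean comparisons alone can be sketched with a toy routine: propose a coordinate step, accept it on a majority vote of noisy comparisons f(y) < f(x), and shrink the step size on rejection. This is an illustrative sketch under assumed parameters (Gaussian comparison noise, a fixed vote count), not the near-optimal algorithm analyzed in the paper.

```python
import random

def comparison_descent(f, x0, steps=2000, step=1.0, noise=0.1,
                       votes=15, seed=0):
    """Toy derivative-free optimizer driven only by noisy Boolean
    comparisons: propose a random signed coordinate step, accept it if
    a majority of noisy comparisons judge the new point better, and
    anneal the step size on rejection. Illustrative sketch only."""
    rng = random.Random(seed)
    x = list(x0)
    d = len(x)
    for _ in range(steps):
        # Propose a move along a random signed coordinate direction.
        i = rng.randrange(d)
        s = rng.choice([-1.0, 1.0])
        y = list(x)
        y[i] += s * step
        # Noisy comparison oracle: a vote for y is cast when the
        # noise-corrupted sign of f(y) - f(x) is negative.
        wins = sum((f(y) - f(x) + rng.gauss(0.0, noise)) < 0
                   for _ in range(votes))
        if wins > votes // 2:
            x = y            # accepted: y judged better than x
        else:
            step *= 0.95     # rejected: shrink the step size
    return x
```

On a strongly convex objective such as a quadratic, the iterate drifts toward the minimizer even though the routine never observes a function value directly, only comparison outcomes.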

A framework for Multi-A(rmed)/B(andit) testing with online FDR control

Nov 18, 2017

Fanny Yang, Aaditya Ramdas, Kevin Jamieson, Martin J. Wainwright

Adaptive Sampling for Convex Regression

Aug 26, 2018

Max Simchowitz, Kevin Jamieson, Jordan W. Suchow, Thomas L. Griffiths

Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization

Jun 18, 2018

Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, Ameet Talwalkar

Massively Parallel Hyperparameter Tuning

Oct 17, 2018

Liam Li, Kevin Jamieson, Afshin Rostamizadeh, Ekaterina Gonina, Moritz Hardt, Benjamin Recht, Ameet Talwalkar

Comparing Human-Centric and Robot-Centric Sampling for Robot Deep Learning from Demonstrations

Mar 29, 2017

Michael Laskey, Caleb Chuck, Jonathan Lee, Jeffrey Mahler, Sanjay Krishnan, Kevin Jamieson, Anca Dragan, Ken Goldberg
