Detection of Premature Ventricular Contractions Using Densely Connected Deep Convolutional Neural Network with Spatial Pyramid Pooling Layer

Jun 27, 2018

Jianning Li

* 7 figures, 4 tables

Classification and its applications for drug-target interaction identification

Mar 12, 2015

Jian-Ping Mei, Chee-Keong Kwoh, Peng Yang, Xiao-Li Li

A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization

Oct 27, 2018

Zhize Li, Jian Li

* 32nd Conference on Neural Information Processing Systems (NIPS 2018)

Towards Instance Optimal Bounds for Best Arm Identification

May 24, 2017

Lijie Chen, Jian Li, Mingda Qiao

In the classical best arm identification (Best-$1$-Arm) problem, we are given $n$ stochastic bandit arms, each associated with a reward distribution with an unknown mean. We would like to identify the arm with the largest mean with probability at least $1-\delta$, using as few samples as possible. Understanding the sample complexity of Best-$1$-Arm has attracted significant attention over the last decade. However, the exact sample complexity of the problem is still unknown. Recently, Chen and Li made the gap-entropy conjecture concerning the instance sample complexity of Best-$1$-Arm. Given an instance $I$, let $\mu_{[i]}$ be the $i$th largest mean and $\Delta_{[i]}=\mu_{[1]}-\mu_{[i]}$ be the corresponding gap. $H(I)=\sum_{i=2}^n\Delta_{[i]}^{-2}$ is the complexity of the instance. The gap-entropy conjecture states that $\Omega\left(H(I)\cdot\left(\ln\delta^{-1}+\mathsf{Ent}(I)\right)\right)$ is an instance lower bound, where $\mathsf{Ent}(I)$ is an entropy-like term determined by the gaps, and there is a $\delta$-correct algorithm for Best-$1$-Arm with sample complexity $O\left(H(I)\cdot\left(\ln\delta^{-1}+\mathsf{Ent}(I)\right)+\Delta_{[2]}^{-2}\ln\ln\Delta_{[2]}^{-1}\right)$. If the conjecture is true, we would have a complete understanding of the instance-wise sample complexity of Best-$1$-Arm. We make significant progress towards the resolution of the gap-entropy conjecture. For the upper bound, we provide a highly nontrivial algorithm which requires \[O\left(H(I)\cdot\left(\ln\delta^{-1} +\mathsf{Ent}(I)\right)+\Delta_{[2]}^{-2}\ln\ln\Delta_{[2]}^{-1}\mathrm{polylog}(n,\delta^{-1})\right)\] samples in expectation. For the lower bound, we show that for any Gaussian Best-$1$-Arm instance with gaps of the form $2^{-k}$, any $\delta$-correct monotone algorithm requires $\Omega\left(H(I)\cdot\left(\ln\delta^{-1} + \mathsf{Ent}(I)\right)\right)$ samples in expectation.

* Accepted to COLT 2017

Gradient Descent Maximizes the Margin of Homogeneous Neural Networks

Jun 13, 2019

Kaifeng Lyu, Jian Li

* 35 pages, 7 figures

Open Problem: Best Arm Identification: Almost Instance-Wise Optimality and the Gap Entropy Conjecture

May 27, 2016

Lijie Chen, Jian Li

The best arm identification problem (BEST-1-ARM) is the most basic pure exploration problem in stochastic multi-armed bandits. The problem has a long history and has attracted significant attention over the last decade. However, we do not yet have a complete understanding of the optimal sample complexity of the problem: The state-of-the-art algorithms achieve a sample complexity of $O(\sum_{i=2}^{n} \Delta_{i}^{-2}(\ln\delta^{-1} + \ln\ln\Delta_i^{-1}))$ ($\Delta_{i}$ is the difference between the largest mean and the $i^{th}$ mean), while the best known lower bound is $\Omega(\sum_{i=2}^{n} \Delta_{i}^{-2}\ln\delta^{-1})$ for general instances and $\Omega(\Delta^{-2} \ln\ln \Delta^{-1})$ for the two-arm instances. We propose to study the instance-wise optimality for the BEST-1-ARM problem. Previous work has proved that it is impossible to have an instance optimal algorithm for the 2-arm problem. However, we conjecture that modulo the additive term $\Omega(\Delta_2^{-2} \ln\ln \Delta_2^{-1})$ (which is an upper bound and worst case lower bound for the 2-arm problem), there is an instance optimal algorithm for BEST-1-ARM. Moreover, we introduce a new quantity, called the gap entropy for a best-arm problem instance, and conjecture that it is the instance-wise lower bound. Hence, resolving this conjecture would provide a final answer to this old and basic problem.

* To appear in COLT 2016 Open Problems
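The gap between the state-of-the-art upper bound and the best known general lower bound quoted in the abstract can be evaluated numerically for a toy instance. A hedged sketch, assuming every gap $\Delta_i < 1/e$ so that the $\ln\ln\Delta_i^{-1}$ term is positive (function names are illustrative, not from the paper):

```python
import math

def upper_bound_term(means, delta):
    """State-of-the-art upper bound quoted in the abstract:
    sum_{i>=2} Delta_i^{-2} * (ln(1/delta) + ln ln(1/Delta_i)).
    Assumes every gap is below 1/e so ln ln(1/Delta_i) > 0."""
    mu = sorted(means, reverse=True)
    total = 0.0
    for m in mu[1:]:
        gap = mu[0] - m
        total += gap ** -2 * (math.log(1 / delta) + math.log(math.log(1 / gap)))
    return total

def lower_bound_term(means, delta):
    """Best known general lower bound: sum_{i>=2} Delta_i^{-2} * ln(1/delta)."""
    mu = sorted(means, reverse=True)
    return sum((mu[0] - m) ** -2 * math.log(1 / delta) for m in mu[1:])

# Toy instance with gaps 0.05, 0.1, 0.2 (all below 1/e).
means, delta = [0.9, 0.85, 0.8, 0.7], 0.05
print(upper_bound_term(means, delta) > lower_bound_term(means, delta))  # True
```

The two expressions differ exactly by the $\ln\ln\Delta_i^{-1}$ terms; the conjectured gap entropy is meant to close this remaining slack.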

HPILN: A feature learning framework for cross-modality person re-identification

Jun 07, 2019

Jian-Wu Lin, Hao Li

Policy Search by Target Distribution Learning for Continuous Control

May 27, 2019

Chuheng Zhang, Yuanqi Li, Jian Li

Stochastic Gradient Hamiltonian Monte Carlo with Variance Reduction for Bayesian Inference

Mar 29, 2018

Zhize Li, Tianyi Zhang, Jian Li

* 20 pages

Spatial Group-wise Enhance: Improving Semantic Feature Learning in Convolutional Networks

May 25, 2019

Xiang Li, Xiaolin Hu, Jian Yang

* Code available at: https://github.com/implus/PytorchInsight

On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning

Feb 02, 2019

Jian Li, Xuanyuan Luo, Mingda Qiao

Max-Diversity Distributed Learning: Theory and Algorithms

Jan 18, 2019

Yong Liu, Jian Li, Weiping Wang

Multi-scale 3D Convolution Network for Video Based Person Re-Identification

Nov 19, 2018

Jianing Li, Shiliang Zhang, Tiejun Huang

* AAAI, 2019

Learning Spectral Transform Network on 3D Surface for Non-rigid Shape Analysis

Oct 21, 2018

Ruixuan Yu, Jian Sun, Huibin Li

* 16 pages, 3 figures
