Detection of Premature Ventricular Contractions Using Densely Connected Deep Convolutional Neural Network with Spatial Pyramid Pooling Layer

Jun 27, 2018

Jianning Li


* 7 figures, 4 tables


Classification and its applications for drug-target interaction identification

Mar 12, 2015

Jian-Ping Mei, Chee-Keong Kwoh, Peng Yang, Xiao-Li Li



A Simple Proximal Stochastic Gradient Method for Nonsmooth Nonconvex Optimization

Oct 27, 2018

Zhize Li, Jian Li

We analyze stochastic gradient algorithms for optimizing nonconvex, nonsmooth finite-sum problems, in which the objective function is the sum of a differentiable (possibly nonconvex) component and a possibly non-differentiable but convex component. We propose a proximal stochastic gradient algorithm based on variance reduction, called ProxSVRG+. Our main contribution lies in the analysis of ProxSVRG+: it recovers several existing convergence results and improves or generalizes them in terms of the number of stochastic gradient oracle calls and proximal oracle calls. In particular, ProxSVRG+ generalizes the best results given by the SCSG algorithm, recently proposed by [Lei et al., NIPS'17] for the smooth nonconvex case. ProxSVRG+ is also more straightforward than SCSG and admits a simpler analysis. Moreover, ProxSVRG+ outperforms deterministic proximal gradient descent (ProxGD) for a wide range of minibatch sizes, which partially resolves an open problem posed in [Reddi et al., NIPS'16], and it uses far fewer proximal oracle calls than ProxSVRG [Reddi et al., NIPS'16]. For nonconvex functions satisfying the Polyak-\L{}ojasiewicz (PL) condition, we prove that ProxSVRG+ achieves a global linear convergence rate without restarts, unlike ProxSVRG; it can therefore \emph{automatically} switch to the faster linear rate in any region where the objective satisfies the PL condition locally. In this setting ProxSVRG+ also improves on ProxGD and ProxSVRG/SAGA, and generalizes the results of SCSG. Finally, we conduct several experiments, and the results are consistent with the theory.
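To make the variance-reduction idea concrete, here is a minimal sketch of a proximal SVRG-style loop on a toy lasso objective (smooth least-squares part plus a convex $\ell_1$ part). This is my own illustrative code, not the paper's implementation; the function names, step size, and batch schedule are assumptions, and ProxSVRG+ itself differs in how it sizes batches and epochs.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def prox_svrg(A, b, lam=0.1, eta=0.01, epochs=10, inner=None, batch=8, seed=0):
    """Sketch of a proximal SVRG loop for min_x (1/2n)||Ax-b||^2 + lam*||x||_1.

    Each epoch takes a full-gradient snapshot; inner steps use the
    variance-reduced estimator v = g_B(x) - g_B(x_snap) + full_grad,
    then apply the proximal (soft-thresholding) step.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    inner = inner or n
    x = np.zeros(d)
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n          # snapshot full gradient
        for _ in range(inner):
            idx = rng.integers(0, n, size=batch)
            Ai = A[idx]
            g  = Ai.T @ (Ai @ x - b[idx]) / batch       # minibatch grad at x
            gs = Ai.T @ (Ai @ x_snap - b[idx]) / batch  # same batch at snapshot
            v = g - gs + full_grad                      # variance-reduced estimator
            x = soft_threshold(x - eta * v, eta * lam)  # proximal step
    return x
```

The estimator `v` is unbiased for the full gradient while its variance shrinks as `x` approaches the snapshot, which is what lets such methods beat both plain proximal SGD and deterministic ProxGD for suitable minibatch sizes.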

* 32nd Conference on Neural Information Processing Systems (NIPS 2018)


Towards Instance Optimal Bounds for Best Arm Identification

May 24, 2017

Lijie Chen, Jian Li, Mingda Qiao

In the classical best arm identification (Best-$1$-Arm) problem, we are given $n$ stochastic bandit arms, each associated with a reward distribution with an unknown mean. We would like to identify the arm with the largest mean with probability at least $1-\delta$, using as few samples as possible. Understanding the sample complexity of Best-$1$-Arm has attracted significant attention over the past decade. However, the exact sample complexity of the problem is still unknown. Recently, Chen and Li posed the gap-entropy conjecture concerning the instance sample complexity of Best-$1$-Arm. Given an instance $I$, let $\mu_{[i]}$ be the $i$th largest mean and $\Delta_{[i]}=\mu_{[1]}-\mu_{[i]}$ be the corresponding gap; $H(I)=\sum_{i=2}^n\Delta_{[i]}^{-2}$ is the complexity of the instance. The gap-entropy conjecture states that $\Omega\left(H(I)\cdot\left(\ln\delta^{-1}+\mathsf{Ent}(I)\right)\right)$ is an instance lower bound, where $\mathsf{Ent}(I)$ is an entropy-like term determined by the gaps, and that there is a $\delta$-correct algorithm for Best-$1$-Arm with sample complexity $O\left(H(I)\cdot\left(\ln\delta^{-1}+\mathsf{Ent}(I)\right)+\Delta_{[2]}^{-2}\ln\ln\Delta_{[2]}^{-1}\right)$. If the conjecture is true, we would have a complete understanding of the instance-wise sample complexity of Best-$1$-Arm. We make significant progress towards the resolution of the gap-entropy conjecture. For the upper bound, we provide a highly nontrivial algorithm which requires \[O\left(H(I)\cdot\left(\ln\delta^{-1} +\mathsf{Ent}(I)\right)+\Delta_{[2]}^{-2}\ln\ln\Delta_{[2]}^{-1}\mathrm{polylog}(n,\delta^{-1})\right)\] samples in expectation. For the lower bound, we show that for any Gaussian Best-$1$-Arm instance with gaps of the form $2^{-k}$, any $\delta$-correct monotone algorithm requires $\Omega\left(H(I)\cdot\left(\ln\delta^{-1} + \mathsf{Ent}(I)\right)\right)$ samples in expectation.
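The instance hardness $H(I)$ defined above is straightforward to evaluate from the arm means. The following small helper, written by me for illustration (the function name is not from the paper), computes the gaps $\Delta_{[i]}$ and the sum $\sum_{i=2}^n \Delta_{[i]}^{-2}$:

```python
import numpy as np

def instance_complexity(means):
    """Gap-based hardness H(I) = sum_{i>=2} Delta_[i]^{-2} for Best-1-Arm.

    means: arm means mu_1..mu_n in any order. Delta_[i] is the gap between
    the largest mean and the i-th largest mean, matching the abstract.
    """
    mu = np.sort(np.asarray(means, dtype=float))[::-1]  # descending: mu_[1] >= mu_[2] >= ...
    gaps = mu[0] - mu[1:]                               # Delta_[2], ..., Delta_[n]
    if np.any(gaps <= 0):
        raise ValueError("best arm must be unique (all gaps positive)")
    return float(np.sum(gaps ** -2.0))
```

For example, an instance with means (1.0, 0.5, 0.5) has two gaps of 0.5 and hence $H(I) = 2 \cdot 0.5^{-2} = 8$; halving every gap quadruples $H(I)$, reflecting how closely-matched arms dominate the sample cost.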

* Accepted to COLT 2017


Open Problem: Best Arm Identification: Almost Instance-Wise Optimality and the Gap Entropy Conjecture

May 27, 2016

Lijie Chen, Jian Li

The best arm identification problem (BEST-1-ARM) is the most basic pure exploration problem in stochastic multi-armed bandits. The problem has a long history and has attracted significant attention over the past decade. However, we do not yet have a complete understanding of its optimal sample complexity: the state-of-the-art algorithms achieve a sample complexity of $O(\sum_{i=2}^{n} \Delta_{i}^{-2}(\ln\delta^{-1} + \ln\ln\Delta_i^{-1}))$ (where $\Delta_{i}$ is the difference between the largest mean and the $i^{th}$ mean), while the best known lower bound is $\Omega(\sum_{i=2}^{n} \Delta_{i}^{-2}\ln\delta^{-1})$ for general instances and $\Omega(\Delta^{-2} \ln\ln \Delta^{-1})$ for two-arm instances. We propose to study the instance-wise optimality of the BEST-1-ARM problem. Previous work has proved that it is impossible to have an instance optimal algorithm for the 2-arm problem. However, we conjecture that, modulo the additive term $\Omega(\Delta_2^{-2} \ln\ln \Delta_2^{-1})$ (which is an upper bound and worst-case lower bound for the 2-arm problem), there is an instance optimal algorithm for BEST-1-ARM. Moreover, we introduce a new quantity, called the gap entropy of a best-arm problem instance, and conjecture that it is the instance-wise lower bound. Hence, resolving this conjecture would provide a final answer to this old and basic problem.
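The state-of-the-art upper bound quoted above is a closed-form expression in the gaps, so it can be evaluated numerically for a concrete instance. Below is a hedged helper of my own (name and interface are assumptions, and constants are dropped, so this is the raw sum rather than an exact sample count):

```python
import numpy as np

def best1arm_upper_bound(means, delta):
    """Evaluate sum_{i>=2} Delta_i^{-2} (ln(1/delta) + ln ln(1/Delta_i)),
    the gap-dependent sum inside the O(.) upper bound from the abstract.
    Constant factors are omitted.
    """
    mu = np.sort(np.asarray(means, dtype=float))[::-1]
    gaps = mu[0] - mu[1:]                 # Delta_2, ..., Delta_n
    if np.any(gaps <= 0) or np.any(gaps >= 1):
        raise ValueError("gaps must lie in (0,1) so ln ln(1/Delta_i) is defined")
    return float(np.sum(gaps ** -2 * (np.log(1 / delta) + np.log(np.log(1 / gaps)))))
```

Comparing this against the plain $\sum_i \Delta_i^{-2}\ln\delta^{-1}$ lower-bound sum for the same instance shows exactly where the $\ln\ln\Delta_i^{-1}$ slack sits, which is the gap the conjectured gap-entropy bound aims to close.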

* To appear in COLT 2016 Open Problems


Stochastic Gradient Hamiltonian Monte Carlo with Variance Reduction for Bayesian Inference

Mar 29, 2018

Zhize Li, Tianyi Zhang, Jian Li


* 20 pages


On Generalization Error Bounds of Noisy Gradient Methods for Non-Convex Learning

Feb 02, 2019

Jian Li, Xuanyuan Luo, Mingda Qiao



Max-Diversity Distributed Learning: Theory and Algorithms

Jan 18, 2019

Yong Liu, Jian Li, Weiping Wang



Multi-scale 3D Convolution Network for Video Based Person Re-Identification

Nov 19, 2018

Jianing Li, Shiliang Zhang, Tiejun Huang


* AAAI, 2019


Learning Spectral Transform Network on 3D Surface for Non-rigid Shape Analysis

Oct 21, 2018

Ruixuan Yu, Jian Sun, Huibin Li


* 16 pages, 3 figures


Triple Attention Mixed Link Network for Single Image Super Resolution

Oct 08, 2018

Xi Cheng, Xiang Li, Jian Yang



SVM via Saddle Point Optimization: New Bounds and Distributed Algorithms

Jan 28, 2018

Yifei Jin, Lingxiao Huang, Jian Li



Learning Gradient Descent: Better Generalization and Longer Horizons

Jun 10, 2017

Kaifeng Lv, Shunhua Jiang, Jian Li


* Accepted to ICML 2017, 9 pages, 9 figures, 4 tables


Practical Algorithms for Best-K Identification in Multi-Armed Bandits

May 19, 2017

Haotian Jiang, Jian Li, Mingda Qiao



Nearly Instance Optimal Sample Complexity Bounds for Top-k Arm Selection

Feb 13, 2017

Lijie Chen, Jian Li, Mingda Qiao


* Accepted by AISTATS 2017
