LightLDA: Big Topic Models on Modest Compute Clusters

Dec 04, 2014

Jinhui Yuan, Fei Gao, Qirong Ho, Wei Dai, Jinliang Wei, Xun Zheng, Eric P. Xing, Tie-Yan Liu, Wei-Ying Ma


FilterReg: Robust and Efficient Probabilistic Point-Set Registration using Gaussian Filter and Twist Parameterization

Nov 26, 2018

Wei Gao, Russ Tedrake


* The video demo and source code are on https://sites.google.com/view/filterreg/home


Optical Mapping Near-eye Three-dimensional Display with Correct Focus Cues

May 24, 2017

Wei Cui, Liang Gao


* 5 pages, 6 figures, 2 tables, short article for Optics Letters


AUC (area under the ROC curve) is an important evaluation criterion, widely used in learning tasks such as class-imbalance learning, cost-sensitive learning, and learning to rank. Many learning approaches try to optimize AUC directly, but owing to its non-convexity and discontinuity, almost all of them work with surrogate loss functions. The consistency of these surrogates with AUC is therefore crucial, yet it had been almost untouched before. In this paper, we provide a sufficient condition for the asymptotic consistency of learning approaches based on surrogate loss functions. Based on this result, we prove that exponential loss and logistic loss are consistent with AUC, but hinge loss is inconsistent. We then derive the $q$-norm hinge loss and general hinge loss, which are consistent with AUC. We also derive consistent bounds for exponential loss and logistic loss, and obtain consistent bounds for many surrogate loss functions under the non-noise setting. Further, we disclose an equivalence between the exponential surrogate loss of AUC and the exponential surrogate loss of accuracy; a straightforward consequence of this finding is that AdaBoost and RankBoost are equivalent.
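The pairwise surrogate setup described in this abstract can be sketched concretely: AUC counts correctly ranked positive/negative pairs, and a surrogate replaces the 0/1 pair indicator with a convex loss of the score difference. The function names and toy data below are illustrative, not taken from the paper:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    # Fraction of (positive, negative) pairs ranked correctly (ties count half).
    diff = scores_pos[:, None] - scores_neg[None, :]
    return ((diff > 0) + 0.5 * (diff == 0)).mean()

def pairwise_surrogate(scores_pos, scores_neg, phi):
    # Average surrogate loss phi over all positive/negative score differences.
    diff = scores_pos[:, None] - scores_neg[None, :]
    return phi(diff).mean()

# The three surrogates discussed in the abstract:
exp_loss = lambda t: np.exp(-t)                 # consistent with AUC
logistic = lambda t: np.log1p(np.exp(-t))       # consistent with AUC
hinge    = lambda t: np.maximum(0.0, 1.0 - t)   # shown inconsistent in the paper
```

Minimizing `pairwise_surrogate` with `exp_loss` over a hypothesis class is the RankBoost-style objective the equivalence result refers to.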

Deep neural networks have achieved great success in various real applications, and many algorithmic and implementation techniques have been developed; however, the theoretical understanding of many aspects of deep neural networks is far from clear. A particularly interesting issue is the usefulness of dropout, which was motivated by the intuition of preventing complex co-adaptation of feature detectors. In this paper, we study the Rademacher complexity of different types of dropout. Our theoretical results disclose that for shallow neural networks (with one or no hidden layer) dropout reduces the Rademacher complexity polynomially, whereas for deep neural networks it can, remarkably, lead to an exponential reduction of the Rademacher complexity.
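As a point of reference for the shallow setting the abstract analyzes, here is a minimal sketch of standard (inverted) dropout on a one-hidden-layer network; the layer sizes and keep probability `p` are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, W2, p=0.5, train=True):
    # One-hidden-layer network with inverted dropout on the hidden units.
    h = np.maximum(0.0, x @ W1)          # ReLU hidden layer
    if train:
        mask = rng.random(h.shape) < p   # keep each hidden unit with prob. p
        h = h * mask / p                 # rescale so E[h] matches test time
    return h @ W2
```

With `p = 1.0` no units are dropped and training and test forward passes coincide, which is the noise-free baseline the complexity bounds are compared against.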

* 20 pages

* Artificial Intelligence 203:1-18 2013

* 35 pages


Structure Learning of Deep Neural Networks with Q-Learning

Oct 31, 2018

Guoqiang Zhong, Wencong Jiao, Wei Gao


Reinforcement Learning Based Argument Component Detection

Feb 21, 2017

Yang Gao, Hao Wang, Chen Zhang, Wei Wang


On the Resistance of Nearest Neighbor to Random Noisy Labels

Sep 13, 2018

Wei Gao, Bin-Bin Yang, Zhi-Hua Zhou

Nearest neighbor has always been one of the most appealing non-parametric approaches in machine learning, pattern recognition, computer vision, etc. Previous empirical studies partly show that nearest neighbor is resistant to noise, yet a deep analysis has been lacking. This work presents finite-sample and distribution-dependent bounds on the consistency of nearest neighbor in the random-noise setting. The theoretical results show that, for asymmetric noises, k-nearest neighbor is robust enough to classify most data correctly, except for a handful of examples whose labels are totally misled by random noises. For symmetric noises, however, k-nearest neighbor achieves the same consistency rate as in the noise-free setting, which verifies the resistance of k-nearest neighbor to random noisy labels. Motivated by the theoretical analysis, we propose the Robust k-Nearest Neighbor (RkNN) approach to deal with noisy labels. The basic idea is to make unilateral corrections to examples whose labels are totally misled by random noises, and to classify the others directly by exploiting the robustness of k-nearest neighbor. We verify the effectiveness of the proposed algorithm both theoretically and empirically.
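The unilateral-correction idea can be illustrated with a toy sketch. This is not the authors' exact RkNN algorithm: the Euclidean metric, the disagreement threshold, and the majority-vote correction rule below are placeholder assumptions chosen only to show the two-stage structure (correct suspicious labels, then classify with plain k-NN):

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=5):
    # Plain k-nearest-neighbor: Euclidean distance, majority vote.
    preds = []
    for x in X_test:
        idx = np.argsort(np.linalg.norm(X_train - x, axis=1))[:k]
        preds.append(np.bincount(y_train[idx]).argmax())
    return np.array(preds)

def correct_labels(X, y, k=5, threshold=1.0):
    # Unilateral correction (in the spirit of RkNN): flip an example's label
    # only when a fraction >= threshold of its k nearest neighbors disagree,
    # i.e. when the label looks totally misled by noise.
    y_new = y.copy()
    for i, x in enumerate(X):
        d = np.linalg.norm(X - x, axis=1)
        d[i] = np.inf                      # exclude the point itself
        idx = np.argsort(d)[:k]
        if np.mean(y[idx] != y[i]) >= threshold:
            y_new[i] = np.bincount(y[idx]).argmax()
    return y_new
```

With `threshold=1.0` the correction is deliberately conservative: only examples whose entire neighborhood disagrees get flipped, leaving the rest to k-NN's native robustness.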

* 35 pages


Generative Adversarial Networks with Decoder-Encoder Output Noise

Jul 11, 2018

Guoqiang Zhong, Wei Gao, Yongbin Liu, Youzhao Yang


Person Transfer GAN to Bridge Domain Gap for Person Re-Identification

Jun 25, 2018

Longhui Wei, Shiliang Zhang, Wen Gao, Qi Tian


* 10 pages, 9 figures; accepted in CVPR 2018


Modern Physiognomy: An Investigation on Predicting Personality Traits and Intelligence from the Human Face

Apr 26, 2016

Rizhen Qin, Wei Gao, Huarong Xu, Zhanyi Hu


Online dictionary learning for kernel LMS. Analysis and forward-backward splitting algorithm

Jun 22, 2013

Wei Gao, Jie Chen, Cédric Richard, Jianguo Huang


Fast single image super-resolution based on sigmoid transformation

Nov 05, 2017

Longguang Wang, Zaiping Lin, Jinyan Gao, Xinpu Deng, Wei An


Effects of the optimisation of the margin distribution on generalisation in deep architectures

Apr 19, 2017

Lech Szymanski, Brendan McCane, Wei Gao, Zhi-Hua Zhou


* Proceedings of the 30th International Conference on Machine Learning


Generating Multiple Diverse Responses for Short-Text Conversation

Nov 29, 2018

Jun Gao, Wei Bi, Xiaojiang Liu, Junhui Li, Shuming Shi


Modeling Attention Flow on Graphs

Nov 01, 2018

Xiaoran Xu, Songpeng Zu, Chengliang Gao, Yuan Zhang, Wei Feng


Local Patch Encoding-Based Method for Single Image Super-Resolution

Jul 04, 2018

Yang Zhao, Ronggang Wang, Wei Jia, Jianchao Yang, Wenmin Wang, Wen Gao


* Y. Zhao, R. Wang, W. Jia, J. Yang, W. Wang , W. Gao, Local patch encoding-based method for single image super-resolution, Information Sciences, vol.433, pp.292-305, 2018

* 20 pages, 8 figures
