
This paper studies robust regression under Huber's $\epsilon$-contamination model. We consider estimators that maximize multivariate regression depth functions. These estimators are shown to achieve minimax rates under the $\epsilon$-contamination model for various regression problems, including nonparametric regression, sparse linear regression, and reduced rank regression. We also discuss a general notion of depth function for linear operators that has potential applications in robust functional linear regression.
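As a concrete illustration of the depth idea, here is a minimal sketch of Rousseeuw–Hubert regression depth in the simple one-covariate case. The paper works with multivariate regression depth functions that generalize this; the function below is only an $O(n^2)$ reference implementation, not the paper's estimator.

```python
import numpy as np

def regression_depth(a, b, x, y):
    """Rousseeuw-Hubert regression depth of the candidate fit y = a + b*x.

    For every vertical line v, count how few observations the fit must
    "pass" to be tilted into a nonfit; the depth is the minimum count.
    A deeper fit is harder to invalidate, so a depth estimator maximizes
    this quantity over (a, b).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = y - (a + b * x)                       # residuals of the candidate fit
    xs = np.sort(x)
    # candidate tilting points: before, between, and after the sorted x's
    cand = np.concatenate(([xs[0] - 1.0], (xs[:-1] + xs[1:]) / 2, [xs[-1] + 1.0]))
    depth = len(x)
    for v in cand:
        left, right = x <= v, x > v
        pos, neg = r >= 0, r <= 0
        d1 = np.sum(left & pos) + np.sum(right & neg)
        d2 = np.sum(left & neg) + np.sum(right & pos)
        depth = min(depth, d1, d2)
    return int(depth)
```

A fit that interpolates all points has depth $n$ (no tilting invalidates it), while a fit with all residuals of one sign has depth 0; a depth-maximizing estimate can then be found by search over candidate fits.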

Generative Adversarial Nets for Robust Scatter Estimation: A Proper Scoring Rule Perspective

Mar 05, 2019

Chao Gao, Yuan Yao, Weizhi Zhu



Minimax Rates in Network Analysis: Graphon Estimation, Community Detection and Hypothesis Testing

Nov 14, 2018

Chao Gao, Zongming Ma



Semi-automated Signal Surveying Using Smartphones and Floorplans

Nov 17, 2017

Chao Gao, Robert Harle



Minimax Optimal Convergence Rates for Estimating Ground Truth from Crowdsourced Labels

May 30, 2016

Chao Gao, Dengyong Zhou



N-fold Superposition: Improving Neural Networks by Reducing the Noise in Feature Maps

May 03, 2018

Yang Liu, Qiang Qu, Chao Gao


* 7 pages, 5 figures, submitted to ICALIP 2018


In many machine learning applications, crowdsourcing has become the primary means for label collection. In this paper, we study the optimal error rate for aggregating labels provided by a set of non-expert workers. Under the classic Dawid-Skene model, we establish matching upper and lower bounds with an exact exponent $mI(\pi)$, in which $m$ is the number of workers and $I(\pi)$ is the average Chernoff information that characterizes the workers' collective ability. Such an exact characterization of the error exponent allows us to state a precise sample size requirement $m>\frac{1}{I(\pi)}\log\frac{1}{\epsilon}$ in order to achieve an $\epsilon$ misclassification error. In addition, our results imply the optimality of various EM algorithms for crowdsourcing initialized by consistent estimators.

* To appear in the Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016

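The sample-size requirement $m>\frac{1}{I(\pi)}\log\frac{1}{\epsilon}$ can be made concrete in the homogeneous special case where every worker labels correctly with probability $p$: the Chernoff information between $\mathrm{Bern}(p)$ and $\mathrm{Bern}(1-p)$ then has the closed form $-\log(2\sqrt{p(1-p)})$, attained at exponent $1/2$ by symmetry. A minimal sketch (the general $I(\pi)$ of the paper averages over a distribution of worker abilities):

```python
import math

def chernoff_information(p):
    """Chernoff information between Bernoulli(p) and Bernoulli(1-p).

    For this symmetric pair the minimizing exponent is 1/2, giving the
    closed form -log(2*sqrt(p*(1-p))); it is 0 at p = 1/2 (uninformative
    workers) and grows as p moves toward 0 or 1.
    """
    return -math.log(2.0 * math.sqrt(p * (1.0 - p)))

def workers_needed(p, eps):
    """Smallest integer m satisfying the strict bound
    m > log(1/eps) / I(pi), for homogeneous workers of accuracy p."""
    return math.floor(math.log(1.0 / eps) / chernoff_information(p)) + 1

# e.g. workers_needed(0.9, 0.01) -> 10
```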

Posterior Contraction Rates of the Phylogenetic Indian Buffet Processes

May 19, 2015

Mengjie Chen, Chao Gao, Hongyu Zhao



Continual Match Based Training in Pommerman: Technical Report

Dec 18, 2018

Peng Peng, Liang Pang, Yufeng Yuan, Chao Gao


* 8 pages, 7 figures


Relations among Some Low Rank Subspace Recovery Models

Dec 06, 2014

Hongyang Zhang, Zhouchen Lin, Chao Zhang, Junbin Gao

Recovering intrinsic low-dimensional subspaces from data distributed on them is a key preprocessing step for many applications. In recent years, there has been much work modeling subspace recovery as a low rank minimization problem. We find that some representative models, such as Robust Principal Component Analysis (R-PCA), Robust Low Rank Representation (R-LRR), and Robust Latent Low Rank Representation (R-LatLRR), are actually deeply connected. More specifically, we discover that once a solution to one of the models is obtained, we can obtain solutions to the other models in closed form. Since R-PCA is the simplest, our discovery makes it the center of low rank subspace recovery models. Our work has two important implications. First, R-PCA has a solid theoretical foundation. Under certain conditions, we can find better solutions to these low rank models with overwhelming probability, although the models are non-convex. Second, we can obtain significantly faster algorithms for these models by solving R-PCA first. The computational cost can be further cut by applying low complexity randomized algorithms, e.g., our novel $\ell_{2,1}$ filtering algorithm, to R-PCA. Experiments verify the advantages of our algorithms over other state-of-the-art ones that are based on the alternating direction method.

* Submitted to Neural Computation
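Since the closed-form reductions make R-PCA the pivot model, a generic solver for the standard R-PCA program $\min_{L,S} \|L\|_* + \lambda\|S\|_1$ s.t. $L+S=M$ may be useful context. Below is a minimal ADMM sketch; this is the textbook alternating-direction approach (the very baseline the paper accelerates), not the paper's $\ell_{2,1}$ filtering algorithm, and the default $\lambda$ and penalty $\mu$ are common heuristics assumed here rather than taken from the paper.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca(M, lam=None, mu=None, iters=500):
    """ADMM for the R-PCA program  min ||L||_* + lam*||S||_1  s.t.  L + S = M."""
    M = np.asarray(M, float)
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / np.abs(M).sum()
    S = np.zeros_like(M)
    Y = np.zeros_like(M)   # scaled dual variable for the constraint
    for _ in range(iters):
        # L-step: singular value thresholding of M - S + Y/mu
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * shrink(s, 1.0 / mu)) @ Vt
        # S-step: entrywise soft-thresholding
        S = shrink(M - L + Y / mu, lam / mu)
        # dual ascent on the constraint L + S = M
        Y = Y + mu * (M - L - S)
    return L, S
```

On a synthetic low-rank-plus-sparse matrix this recovers both components; the per-iteration SVD is exactly the cost the paper's randomized $\ell_{2,1}$ filtering is designed to avoid.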


The Importance of Norm Regularization in Linear Graph Embedding: Theoretical Analysis and Empirical Demonstration

Oct 12, 2018

Yihan Gao, Chao Zhang, Jian Peng, Aditya Parameswaran



Robust Estimation and Generative Adversarial Nets

Oct 07, 2018

Chao Gao, Jiyi Liu, Yuan Yao, Weizhi Zhu



Optimal Estimation and Completion of Matrices with Biclustering Structures

Oct 22, 2018

Chao Gao, Yu Lu, Zongming Ma, Harrison H. Zhou



Stochastic Canonical Correlation Analysis

Feb 21, 2017

Chao Gao, Dan Garber, Nathan Srebro, Jialei Wang, Weiran Wang

We tightly analyze the sample complexity of CCA, provide a learning algorithm that achieves optimal statistical performance in time linear in the required number of samples (up to log factors), as well as a streaming algorithm with similar guarantees.
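For reference, the batch CCA problem that the streaming algorithm targets reduces to whitening each view and taking an SVD: the canonical correlations are the singular values of $\Sigma_{xx}^{-1/2}\Sigma_{xy}\Sigma_{yy}^{-1/2}$. A minimal sketch of this classical reduction (the `reg` ridge term is an assumption added for numerical stability, not part of the paper's analysis):

```python
import numpy as np

def cca(X, Y, reg=1e-8):
    """Batch CCA via whitening + SVD.

    Returns the canonical correlations: the singular values of
    Sxx^{-1/2} Sxy Syy^{-1/2}, largest first.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n

    def inv_sqrt(S):
        # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(S)
        return (V * w ** -0.5) @ V.T

    return np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy), compute_uv=False)
```

The expensive pieces here, forming the covariances and whitening them, are exactly what a sample-efficient streaming algorithm must avoid recomputing from scratch.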


Sparse CCA via Precision Adjusted Iterative Thresholding

Nov 24, 2013

Mengjie Chen, Chao Gao, Zhao Ren, Harrison H. Zhou



Natural-Logarithm-Rectified Activation Function in Convolutional Neural Networks

Aug 25, 2019

Yang Liu, Jianpeng Zhang, Chao Gao, Jinghua Qu, Lixin Ji



A Sensitivity Analysis of Attention-Gated Convolutional Neural Networks for Sentence Classification

Aug 25, 2019

Yang Liu, Jianpeng Zhang, Chao Gao, Jinghua Qu, Lixin Ji

