Defending against Whitebox Adversarial Attacks via Randomized Discretization

Mar 25, 2019

Yuchen Zhang, Percy Liang

Adversarial perturbations dramatically decrease the accuracy of state-of-the-art image classifiers. In this paper, we propose and analyze a simple and computationally efficient defense strategy: inject random Gaussian noise, discretize each pixel, and then feed the result into any pre-trained classifier. Theoretically, we show that our randomized discretization strategy reduces the KL divergence between original and adversarial inputs, leading to a lower bound on the classification accuracy of any classifier against any (potentially whitebox) $\ell_\infty$-bounded adversarial attack. Empirically, we evaluate our defense on adversarial examples generated by a strong iterative PGD attack. On ImageNet, our defense is more robust than adversarially-trained networks and the winning defenses of the NIPS 2017 Adversarial Attacks & Defenses competition.
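The preprocessing defense described above can be sketched in a few lines. This is a simplified illustration rather than the paper's exact procedure: the noise scale `sigma`, the number of quantization `levels`, and the per-pixel uniform quantization grid are all assumptions made for the sketch.

```python
import numpy as np

def randomized_discretization(image, sigma=0.1, levels=8, rng=None):
    """Defense sketch: inject Gaussian noise, then discretize each pixel.

    `image` has values in [0, 1]; the output takes one of `levels`
    evenly spaced values per pixel and can be fed to any classifier.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = np.clip(image + rng.normal(0.0, sigma, size=image.shape), 0.0, 1.0)
    grid = np.linspace(0.0, 1.0, levels)
    # Snap every pixel to the nearest grid value.
    return grid[np.abs(noisy[..., None] - grid).argmin(axis=-1)]
```

Because the transform is randomized, an attacker cannot precompute the exact input the classifier will see, which is what drives the KL-divergence argument.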

* In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS)

A Non-asymptotic, Sharp, and User-friendly Reverse Chernoff-Cramèr Bound

Oct 21, 2018

Anru Zhang, Yuchen Zhou

The Chernoff-Cramèr bound is a widely used technique for analyzing the upper tail of a random variable based on its moment generating function. By elementary proofs, we develop a user-friendly reverse Chernoff-Cramèr bound that yields non-asymptotic lower tail bounds for generic random variables. The new reverse Chernoff-Cramèr bound is used to derive a series of results, including sharp lower tail bounds for sums of independent sub-Gaussian and sub-exponential random variables, which match the classic Hoeffding-type and Bernstein-type concentration inequalities, respectively. We also provide non-asymptotic matching upper and lower tail bounds for a suite of distributions, including the gamma, beta, (regular, weighted, and noncentral) chi-squared, binomial, Poisson, and Irwin-Hall distributions. We apply these results to develop matching upper and lower bounds for the extreme value expectation of sums of independent sub-Gaussian and sub-exponential random variables. A statistical application to sparse signal identification is also studied.
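For context, the classical Chernoff-Cramèr technique bounds the upper tail of a random variable $X$ by optimizing over its moment generating function:

```latex
\Pr(X \ge t) \;\le\; \inf_{\lambda \ge 0} \, e^{-\lambda t}\, \mathbb{E}\, e^{\lambda X}.
```

The paper's contribution is the reverse direction: non-asymptotic *lower* bounds on such tail probabilities, derived from the same moment generating function.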

Neural Ranking Models for Temporal Dependency Structure Parsing

Sep 02, 2018

Yuchen Zhang, Nianwen Xue

We design and build the first neural temporal dependency parser. It utilizes a neural ranking model with minimal feature engineering, and parses time expressions and events in a text into a temporal dependency tree structure. We evaluate our parser on two domains: news reports and narrative stories. In a parsing-only evaluation setup where gold time expressions and events are provided, our parser reaches F-scores of 0.81 and 0.70 on unlabeled and labeled parsing respectively, results that are highly competitive with alternative approaches. In an end-to-end evaluation setup where time expressions and events are automatically recognized, our parser beats two strong baselines on both data domains. Our experimental results and discussions shed light on the nature of temporal dependency structures in different domains and provide insights that we believe will be valuable to future research in this area.

* 11 pages, 2 figures, 7 tables; to appear in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP 2018)

Structured Interpretation of Temporal Relations

Temporal relations between events and time expressions in a document are often modeled in an unstructured manner, where relations between individual pairs of time expressions and events are considered in isolation. This often results in inconsistent and incomplete annotation and computational modeling. We propose a novel annotation approach in which events and time expressions in a document form a dependency tree, where each dependency relation corresponds to an instance of temporal anaphora with the antecedent as the parent and the anaphor as the child. We annotate a corpus of 235 documents using this approach in the two genres of news and narratives, with 48 documents doubly annotated. We report a stable and high inter-annotator agreement on the doubly annotated subset, validating our approach, and perform a quantitative comparison between the two genres over the entire corpus. We make this corpus publicly available.

* Yuchen Zhang and Nianwen Xue. 2018. Structured Interpretation of Temporal Relations. In Proceedings of the 11th Language Resources and Evaluation Conference (LREC-2018), Miyazaki, Japan

* 9 pages, 2 figures, 8 tables, LREC-2018


Splash: User-friendly Programming Interface for Parallelizing Stochastic Algorithms

Sep 23, 2015

Yuchen Zhang, Michael I. Jordan

Stochastic algorithms are efficient approaches to solving machine learning and optimization problems. In this paper, we propose a general framework called Splash for parallelizing stochastic algorithms on multi-node distributed systems. Splash consists of a programming interface and an execution engine. Using the programming interface, the user develops sequential stochastic algorithms without being concerned with any details of distributed computing. The algorithm is then automatically parallelized by a communication-efficient execution engine. We provide a theoretical justification of the optimal rate of convergence for parallelizing stochastic gradient descent. Splash is built on top of Apache Spark. The real-data experiments on logistic regression, collaborative filtering and topic modeling verify that Splash yields order-of-magnitude speedup over single-thread stochastic algorithms and over state-of-the-art implementations on Spark.

* redo experiments to learn bigger models; compare Splash with state-of-the-art implementations on Spark

Stochastic Primal-Dual Coordinate Method for Regularized Empirical Risk Minimization

Sep 09, 2015

Yuchen Zhang, Lin Xiao

We consider a generic convex optimization problem associated with regularized empirical risk minimization of linear predictors. The problem structure allows us to reformulate it as a convex-concave saddle point problem. We propose a stochastic primal-dual coordinate (SPDC) method, which alternates between maximizing over a randomly chosen dual variable and minimizing over the primal variable. An extrapolation step on the primal variable is performed to obtain an accelerated convergence rate. We also develop a mini-batch version of the SPDC method which facilitates parallel computing, and an extension with weighted sampling probabilities on the dual variables, which has a better complexity than uniform sampling on unnormalized data. Both theoretically and empirically, we show that the SPDC method has comparable or better performance than several state-of-the-art optimization methods.

Communication-Efficient Distributed Optimization of Self-Concordant Empirical Loss

Jan 01, 2015

Yuchen Zhang, Lin Xiao

We consider distributed convex optimization problems originating from sample average approximation of stochastic optimization, or empirical risk minimization in machine learning. We assume that each machine in the distributed computing system has access to a local empirical loss function, constructed with i.i.d. data sampled from a common distribution. We propose a communication-efficient distributed algorithm to minimize the overall empirical loss, which is the average of the local empirical losses. The algorithm is based on an inexact damped Newton method, where the inexact Newton steps are computed by a distributed preconditioned conjugate gradient method. We analyze its iteration complexity and communication efficiency for minimizing self-concordant empirical loss functions, and discuss the results for distributed ridge regression, logistic regression and binary classification with a smoothed hinge loss. In a standard setting for supervised learning, the required number of communication rounds of the algorithm does not increase with the sample size, and only grows slowly with the number of machines.
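The core numerical routine, a damped Newton step whose direction comes from conjugate gradient (CG) on Hessian-vector products, can be sketched on a single machine. This omits the distributed preconditioning and communication structure that are the paper's contribution; the function names and the `1/(1 + delta)` damping (with `delta` approximating the Newton decrement) are illustrative.

```python
import numpy as np

def conjugate_gradient(hvp, b, max_iters=50, tol=1e-12):
    """Solve H x = b given only Hessian-vector products hvp(p) = H p."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    for _ in range(max_iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_damped_newton(grad, hvp, w0, n_iters=10):
    """Damped Newton iteration with CG-computed (hence inexact) steps."""
    w = np.array(w0, dtype=float)
    for _ in range(n_iters):
        g = grad(w)
        v = conjugate_gradient(lambda p: hvp(w, p), g)
        # delta approximates the Newton decrement, controlling the damping.
        delta = np.sqrt(max(v @ hvp(w, v), 0.0))
        w -= v / (1.0 + delta)
    return w
```

CG only needs Hessian-vector products, which is what makes the step cheap to distribute: each machine can apply its local Hessian to the shared vector and the results are averaged.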

A Hitting Time Analysis of Stochastic Gradient Langevin Dynamics

Apr 09, 2018

Yuchen Zhang, Percy Liang, Moses Charikar

We study the Stochastic Gradient Langevin Dynamics (SGLD) algorithm for non-convex optimization. The algorithm performs stochastic gradient descent, where in each step it injects appropriately scaled Gaussian noise to the update. We analyze the algorithm's hitting time to an arbitrary subset of the parameter space. Two results follow from our general theory: First, we prove that for empirical risk minimization, if the empirical risk is point-wise close to the (smooth) population risk, then the algorithm achieves an approximate local minimum of the population risk in polynomial time, escaping suboptimal local minima that only exist in the empirical risk. Second, we show that SGLD improves on one of the best known learnability results for learning linear classifiers under the zero-one loss.
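The SGLD update itself is a one-liner: a gradient step plus appropriately scaled Gaussian noise. Below is a minimal single-chain sketch on a toy quadratic; the step size, inverse temperature `beta`, and iteration count are illustrative choices, and a full-batch gradient stands in for the stochastic gradient.

```python
import numpy as np

def sgld(grad, x0, step=1e-3, beta=10.0, n_iters=5000, rng=None):
    """Langevin update: gradient step plus Gaussian noise scaled by
    sqrt(2 * step / beta), where beta is the inverse temperature."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.array(x0, dtype=float)
    for _ in range(n_iters):
        x = x - step * grad(x) + np.sqrt(2.0 * step / beta) * rng.normal(size=x.shape)
    return x

# On the toy quadratic f(x) = ||x||^2 / 2, iterates hover near the minimum at 0.
x_final = sgld(lambda x: x, x0=5.0 * np.ones(2))
```

The injected noise is what lets the chain escape suboptimal local minima, and the hitting-time analysis in the paper quantifies how long such escapes take.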

* Correct two mistakes in the proofs of Lemma 3 and Lemma 5

Macro Grammars and Holistic Triggering for Efficient Semantic Parsing

Aug 31, 2017

Yuchen Zhang, Panupong Pasupat, Percy Liang

To learn a semantic parser from denotations, a learning algorithm must search over a combinatorially large space of logical forms for ones consistent with the annotated denotations. We propose a new online learning algorithm that searches faster as training progresses. The two key ideas are using macro grammars to cache the abstract patterns of useful logical forms found thus far, and holistic triggering to efficiently retrieve the most relevant patterns based on sentence similarity. On the WikiTableQuestions dataset, we first expand the search space of an existing model to improve the state-of-the-art accuracy from 38.7% to 42.7%, and then use macro grammars and holistic triggering to achieve an 11x speedup and an accuracy of 43.7%.

* EMNLP 2017

Convexified Convolutional Neural Networks

Sep 04, 2016

Yuchen Zhang, Percy Liang, Martin J. Wainwright

We describe the class of convexified convolutional neural networks (CCNNs), which capture the parameter sharing of convolutional neural networks in a convex manner. By representing the nonlinear convolutional filters as vectors in a reproducing kernel Hilbert space, the CNN parameters can be represented as a low-rank matrix, which can be relaxed to obtain a convex optimization problem. For learning two-layer convolutional neural networks, we prove that the generalization error obtained by a convexified CNN converges to that of the best possible CNN. For learning deeper networks, we train CCNNs in a layer-wise manner. Empirically, CCNNs achieve performance competitive with CNNs trained by backpropagation, SVMs, fully-connected neural networks, stacked denoising auto-encoders, and other baseline methods.

* 29 pages

Optimal prediction for sparse linear models? Lower bounds for coordinate-separable M-estimators

Nov 30, 2015

Yuchen Zhang, Martin J. Wainwright, Michael I. Jordan

For the problem of high-dimensional sparse linear regression, it is known that an $\ell_0$-based estimator can achieve a $1/n$ "fast" rate on the prediction error without any conditions on the design matrix, whereas in the absence of restrictive conditions on the design matrix, popular polynomial-time methods only guarantee the $1/\sqrt{n}$ "slow" rate. In this paper, we show that the slow rate is intrinsic to a broad class of M-estimators. In particular, for estimators based on minimizing a least-squares cost function together with a (possibly non-convex) coordinate-wise separable regularizer, there is always a "bad" local optimum such that the associated prediction error is lower bounded by a constant multiple of $1/\sqrt{n}$. For convex regularizers, this lower bound applies to all global optima. The theory is applicable to many popular estimators, including convex $\ell_1$-based methods as well as M-estimators based on nonconvex regularizers, including the SCAD penalty or the MCP regularizer. In addition, for a broad class of nonconvex regularizers, we show that the bad local optima are very common, in that a broad class of local minimization algorithms with random initialization will typically converge to a bad solution.

* Add more coverage on related work; add a new lower bound for design matrices satisfying the restricted eigenvalue condition

$\ell_1$-regularized Neural Networks are Improperly Learnable in Polynomial Time

Oct 13, 2015

Yuchen Zhang, Jason D. Lee, Michael I. Jordan

We study the improper learning of multi-layer neural networks. Suppose that the neural network to be learned has $k$ hidden layers and that the $\ell_1$-norm of the incoming weights of any neuron is bounded by $L$. We present a kernel-based method such that, with probability at least $1 - \delta$, it learns a predictor whose generalization error is at most $\epsilon$ worse than that of the neural network. The sample complexity and the time complexity of the presented method are polynomial in the input dimension and in $(1/\epsilon,\log(1/\delta),F(k,L))$, where $F(k,L)$ is a function depending on $(k,L)$ and on the activation function, independent of the number of neurons. The algorithm applies to both sigmoid-like activation functions and ReLU-like activation functions. It implies that any sufficiently sparse neural network is learnable in polynomial time.

* 16 pages

Distributed Estimation of Generalized Matrix Rank: Efficient Algorithms and Lower Bounds

Feb 06, 2015

Yuchen Zhang, Martin J. Wainwright, Michael I. Jordan

We study the following generalized matrix rank estimation problem: given an $n \times n$ matrix and a constant $c \geq 0$, estimate the number of eigenvalues that are greater than $c$. In the distributed setting, the matrix of interest is the sum of $m$ matrices held by separate machines. We show that any deterministic algorithm solving this problem must communicate $\Omega(n^2)$ bits, which is order-equivalent to transmitting the whole matrix. In contrast, we propose a randomized algorithm that communicates only $\widetilde O(n)$ bits. The upper bound is matched by an $\Omega(n)$ lower bound on the randomized communication complexity. We demonstrate the practical effectiveness of the proposed algorithm with some numerical experiments.
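As a reference point, the quantity being estimated is easy to compute once all local matrices are gathered in one place. The sketch below shows only this centralized baseline, which requires communicating all $O(n^2)$ matrix entries; the paper's contribution is a randomized algorithm achieving the same estimate with only $\widetilde O(n)$ bits, which is not shown here.

```python
import numpy as np

def generalized_rank(local_matrices, c=0.0):
    """Count eigenvalues of the aggregate matrix that exceed c.

    Naive centralized baseline: summing the machines' matrices communicates
    all O(n^2) entries, which is exactly what the paper's randomized
    O~(n)-bit algorithm avoids.
    """
    total = sum(local_matrices)
    return int(np.sum(np.linalg.eigvalsh(total) > c))
```

With `c = 0` and positive semidefinite inputs, this reduces to the ordinary matrix rank, which is why the problem is called generalized rank estimation.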

* 23 pages, 5 figures

Communication-Efficient Algorithms for Statistical Optimization

Oct 11, 2013

Yuchen Zhang, John C. Duchi, Martin Wainwright

We analyze two communication-efficient algorithms for distributed statistical optimization on large-scale data sets. The first algorithm is a standard averaging method that distributes the $N$ data samples evenly to $m$ machines, performs separate minimization on each subset, and then averages the estimates. We provide a sharp analysis of this average mixture algorithm, showing that under a reasonable set of conditions, the combined parameter achieves mean-squared error that decays as $O(N^{-1}+(N/m)^{-2})$. Whenever $m \le \sqrt{N}$, this guarantee matches the best possible rate achievable by a centralized algorithm having access to all $N$ samples. The second algorithm is a novel method, based on an appropriate form of bootstrap subsampling. Requiring only a single round of communication, it has mean-squared error that decays as $O(N^{-1} + (N/m)^{-3})$, and so is more robust to the amount of parallelization. In addition, we show that a stochastic gradient-based method attains mean-squared error decaying as $O(N^{-1} + (N/m)^{-3/2})$, easing computation at the expense of penalties in the rate of convergence. We also provide experimental evaluation of our methods, investigating their performance both on simulated data and on a large-scale regression problem from the internet search domain. In particular, we show that our methods can be used to efficiently solve an advertisement prediction problem from the Chinese SoSo Search Engine, which involves logistic regression with $N \approx 2.4 \times 10^8$ samples and $d \approx 740,000$ covariates.
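The first (averaging) algorithm is simple enough to sketch directly; the data split, the choice of ordinary least squares as the local solver, and all names below are illustrative, and the bootstrap-based second algorithm is not shown.

```python
import numpy as np

def average_mixture(X, y, m, local_solve):
    """One-shot averaging: split the N samples evenly over m machines,
    run the local solver on each subset, and average the m estimates."""
    parts = zip(np.array_split(X, m), np.array_split(y, m))
    return np.mean([local_solve(Xi, yi) for Xi, yi in parts], axis=0)

# Illustrative use with ordinary least squares as the local solver.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=1000)
w_hat = average_mixture(X, y, m=10,
                        local_solve=lambda A, b: np.linalg.lstsq(A, b, rcond=None)[0])
```

Only the $m$ local estimates cross the network, one vector per machine, which is what makes the method communication-efficient.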

* 44 pages, to appear in Journal of Machine Learning Research (JMLR)

Bridging Theory and Algorithm for Domain Adaptation

Apr 11, 2019

Yuchen Zhang, Tianle Liu, Mingsheng Long, Michael I. Jordan

This paper addresses the problem of unsupervised domain adaptation from theoretical and algorithmic perspectives. Existing domain adaptation theories naturally imply minimax optimization algorithms, which connect well with adversarial-learning-based domain adaptation methods. However, several disconnections remain between theory and algorithm. We extend previous theories (Ben-David et al., 2010; Mansour et al., 2009c) to multiclass classification in domain adaptation, where classifiers based on scoring functions and margin loss are standard algorithmic choices. We introduce a novel measurement, margin disparity discrepancy, that is tailored both to distribution comparison with asymmetric margin loss, and to minimax optimization for easier training. Using this discrepancy, we derive new generalization bounds in terms of Rademacher complexity. Our theory can be seamlessly transformed into an adversarial learning algorithm for domain adaptation, successfully bridging the gap between theory and algorithm. A series of empirical studies show that our algorithm achieves state-of-the-art accuracies on challenging domain adaptation tasks.

Spectral Methods meet EM: A Provably Optimal Algorithm for Crowdsourcing

Nov 01, 2014

Yuchen Zhang, Xi Chen, Dengyong Zhou, Michael I. Jordan

Crowdsourcing is a popular paradigm for effectively collecting labels at low cost. The Dawid-Skene estimator has been widely used for inferring the true labels from the noisy labels provided by non-expert crowdsourcing workers. However, since the estimator maximizes a non-convex log-likelihood function, it is hard to theoretically justify its performance. In this paper, we propose a two-stage efficient algorithm for multi-class crowd labeling problems. The first stage uses the spectral method to obtain an initial estimate of parameters. Then the second stage refines the estimation by optimizing the objective function of the Dawid-Skene estimator via the EM algorithm. We show that our algorithm achieves the optimal convergence rate up to a logarithmic factor. We conduct extensive experiments on synthetic and real datasets. Experimental results demonstrate that the proposed algorithm is comparable to the most accurate empirical approach, while outperforming several other recently proposed methods.
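The EM refinement stage can be sketched compactly for a simplified "one-coin" binary model, where each worker has a single accuracy parameter rather than a full confusion matrix. Majority vote stands in here for the paper's spectral initialization, and all names are illustrative assumptions.

```python
import numpy as np

def em_dawid_skene(labels, n_iters=20):
    """EM refinement for a simplified one-coin binary Dawid-Skene model.

    labels: (n_items, n_workers) array in {0, 1}. Each worker j is assumed
    correct with a single probability acc[j], independent of the true class.
    """
    labels = np.asarray(labels, dtype=float)
    # Posterior P(true label = 1) per item, initialized by soft majority vote.
    q = labels.mean(axis=1)
    for _ in range(n_iters):
        # M-step: worker accuracy = expected agreement with the posterior.
        acc = (q[:, None] * labels + (1 - q[:, None]) * (1 - labels)).mean(axis=0)
        acc = np.clip(acc, 1e-6, 1 - 1e-6)
        # E-step: posterior over each item's true label, workers independent.
        log1 = (labels * np.log(acc) + (1 - labels) * np.log(1 - acc)).sum(axis=1)
        log0 = ((1 - labels) * np.log(acc) + labels * np.log(1 - acc)).sum(axis=1)
        q = 1.0 / (1.0 + np.exp(log0 - log1))
    return (q > 0.5).astype(int), acc
```

The paper's two-stage design replaces the majority-vote initialization with a spectral estimate, which is what makes the subsequent EM refinement provably near-optimal.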

Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates

Apr 29, 2014

Yuchen Zhang, John C. Duchi, Martin J. Wainwright

We establish optimal convergence rates for a decomposition-based scalable approach to kernel ridge regression. The method is simple to describe: it randomly partitions a dataset of size N into m subsets of equal size, computes an independent kernel ridge regression estimator for each subset, then averages the local solutions into a global predictor. This partitioning leads to a substantial reduction in computation time versus the standard approach of performing kernel ridge regression on all N samples. Our two main theorems establish that despite the computational speed-up, statistical optimality is retained: as long as m is not too large, the partition-based estimator achieves the statistical minimax rate over all estimators using the set of N samples. As concrete examples, our theory guarantees that the number of processors m may grow nearly linearly for finite-rank kernels and Gaussian kernels and polynomially in N for Sobolev spaces, which in turn allows for substantial reductions in computational cost. We conclude with experiments on both simulated data and a music-prediction task that complement our theoretical results, exhibiting the computational and statistical benefits of our approach.
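A minimal sketch of the partition-and-average estimator is below; the Gaussian kernel, bandwidth, and regularization are illustrative choices, not the paper's tuned settings.

```python
import numpy as np

def krr_fit(X, y, lam=1e-3, bandwidth=1.0):
    """Kernel ridge regression with a Gaussian kernel; returns a predictor."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2.0 * bandwidth ** 2))
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    def predict(Xq):
        sq_q = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq_q / (2.0 * bandwidth ** 2)) @ alpha
    return predict

def divide_and_conquer_krr(X, y, m, **kwargs):
    """Randomly partition the data into m subsets, fit an independent
    KRR estimator on each, and average the local predictors."""
    idx = np.random.default_rng(0).permutation(len(X))
    preds = [krr_fit(X[p], y[p], **kwargs) for p in np.array_split(idx, m)]
    return lambda Xq: np.mean([f(Xq) for f in preds], axis=0)
```

Each local solve costs $O((N/m)^3)$ instead of $O(N^3)$, which is the source of the computational speed-up the theorems show comes at no statistical cost when $m$ is not too large.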

Big-Data Clustering: K-Means or K-Indicators?

Jun 03, 2019

Feiyu Chen, Yuchen Yang, Liwei Xu, Taiping Zhang, Yin Zhang

The K-means algorithm is arguably the most popular data clustering method, commonly applied to processed datasets in some "feature space", as in spectral clustering. However, K-means is highly sensitive to initialization and encounters a scalability bottleneck with respect to the number of clusters K as this number grows in big-data applications. In this work, we promote a closely related model called the K-indicators model and construct an efficient, semi-convex-relaxation algorithm that requires no randomized initialization. We present extensive empirical results showing the advantages of the new algorithm when K is large. In particular, using the new algorithm to start the K-means algorithm, without any replication, can significantly outperform standard K-means run with a large number of state-of-the-art random initializations.

Language-Independent Representor for Neural Machine Translation

Nov 01, 2018

Long Zhou, Yuchen Liu, Jiajun Zhang, Chengqing Zong, Guoping Huang

Current Neural Machine Translation (NMT) employs a language-specific encoder to represent the source sentence and a language-specific decoder to generate the target translation. This language-dependent design leads to large-scale network parameters and leaves the duality of the parallel data underutilized. To address this problem, we propose a language-independent representor that replaces the encoder and decoder through weight sharing. This shared representor not only reduces a large portion of the network parameters, but also allows us to fully exploit the language duality by jointly training source-to-target, target-to-source, left-to-right and right-to-left translations within a multi-task learning framework. Experiments show that our proposed framework obtains significant improvements over conventional NMT models on resource-rich and low-resource translation tasks with only a quarter of the parameters.

Learning Halfspaces and Neural Networks with Random Initialization

Nov 25, 2015

Yuchen Zhang, Jason D. Lee, Martin J. Wainwright, Michael I. Jordan

We study non-convex empirical risk minimization for learning halfspaces and neural networks. For loss functions that are $L$-Lipschitz continuous, we present algorithms to learn halfspaces and multi-layer neural networks that achieve arbitrarily small excess risk $\epsilon>0$. The time complexity is polynomial in the input dimension $d$ and the sample size $n$, but exponential in the quantity $(L/\epsilon^2)\log(L/\epsilon)$. These algorithms run multiple rounds of random initialization followed by arbitrary optimization steps. We further show that if the data is separable by some neural network with constant margin $\gamma>0$, then there is a polynomial-time algorithm for learning a neural network that separates the training data with margin $\Omega(\gamma)$. As a consequence, the algorithm achieves arbitrary generalization error $\epsilon>0$ with ${\rm poly}(d,1/\epsilon)$ sample and time complexity. We establish the same learnability result when the labels are randomly flipped with probability $\eta<1/2$.

* 31 pages
