Models, code, and papers for "Aryan Mokhtari":

Adaptive Newton Method for Empirical Risk Minimization to Statistical Accuracy

May 24, 2016
Aryan Mokhtari, Alejandro Ribeiro

We consider empirical risk minimization for large-scale datasets. We introduce Ada Newton as an adaptive algorithm that uses Newton's method with adaptive sample sizes. The main idea of Ada Newton is to increase the size of the training set by a factor larger than one in a way that the minimization variable for the current training set is in the local neighborhood of the optimal argument of the next training set. This allows us to exploit the quadratic convergence property of Newton's method and reach the statistical accuracy of each training set with only one iteration of Newton's method. We show theoretically and empirically that Ada Newton can double the size of the training set in each iteration to achieve the statistical accuracy of the full training set with about two passes over the dataset.
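
As a rough illustration of the adaptive sample-size idea, the sketch below takes one Newton step per stage on a synthetic regularized logistic-regression problem and doubles the sample size after each step. The dataset, the choice of regularizer reg = 1/n (tied to the statistical accuracy of an n-sample set), and all constants are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the adaptive sample-size Newton idea (not the paper's code).
# Synthetic logistic-regression ERM problem; all names and constants are hypothetical.
import numpy as np

def logistic_grad_hess(w, X, y, reg):
    z = X @ w
    p = 1.0 / (1.0 + np.exp(-y * z))              # P(correct label) under current w
    g = -(X.T @ ((1.0 - p) * y)) / len(y) + reg * w
    H = (X.T * (p * (1.0 - p))) @ X / len(y) + reg * np.eye(len(w))
    return g, H

rng = np.random.default_rng(0)
N, d = 4096, 10
X = rng.standard_normal((N, d))
y = np.sign(X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(N))

w = np.zeros(d)
n = 64                                            # initial training-set size
while n <= N:
    reg = 1.0 / n                                 # regularizer ~ statistical accuracy of n samples
    g, H = logistic_grad_hess(w, X[:n], y[:n], reg)
    w = w - np.linalg.solve(H, g)                 # a single Newton step per stage
    n *= 2                                        # double the sample size
```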


Global Convergence of Online Limited Memory BFGS

Sep 06, 2014
Aryan Mokhtari, Alejandro Ribeiro

Global convergence of an online (stochastic) limited memory version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method for solving optimization problems with stochastic objectives that arise in large scale machine learning is established. Lower and upper bounds on the Hessian eigenvalues of the sample functions are shown to suffice to guarantee that the curvature approximation matrices have bounded determinants and traces, which, in turn, permits establishing convergence to optimal arguments with probability 1. Numerical experiments on support vector machines with synthetic data showcase reductions in convergence time relative to stochastic gradient descent algorithms as well as reductions in storage and computation relative to other online quasi-Newton methods. Experimental evaluation on a search engine advertising problem corroborates that these advantages also manifest in practical applications.
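
For readers who want a concrete picture, here is a minimal sketch of an online limited-memory BFGS loop on a synthetic least-squares objective, using the standard two-loop recursion and curvature pairs formed from gradients on the same minibatch. Memory size, step sizes, and the pair-acceptance test are illustrative choices rather than the tuned or regularized values analyzed in the paper.

```python
# Loose sketch of an online (stochastic) L-BFGS loop; illustrative only.
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
N, d, m = 2000, 20, 10                       # samples, dimension, L-BFGS memory
A = rng.standard_normal((N, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(N)

def stoch_grad(w, idx):                      # minibatch gradient of 0.5*||Aw - b||^2 / N
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx)

def two_loop(g, pairs):                      # standard L-BFGS two-loop recursion
    q, alphas = g.copy(), []
    for s, y in reversed(pairs):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if pairs:
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)               # initial Hessian scaling
    for (s, y), a in zip(pairs, reversed(alphas)):
        beta = (y @ q) / (y @ s)
        q += (a - beta) * s
    return q                                 # approximates H^{-1} g

w, mem = np.zeros(d), deque(maxlen=m)
for t in range(500):
    idx = rng.choice(N, size=32, replace=False)
    g = stoch_grad(w, idx)
    w_new = w - (1.0 / (t + 10)) * two_loop(g, list(mem))
    s_pair = w_new - w
    y_pair = stoch_grad(w_new, idx) - g      # same minibatch for both gradient evaluations
    if y_pair @ s_pair > 1e-10:              # keep only well-conditioned curvature pairs
        mem.append((s_pair, y_pair))
    w = w_new
```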

* 37 pages 

A Quasi-Newton Method for Large Scale Support Vector Machines

Feb 20, 2014
Aryan Mokhtari, Alejandro Ribeiro

This paper adapts a recently developed regularized stochastic version of the Broyden, Fletcher, Goldfarb, and Shanno (BFGS) quasi-Newton method for the solution of support vector machine classification problems. The proposed method is shown to converge almost surely to the optimal classifier at a rate that is linear in expectation. Numerical results show that the proposed method exhibits a convergence rate that degrades smoothly with the dimensionality of the feature vectors.

* 5 pages, to appear in the International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 2014 

RES: Regularized Stochastic BFGS Algorithm

Jan 29, 2014
Aryan Mokhtari, Alejandro Ribeiro

RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high dimensional problems. Application of second order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients for both the determination of descent directions and the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.
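
The sketch below shows one loose reading of a regularized stochastic BFGS iteration: stochastic gradients drive both the descent direction and the curvature (inverse-Hessian) update, and a small multiple of the identity is added to the descent matrix. The placement and size of the regularizer, the pair-acceptance test, and the step-size schedule are illustrative assumptions and may differ in detail from the exact RES rule.

```python
# Loose sketch of a regularized stochastic BFGS update; details are illustrative,
# not necessarily the exact RES update rule.
import numpy as np

rng = np.random.default_rng(2)
N, d = 2000, 15
A = rng.standard_normal((N, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(N)

def stoch_grad(w, idx):                      # stochastic gradient of a least-squares loss
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ w - bi) / len(idx)

w = np.zeros(d)
Hinv = np.eye(d)                             # inverse Hessian approximation
gamma = 1e-2                                 # identity regularizer on the descent direction
for t in range(300):
    idx = rng.choice(N, size=32, replace=False)
    g = stoch_grad(w, idx)
    w_new = w - (1.0 / (t + 10)) * (Hinv + gamma * np.eye(d)) @ g
    s = w_new - w
    y = stoch_grad(w_new, idx) - g           # same minibatch for both gradient evaluations
    if y @ s > 1e-10:                        # standard BFGS inverse update
        rho = 1.0 / (y @ s)
        V = np.eye(d) - rho * np.outer(s, y)
        Hinv = V @ Hinv @ V.T + rho * np.outer(s, s)
    w = w_new
```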

* 13 pages 

On the Convergence Theory of Gradient-Based Model-Agnostic Meta-Learning Algorithms

Sep 25, 2019
Alireza Fallah, Aryan Mokhtari, Asuman Ozdaglar

In this paper, we study the convergence of a class of gradient-based Model-Agnostic Meta-Learning (MAML) methods and characterize their overall computational complexity as well as their best achievable level of accuracy in terms of gradient norm for nonconvex loss functions. In particular, we start with the MAML algorithm and its first-order approximation (FO-MAML) and highlight the challenges that emerge in their analysis. By overcoming these challenges, we not only provide the first theoretical guarantees for MAML and FO-MAML in nonconvex settings, but also answer some of the open questions regarding the implementation of these algorithms, including how to choose their learning rate (stepsize) and the batch size for both tasks and the datasets corresponding to tasks. In particular, we show that MAML can find an $\epsilon$-first-order stationary point for any positive $\epsilon$ after at most $\mathcal{O}(1/\epsilon^2)$ iterations at the expense of requiring second-order information. We also show that the FO-MAML method, which ignores the second-order information required in the update of MAML, cannot achieve an arbitrarily small desired level of accuracy, i.e., FO-MAML cannot find an $\epsilon$-first-order stationary point for every positive $\epsilon$. We further propose a new variant of the MAML algorithm called Hessian-free MAML (HF-MAML), which preserves all theoretical guarantees of MAML without requiring access to second-order information about the loss functions.
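
To make the distinction between the three update rules concrete, the sketch below implements one-step MAML, its first-order approximation, and a Hessian-free variant that replaces the Hessian-vector product with a finite difference of gradients, on toy quadratic tasks. The task model, step sizes, and finite-difference parameter are illustrative assumptions, not the settings analyzed in the paper.

```python
# Sketch of one-step MAML / FO-MAML / Hessian-free MAML gradient estimates on toy
# quadratic tasks; the task distribution and all constants are illustrative.
import numpy as np

rng = np.random.default_rng(3)
d, n_tasks, alpha, delta = 5, 8, 0.05, 1e-4
Qs = [np.diag(rng.uniform(0.5, 2.0, d)) for _ in range(n_tasks)]
cs = [rng.standard_normal(d) for _ in range(n_tasks)]

def grad(i, w):                     # gradient of task i's loss 0.5 w^T Q_i w - c_i^T w
    return Qs[i] @ w - cs[i]

def maml_grad(i, w):                # exact: (I - alpha * Hessian) * grad at adapted point
    g_adapt = grad(i, w - alpha * grad(i, w))
    return (np.eye(d) - alpha * Qs[i]) @ g_adapt

def fo_maml_grad(i, w):             # first-order approximation: drop the Hessian term
    return grad(i, w - alpha * grad(i, w))

def hf_maml_grad(i, w):             # Hessian-free: finite-difference Hessian-vector product
    g_adapt = grad(i, w - alpha * grad(i, w))
    hvp = (grad(i, w + delta * g_adapt) - grad(i, w - delta * g_adapt)) / (2 * delta)
    return g_adapt - alpha * hvp

w = rng.standard_normal(d)
for t in range(200):                # meta-updates averaging over the batch of tasks
    g = np.mean([hf_maml_grad(i, w) for i in range(n_tasks)], axis=0)
    w -= 0.1 * g
```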

* 33 pages 

Proximal Point Approximations Achieving a Convergence Rate of $\mathcal{O}(1/k)$ for Smooth Convex-Concave Saddle Point Problems: Optimistic Gradient and Extra-gradient Methods

Jun 03, 2019
Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil

In this paper we analyze the iteration complexity of the optimistic gradient descent-ascent (OGDA) method as well as the extra-gradient (EG) method for finding a saddle point of a convex-concave unconstrained min-max problem. To do so, we first show that both OGDA and EG can be interpreted as approximate variants of the proximal point method. We then exploit this interpretation to show that both of these algorithms achieve a convergence rate of $\mathcal{O}(1/k)$ for smooth convex-concave saddle point problems. Our theoretical analysis is of interest as it provides a simple convergence analysis for the EG algorithm in terms of objective function value without using a compactness assumption. Moreover, it provides the first convergence guarantee for OGDA in the general convex-concave setting.
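
For concreteness, the sketch below runs the standard extra-gradient and optimistic gradient updates on a toy bilinear saddle-point problem $f(x,y) = x^\top B y$; the step size and problem are illustrative, and the updates are the textbook forms rather than anything specific to the analysis above. On bilinear problems, plain gradient descent-ascent diverges, which is what makes these corrected updates a useful contrast.

```python
# Sketch of the extra-gradient (EG) and optimistic gradient (OGDA) iterations on a
# toy bilinear saddle-point problem f(x, y) = x^T B y; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
d, eta, T = 4, 0.1, 500
B = rng.standard_normal((d, d))

def grad_x(x, y):                  # gradient of f with respect to x
    return B @ y

def grad_y(x, y):                  # gradient of f with respect to y
    return B.T @ x

# Extra-gradient: a mid-point (extrapolated) evaluation followed by the actual update.
x, y = np.ones(d), np.ones(d)
for _ in range(T):
    xm, ym = x - eta * grad_x(x, y), y + eta * grad_y(x, y)
    x, y = x - eta * grad_x(xm, ym), y + eta * grad_y(xm, ym)

# OGDA: a gradient step corrected by the previous iteration's gradient.
x, y = np.ones(d), np.ones(d)
gx_prev, gy_prev = grad_x(x, y), grad_y(x, y)
for _ in range(T):
    gx_cur, gy_cur = grad_x(x, y), grad_y(x, y)
    x = x - 2 * eta * gx_cur + eta * gx_prev
    y = y + 2 * eta * gy_cur - eta * gy_prev
    gx_prev, gy_prev = gx_cur, gy_cur
```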

* 15 pages 

A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach

Jan 24, 2019
Aryan Mokhtari, Asuman Ozdaglar, Sarath Pattathil

We consider solving convex-concave saddle point problems. We focus on two variants of gradient descent-ascent algorithms, the Extra-gradient (EG) and Optimistic Gradient (OGDA) methods, and show that they admit a unified analysis as approximations of the classical proximal point method for solving saddle-point problems. This viewpoint enables us to generalize EG (in terms of extrapolation steps) and OGDA (in terms of parameters) and obtain new convergence rate results for these algorithms for the bilinear case as well as the strongly convex-concave case.

* 31 pages, 3 figures 

Escaping Saddle Points in Constrained Optimization

Oct 09, 2018
Aryan Mokhtari, Asuman Ozdaglar, Ali Jadbabaie

In this paper, we study the problem of escaping from saddle points in smooth nonconvex optimization problems subject to a convex set $\mathcal{C}$. We propose a generic framework that yields convergence to a second-order stationary point of the problem, if the convex set $\mathcal{C}$ is simple for a quadratic objective function. Specifically, our results hold if one can find a $\rho$-approximate solution of a quadratic program subject to $\mathcal{C}$ in polynomial time, where $\rho<1$ is a positive constant that depends on the structure of the set $\mathcal{C}$. Under this condition, we show that the sequence of iterates generated by the proposed framework reaches an $(\epsilon,\gamma)$-second order stationary point (SOSP) in at most $\mathcal{O}(\max\{\epsilon^{-2},\rho^{-3}\gamma^{-3}\})$ iterations. We further characterize the overall complexity of reaching an SOSP when the convex set $\mathcal{C}$ can be written as a set of quadratic constraints and the objective function Hessian has a specific structure over the convex set $\mathcal{C}$. Finally, we extend our results to the stochastic setting and characterize the number of stochastic gradient and Hessian evaluations to reach an $(\epsilon,\gamma)$-SOSP.


Stochastic Conditional Gradient Methods: From Convex Minimization to Submodular Maximization

Apr 24, 2018
Aryan Mokhtari, Hamed Hassani, Amin Karbasi

This paper considers stochastic optimization problems for a large class of objective functions, including convex and continuous submodular. Stochastic proximal gradient methods have been widely used to solve such problems; however, their applicability remains limited when the problem dimension is large and the projection onto a convex set is costly. Instead, stochastic conditional gradient methods are proposed as an alternative, relying on (i) approximating gradients via a simple averaging technique requiring a single stochastic gradient evaluation per iteration, and (ii) solving a linear program to compute the descent/ascent direction. The averaging technique reduces the noise of gradient approximations as time progresses, and replacing the projection step in proximal methods with a linear program lowers the computational complexity of each iteration. We show that under convexity and smoothness assumptions, our proposed method converges to the optimal objective function value at a sublinear rate of $O(1/t^{1/3})$. Further, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that our proposed method achieves a $((1-1/e)\text{OPT}-\epsilon)$ guarantee with $O(1/\epsilon^3)$ stochastic gradient computations. This guarantee matches the known hardness results and closes the gap between deterministic and stochastic continuous submodular maximization. Additionally, we obtain a $((1/e)\text{OPT}-\epsilon)$ guarantee after using $O(1/\epsilon^3)$ stochastic gradients for the case that the objective function is continuous DR-submodular but non-monotone and the constraint set is down-closed. By using stochastic continuous optimization as an interface, we provide the first $(1-1/e)$ tight approximation guarantee for maximizing a monotone but stochastic submodular set function subject to a matroid constraint, and a $(1/e)$ approximation guarantee for the non-monotone case.
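
A minimal sketch of the two ingredients described above, gradient averaging plus a linear-minimization step, applied to a convex least-squares problem over the probability simplex (where the linear program reduces to picking a vertex). The averaging and step-size schedules below are illustrative choices, not the exact parameters from the analysis.

```python
# Sketch of a stochastic conditional gradient (Frank-Wolfe) method with averaged
# gradients on a convex problem over the probability simplex; schedules are illustrative.
import numpy as np

rng = np.random.default_rng(5)
N, d, T = 2000, 30, 300
A = rng.standard_normal((N, d))
b = A @ rng.dirichlet(np.ones(d)) + 0.01 * rng.standard_normal(N)

def stoch_grad(x, idx):                           # stochastic gradient of 0.5*||Ax - b||^2 / N
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

x = np.ones(d) / d
d_avg = np.zeros(d)
for t in range(1, T + 1):
    idx = rng.choice(N, size=1, replace=False)    # one stochastic gradient per iteration
    rho = 1.0 / t ** (2 / 3)                      # averaging weight
    d_avg = (1 - rho) * d_avg + rho * stoch_grad(x, idx)
    v = np.zeros(d)
    v[np.argmin(d_avg)] = 1.0                     # linear minimization over the simplex
    x = x + (1.0 / t) * (v - x)                   # conditional-gradient step
```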

* arXiv admin note: text overlap with arXiv:1711.01660 

Surpassing Gradient Descent Provably: A Cyclic Incremental Method with Linear Convergence Rate

Feb 07, 2018
Aryan Mokhtari, Mert Gürbüzbalaban, Alejandro Ribeiro

Recently, there has been growing interest in developing optimization methods for solving large-scale machine learning problems. Most of these problems boil down to minimizing an average of a finite set of smooth and strongly convex functions, where the number of functions $n$ is large. The gradient descent (GD) method minimizes such convex problems at a fast linear rate; however, it is not applicable in the considered large-scale setting because of its high computational complexity. Incremental methods resolve this drawback of gradient methods by replacing the gradient required for the descent direction with an incremental gradient approximation. They operate by evaluating one gradient per iteration and using the average of the $n$ available gradients as a gradient approximation. Although incremental methods reduce the computational cost of GD, their convergence rates do not justify their advantage relative to GD in terms of the total number of gradient evaluations until convergence. In this paper, we introduce a Double Incremental Aggregated Gradient method (DIAG) that computes the gradient of only one function at each iteration, chosen according to a cyclic scheme, and uses the aggregated average gradient of all the functions to approximate the full gradient. The proposed DIAG method uses averages of both iterates and gradients, as opposed to classic incremental methods that utilize gradient averages but not iterate averages. We prove not only that the proposed DIAG method converges linearly to the optimal solution, but also that its linear convergence factor justifies the advantage of incremental methods over GD. In particular, we prove that the worst-case performance of DIAG is better than the worst-case performance of GD.
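
The sketch below illustrates the double-averaging idea: a table of per-function iterates and gradients is kept, the new iterate is formed from the averages of both tables, and a single entry is refreshed per iteration in cyclic order. The component functions, step size, and iteration count are illustrative assumptions rather than the setting analyzed in the paper.

```python
# Sketch of a double incremental aggregated gradient update with cyclic refreshes,
# keeping running averages of stored iterates and stored gradients; illustrative only.
import numpy as np

rng = np.random.default_rng(6)
n, d = 50, 10
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)

def grad_i(i, x):                            # gradient of the i-th component function
    return A[i] * (A[i] @ x - b[i]) + 0.1 * x

x = np.zeros(d)
iters = np.tile(x, (n, 1))                   # stored copy of the iterate for each function
grads = np.array([grad_i(i, x) for i in range(n)])
alpha = 0.05
for k in range(20 * n):
    i = k % n                                # cyclic choice of the function to refresh
    x = iters.mean(axis=0) - alpha * grads.mean(axis=0)   # averages of iterates and gradients
    iters[i], grads[i] = x, grad_i(i, x)     # refresh only one stored pair per iteration
```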


Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap

Nov 05, 2017
Aryan Mokhtari, Hamed Hassani, Amin Karbasi

In this paper, we study the problem of constrained and stochastic continuous submodular maximization. Even though the objective function is not concave (nor convex) and is defined in terms of an expectation, we develop a variant of the conditional gradient method that achieves a tight approximation guarantee. More precisely, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that the proposed method achieves a $[(1-1/e)\text{OPT} -\epsilon]$ guarantee (in expectation) with $\mathcal{O}(1/\epsilon^3)$ stochastic gradient computations. This guarantee matches the known hardness results and closes the gap between deterministic and stochastic continuous submodular maximization. By using stochastic continuous optimization as an interface, we also provide the first $(1-1/e)$ tight approximation guarantee for maximizing a monotone but stochastic submodular set function subject to a general matroid constraint.


Large Scale Empirical Risk Minimization via Truncated Adaptive Newton Method

May 22, 2017
Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro

We consider large scale empirical risk minimization (ERM) problems, where both the problem dimension and the number of samples are large. In these cases, most second-order methods are infeasible due to the high cost of both computing the Hessian over all samples and computing its inverse in high dimensions. In this paper, we propose a novel adaptive sample size second-order method, which reduces the cost of computing the Hessian by solving a sequence of ERM problems corresponding to a subset of samples and lowers the cost of computing the Hessian inverse using a truncated eigenvalue decomposition. We show that while we geometrically increase the size of the training set at each stage, a single iteration of the truncated Newton method is sufficient to solve the new ERM problem within its statistical accuracy. Moreover, for a large number of samples we are allowed to double the size of the training set at each stage, and the proposed method subsequently reaches the statistical accuracy of the full training set after approximately two effective passes. In addition to this theoretical result, we show empirically on a number of well known data sets that the proposed truncated adaptive sample size algorithm outperforms stochastic alternatives for solving ERM problems.
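
As a rough illustration, the sketch below combines an adaptive sample-size loop with a Newton step whose inverse Hessian is approximated via a truncated eigenvalue decomposition (exact on the top eigenspace, a single scalar on the remainder). This particular truncated-inverse construction, the ridge objective, and all constants are assumptions for illustration and may differ from the paper's construction.

```python
# Sketch: adaptive sample-size loop with a truncated-eigendecomposition Newton step
# on a synthetic ridge-regression problem; illustrative only.
import numpy as np

rng = np.random.default_rng(7)
N, d, rank = 4096, 40, 10
A = rng.standard_normal((N, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(N)

def grad_hess(w, An, bn, reg):               # ridge-regression gradient and Hessian
    g = An.T @ (An @ w - bn) / len(bn) + reg * w
    H = An.T @ An / len(bn) + reg * np.eye(len(w))
    return g, H

w, n = np.zeros(d), 128
while n <= N:
    g, H = grad_hess(w, A[:n], b[:n], 1.0 / n)
    vals, vecs = np.linalg.eigh(H)           # eigenvalues in ascending order
    top_vals, top_vecs = vals[-rank:], vecs[:, -rank:]
    sigma = vals[-rank - 1]                  # truncation level: largest discarded eigenvalue
    proj = top_vecs @ (top_vecs.T @ g)       # component of g in the top eigenspace
    step = top_vecs @ ((top_vecs.T @ g) / top_vals) + (g - proj) / sigma
    w = w - step                             # one approximate Newton step per stage
    n *= 2                                   # double the sample size
```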


IQN: An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate

Mar 27, 2017
Aryan Mokhtari, Mark Eisen, Alejandro Ribeiro

The problem of minimizing an objective that can be written as the sum of a set of $n$ smooth and strongly convex functions is considered. The Incremental Quasi-Newton (IQN) method proposed here belongs to the family of stochastic and incremental methods that have a cost per iteration independent of $n$. IQN iterations are a stochastic version of BFGS iterations that use memory to reduce the variance of stochastic approximations. The convergence properties of IQN bridge a gap between deterministic and stochastic quasi-Newton methods. Deterministic quasi-Newton methods exploit the possibility of approximating the Newton step using objective gradient differences. They are appealing because they have a smaller computational cost per iteration relative to Newton's method and achieve a superlinear convergence rate under customary regularity assumptions. Stochastic quasi-Newton methods utilize stochastic gradient differences in lieu of actual gradient differences. This makes their computational cost per iteration independent of the number of objective functions $n$. However, existing stochastic quasi-Newton methods have sublinear or linear convergence at best. IQN is the first stochastic quasi-Newton method proven to converge superlinearly in a local neighborhood of the optimal solution. IQN differs from state-of-the-art incremental quasi-Newton methods in three aspects: (i) The use of aggregated information of variables, gradients, and quasi-Newton Hessian approximation matrices to reduce the noise of gradient and Hessian approximations. (ii) The approximation of each individual function by its Taylor expansion in which the linear and quadratic terms are evaluated with respect to the same iterate. (iii) The use of a cyclic scheme to update the functions in lieu of a random selection routine. We use these fundamental properties of IQN to establish its local superlinear convergence rate.
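
The sketch below reflects one reading of the aggregation described above: per-function copies of the iterate, gradient, and a BFGS Hessian approximation are stored, the new iterate solves the aggregated linear system, and one triple is refreshed per iteration in cyclic order. The aggregation formula and problem instance are illustrative, not a verified reproduction of the IQN update.

```python
# Sketch of an incremental quasi-Newton iteration with aggregated per-function
# iterates, gradients, and BFGS Hessian approximations; illustrative only.
import numpy as np

rng = np.random.default_rng(8)
n, d = 30, 8
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d)

def grad_i(i, x):                            # gradient of the i-th component function
    return A[i] * (A[i] @ x - b[i]) + 0.1 * x

x = np.zeros(d)
z = np.tile(x, (n, 1))                       # stored iterate per function
g = np.array([grad_i(i, x) for i in range(n)])
B = np.array([np.eye(d) for _ in range(n)])  # per-function Hessian approximations
for k in range(50 * n):
    i = k % n                                # cyclic index
    # aggregated step: x = (sum_i B_i)^{-1} (sum_i B_i z_i - sum_i g_i)
    rhs = np.einsum('nij,nj->i', B, z) - g.sum(axis=0)
    x = np.linalg.solve(B.sum(axis=0), rhs)
    # refresh the i-th triple, updating B_i with a standard BFGS formula
    s, y = x - z[i], grad_i(i, x) - g[i]
    if y @ s > 1e-12:
        Bs = B[i] @ s
        B[i] = B[i] + np.outer(y, y) / (y @ s) - np.outer(Bs, Bs) / (s @ Bs)
    z[i], g[i] = x, grad_i(i, x)
```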


A Class of Parallel Doubly Stochastic Algorithms for Large-Scale Learning

Jun 15, 2016
Aryan Mokhtari, Alec Koppel, Alejandro Ribeiro

We consider learning problems over training sets in which both the number of training examples and the dimension of the feature vectors are large. To solve these problems we propose the random parallel stochastic algorithm (RAPSA). We call the algorithm random parallel because it utilizes multiple parallel processors to operate on a randomly chosen subset of blocks of the feature vector. We call the algorithm stochastic because processors choose training subsets uniformly at random. Algorithms that are parallel in either of these dimensions exist, but RAPSA is the first attempt at a methodology that is parallel in both the selection of blocks and the selection of elements of the training set. In RAPSA, processors utilize the randomly chosen functions to compute the stochastic gradient component associated with a randomly chosen block. The technical contribution of this paper is to show that this minimally coordinated algorithm converges to the optimal classifier when the training objective is convex. Moreover, we present an accelerated version of RAPSA (ARAPSA) that incorporates the objective function curvature information by premultiplying the descent direction by a Hessian approximation matrix. We further extend the results to asynchronous settings and show that if the processors perform their updates without any coordination, the algorithms still converge to the optimal argument. RAPSA and its extensions are then numerically evaluated on a linear estimation problem and a binary image classification task using the MNIST handwritten digit dataset.
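
A serial simulation of the doubly random structure: in each round, a few simulated processors each draw a random minibatch and update only a randomly chosen block of coordinates. Block sizes, minibatch size, the constant step size, and the serial (rather than truly parallel) execution are illustrative assumptions.

```python
# Sketch of a doubly stochastic update: random blocks of coordinates are updated with
# stochastic gradients computed on random minibatches; a serial simulation, illustrative only.
import numpy as np

rng = np.random.default_rng(9)
N, d, n_blocks, n_procs = 5000, 40, 8, 4
A = rng.standard_normal((N, d))
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(N)
blocks = np.array_split(np.arange(d), n_blocks)

w = np.zeros(d)
for t in range(2000):
    chosen = rng.choice(n_blocks, size=n_procs, replace=False)
    for blk_id in chosen:                        # each simulated "processor" handles one block
        blk = blocks[blk_id]
        idx = rng.choice(N, size=16, replace=False)
        Ai, bi = A[idx], b[idx]
        g_blk = Ai[:, blk].T @ (Ai @ w - bi) / len(idx)   # block of the stochastic gradient
        w[blk] -= 0.01 * g_blk                   # update only the chosen block
```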

* arXiv admin note: substantial text overlap with arXiv:1603.06782 

A Decentralized Quasi-Newton Method for Dual Formulations of Consensus Optimization

Mar 23, 2016
Mark Eisen, Aryan Mokhtari, Alejandro Ribeiro

This paper considers consensus optimization problems where each node of a network has access to a different summand of an aggregate cost function. Nodes try to minimize the aggregate cost function, while they exchange information only with their neighbors. We modify the dual decomposition method to incorporate a curvature correction inspired by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method. The resulting dual D-BFGS method is a fully decentralized algorithm in which nodes approximate curvature information of themselves and their neighbors through the satisfaction of a secant condition. Dual D-BFGS is of interest in consensus optimization problems that are not well conditioned, making first order decentralized methods ineffective, and in which second order information is not readily available, making decentralized second order methods infeasible. Asynchronous implementation is discussed and convergence of D-BFGS is established formally for both synchronous and asynchronous implementations. Performance advantages relative to alternative decentralized algorithms are shown numerically.

* 8 pages 

Doubly Random Parallel Stochastic Methods for Large Scale Learning

Mar 22, 2016
Aryan Mokhtari, Alec Koppel, Alejandro Ribeiro

We consider learning problems over training sets in which both the number of training examples and the dimension of the feature vectors are large. To solve these problems we propose the random parallel stochastic algorithm (RAPSA). We call the algorithm random parallel because it utilizes multiple processors to operate on a randomly chosen subset of blocks of the feature vector. We call the algorithm parallel stochastic because processors choose elements of the training set randomly and independently. Algorithms that are parallel in either of these dimensions exist, but RAPSA is the first attempt at a methodology that is parallel in both the selection of blocks and the selection of elements of the training set. In RAPSA, processors utilize the randomly chosen functions to compute the stochastic gradient component associated with a randomly chosen block. The technical contribution of this paper is to show that this minimally coordinated algorithm converges to the optimal classifier when the training objective is convex. In particular, we show that: (i) when using decreasing stepsizes, RAPSA converges almost surely over the random choice of blocks and functions; (ii) when using constant stepsizes, convergence is to a neighborhood of optimality with a rate that is linear in expectation. RAPSA is numerically evaluated on the MNIST digit recognition problem.


Stochastic Conditional Gradient++

Feb 19, 2019
Hamed Hassani, Amin Karbasi, Aryan Mokhtari, Zebang Shen

In this paper, we develop Stochastic Continuous Greedy++ (SCG++), the first efficient variant of a conditional gradient method for maximizing a continuous submodular function subject to a convex constraint. Concretely, for a monotone and continuous DR-submodular function, SCG++ achieves a tight $[(1-1/e)\text{OPT} -\epsilon]$ solution while using $O(1/\epsilon^2)$ stochastic oracle queries and $O(1/\epsilon)$ calls to the linear optimization oracle. The best previously known algorithms either achieve a suboptimal $[(1/2)\text{OPT} -\epsilon]$ solution with $O(1/\epsilon^2)$ stochastic gradients or the tight $[(1-1/e)\text{OPT} -\epsilon]$ solution with a suboptimal $O(1/\epsilon^3)$ stochastic gradient complexity. SCG++ thus enjoys optimality in terms of both its approximation guarantee and its stochastic oracle queries. Our novel variance reduction method naturally extends to stochastic convex minimization. More precisely, we develop Stochastic Frank-Wolfe++ (SFW++), which achieves an $\epsilon$-approximate optimum with only $O(1/\epsilon)$ calls to the linear optimization oracle while using $O(1/\epsilon^2)$ stochastic oracle queries in total. Therefore, SFW++ is the first efficient projection-free algorithm that achieves the optimum complexity $O(1/\epsilon^2)$ in terms of stochastic oracle queries.

* Submitted to COLT 2019 

Direct Runge-Kutta Discretization Achieves Acceleration

Sep 14, 2018
Jingzhao Zhang, Aryan Mokhtari, Suvrit Sra, Ali Jadbabaie

We study gradient-based optimization methods obtained by directly discretizing a second-order ordinary differential equation (ODE) related to the continuous limit of Nesterov's accelerated gradient method. When the function is smooth enough, we show that acceleration can be achieved by a stable discretization of this ODE using standard Runge-Kutta integrators. Specifically, we prove that under Lipschitz-gradient, convexity and order-$(s+2)$ differentiability assumptions, the sequence of iterates generated by discretizing the proposed second-order ODE converges to the optimal solution at a rate of $\mathcal{O}({N^{-2\frac{s}{s+1}}})$, where $s$ is the order of the Runge-Kutta numerical integrator. Furthermore, we introduce a new local flatness condition on the objective, under which rates even faster than $\mathcal{O}(N^{-2})$ can be achieved with low-order integrators and only gradient information. Notably, this flatness condition is satisfied by several standard loss functions used in machine learning. We provide numerical experiments that verify the theoretical rates predicted by our results.
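
The sketch below discretizes the well-known continuous limit of Nesterov's method, $\ddot{x} + (3/t)\dot{x} + \nabla f(x) = 0$, with a classical fourth-order Runge-Kutta integrator on a toy quadratic. The specific ODE, step size, and objective here are illustrative; the paper's ODE, assumptions, and discretization parameters may be set up differently.

```python
# Sketch: discretizing a second-order ODE associated with accelerated gradient methods
# using a classical Runge-Kutta (RK4) integrator; problem and constants are illustrative.
import numpy as np

Q = np.diag([1.0, 10.0])

def grad_f(x):                               # gradient of a simple convex quadratic
    return Q @ x

def ode_rhs(t, state):                       # first-order system for (x, v) with v = x'
    x, v = state
    return np.array([v, -(3.0 / t) * v - grad_f(x)])

def rk4_step(t, state, h):                   # classical fourth-order Runge-Kutta step
    k1 = ode_rhs(t, state)
    k2 = ode_rhs(t + h / 2, state + h / 2 * k1)
    k3 = ode_rhs(t + h / 2, state + h / 2 * k2)
    k4 = ode_rhs(t + h, state + h * k3)
    return state + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([np.array([1.0, 1.0]), np.zeros(2)])   # initial position and velocity
t, h = 1.0, 0.05                             # start at t = 1 to avoid the 3/t singularity
for _ in range(400):
    state = rk4_step(t, state, h)
    t += h
```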

* 24 pages, 4 figures 

Quantized Decentralized Consensus Optimization

Jun 29, 2018
Amirhossein Reisizadeh, Aryan Mokhtari, Hamed Hassani, Ramtin Pedarsani

We consider the problem of decentralized consensus optimization, where the sum of $n$ convex functions is minimized over $n$ distributed agents that form a connected network. In particular, we consider the case in which the local decision variables communicated among nodes are quantized in order to alleviate the communication bottleneck in distributed optimization. We propose the Quantized Decentralized Gradient Descent (QDGD) algorithm, in which nodes update their local decision variables by combining the quantized information received from their neighbors with their local information. We prove that under standard strong convexity and smoothness assumptions for the objective function, QDGD achieves a vanishing mean solution error. To the best of our knowledge, this is the first algorithm that achieves a vanishing consensus error in the presence of quantization noise. Moreover, we provide simulation results that show tight agreement between our derived theoretical convergence rate and the experimental results.
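
A loose sketch of the overall scheme: each node on a ring broadcasts a stochastically quantized copy of its variable, mixes it with its neighbors' quantized copies through a doubly stochastic matrix, and takes a local gradient step. The quantizer, mixing weights, and step sizes below are illustrative and not the exact QDGD update rule.

```python
# Loose sketch of decentralized gradient descent with quantized exchanges on a ring
# network; mixing weights, quantizer, and step sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(10)
n_nodes, d = 6, 5
A = rng.standard_normal((n_nodes, 20, d))             # local data for each node
b = np.einsum('nij,j->ni', A, rng.standard_normal(d))

def local_grad(i, x):                                 # gradient of node i's least-squares loss
    return A[i].T @ (A[i] @ x - b[i]) / len(b[i])

def quantize(x, step=0.05):                           # unbiased stochastic rounding to a grid
    low = np.floor(x / step) * step
    return low + step * (rng.random(x.shape) < (x - low) / step)

X = np.zeros((n_nodes, d))                            # one local decision variable per node
W = np.zeros((n_nodes, n_nodes))                      # doubly stochastic mixing matrix (ring)
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 0.25

for t in range(500):
    Q = np.array([quantize(X[i]) for i in range(n_nodes)])   # what each node broadcasts
    X_new = np.zeros_like(X)
    for i in range(n_nodes):
        mix = W[i, i] * X[i] + sum(W[i, j] * Q[j] for j in range(n_nodes) if j != i)
        X_new[i] = mix - 0.02 * local_grad(i, X[i])   # mix quantized neighbors, then step
    X = X_new
```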

