Learning Robust Rewards with Adversarial Inverse Reinforcement Learning

Aug 13, 2018

Justin Fu, Katie Luo, Sergey Levine

**Click to Read Paper and Get Code**

One-Shot Learning of Manipulation Skills with Online Dynamics Adaptation and Neural Network Priors

Aug 11, 2016

Justin Fu, Sergey Levine, Pieter Abbeel

**Click to Read Paper and Get Code**

Diagnosing Bottlenecks in Deep Q-learning Algorithms

Feb 26, 2019

Justin Fu, Aviral Kumar, Matthew Soh, Sergey Levine

**Click to Read Paper and Get Code**

From Language to Goals: Inverse Reinforcement Learning for Vision-Based Instruction Following

Feb 20, 2019

Justin Fu, Anoop Korattikara, Sergey Levine, Sergio Guadarrama

**Click to Read Paper and Get Code**

EX2: Exploration with Exemplar Models for Deep Reinforcement Learning

May 27, 2017

Justin Fu, John D. Co-Reyes, Sergey Levine

**Click to Read Paper and Get Code**

Variational Inverse Control with Events: A General Framework for Data-Driven Reward Definition

May 31, 2018

Justin Fu, Avi Singh, Dibya Ghosh, Larry Yang, Sergey Levine

* First two authors contributed equally. Website: https://sites.google.com/view/inverse-event

**Click to Read Paper and Get Code**

Generalizing Skills with Semi-Supervised Reinforcement Learning

Mar 09, 2017

Chelsea Finn, Tianhe Yu, Justin Fu, Pieter Abbeel, Sergey Levine

* ICLR 2017

**Click to Read Paper and Get Code**

Learning to Navigate: Exploiting Deep Networks to Inform Sample-Based Planning During Vision-Based Navigation

Jan 16, 2018

Justin S. Smith, Jin-Ha Hwang, Fu-Jen Chu, Patricio A. Vela

* 7 pages, 6 figures

**Click to Read Paper and Get Code**

Maximum Likelihood Learning With Arbitrary Treewidth via Fast-Mixing Parameter Sets

Oct 30, 2015

Justin Domke

* Advances in Neural Information Processing Systems 2015

**Click to Read Paper and Get Code**

* Advances in Neural Information Processing Systems 2013

**Click to Read Paper and Get Code**

Graphical models trained using maximum likelihood are a common tool for probabilistic inference of marginal distributions. However, this approach runs into difficulties when either the inference process or the model is approximate. In this paper, the inference process is first defined as the minimization of a convex function, inspired by free-energy approximations. Learning is then done directly in terms of the performance of the inference process at univariate marginal prediction. The main novelty is that this is a direct minimization of empirical risk, where the risk measures the accuracy of the predicted marginals.
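As a toy illustration of this learning setup (not the paper's algorithm), the sketch below runs approximate inference on a two-variable binary model and scores it by the squared error of a predicted univariate marginal. Naive mean-field fixed-point updates stand in for the convex free-energy surrogate used in the paper; the model parameters are illustrative.

```python
import numpy as np

# Toy two-variable binary model p(x1, x2) ∝ exp(t1*x1 + t2*x2 + t12*x1*x2), x in {-1, +1}.
t1, t2, t12 = 0.5, -0.3, 0.8
states = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
w = np.array([np.exp(t1 * a + t2 * b + t12 * a * b) for a, b in states])
p = w / w.sum()
true_m1 = sum(pi * a for pi, (a, b) in zip(p, states))  # exact marginal E[x1]

# Approximate inference: mean-field fixed-point updates, which locally minimize
# a free-energy approximation (the paper uses a convex surrogate instead).
m1 = m2 = 0.0
for _ in range(100):
    m1 = np.tanh(t1 + t12 * m2)
    m2 = np.tanh(t2 + t12 * m1)

# Empirical risk: accuracy of the predicted univariate marginal.
risk = (m1 - true_m1) ** 2
```

Learning in this style would adjust `t1`, `t2`, `t12` to reduce this risk directly, rather than to maximize likelihood.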

* Appears in Proceedings of the Twenty-Fourth Conference on Uncertainty in Artificial Intelligence (UAI2008)

**Click to Read Paper and Get Code**

DGM: A deep learning algorithm for solving partial differential equations

Sep 05, 2018

Justin Sirignano, Konstantinos Spiliopoulos

High-dimensional PDEs have been a longstanding computational challenge. We propose to solve high-dimensional PDEs by approximating the solution with a deep neural network which is trained to satisfy the differential operator, initial condition, and boundary conditions. Our algorithm is meshfree, which is key since meshes become infeasible in higher dimensions. Instead of forming a mesh, the neural network is trained on batches of randomly sampled time and space points. The algorithm is tested on a class of high-dimensional free boundary PDEs, which we are able to accurately solve in up to $200$ dimensions. The algorithm is also tested on a high-dimensional Hamilton-Jacobi-Bellman PDE and Burgers' equation. The deep learning algorithm approximates the general solution to the Burgers' equation for a continuum of different boundary conditions and physical conditions (which can be viewed as a high-dimensional space). We call the algorithm a "Deep Galerkin Method (DGM)" since it is similar in spirit to Galerkin methods, with the solution approximated by a neural network instead of a linear combination of basis functions. In addition, we prove a theorem regarding the approximation power of neural networks for a class of quasilinear parabolic PDEs.
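A minimal meshfree sketch of the idea (not the paper's implementation): train a tiny network to satisfy the toy ODE u'(x) = -u(x), u(0) = 1, on freshly sampled random points. The trial form, network size, finite-difference derivative, and numerical training gradient are all simplifying assumptions; in practice one would use automatic differentiation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ODE u'(x) = -u(x), u(0) = 1 on [0, 1]; exact solution u(x) = exp(-x).
# The trial form u(x) = 1 + x * net(x) hard-codes the initial condition.
H = 10                                          # hidden units (illustrative)
params = rng.normal(scale=0.5, size=3 * H + 1)  # W, b, v, c packed into one vector

def net(params, x):
    W, b, v, c = params[:H], params[H:2 * H], params[2 * H:3 * H], params[-1]
    return np.tanh(np.outer(x, W) + b) @ v + c

def u(params, x):
    return 1.0 + x * net(params, x)

def loss(params, x, eps=1e-4):
    # Mean squared residual of u' + u = 0, with u' from central differences.
    du = (u(params, x + eps) - u(params, x - eps)) / (2 * eps)
    return np.mean((du + u(params, x)) ** 2)

lr = 0.02
for _ in range(800):
    x = rng.uniform(0.0, 1.0, size=32)   # fresh random collocation points (meshfree)
    g = np.zeros_like(params)
    for i in range(len(params)):         # numerical gradient (autodiff in practice)
        d = np.zeros_like(params)
        d[i] = 1e-5
        g[i] = (loss(params + d, x) - loss(params - d, x)) / 2e-5
    params -= lr * g

xs = np.linspace(0.0, 1.0, 11)
err = np.max(np.abs(u(params, xs) - np.exp(-xs)))
```

The same structure — sample points at random, penalize the residual of the differential operator plus boundary/initial terms — is what lets the approach scale past the dimensions where meshes become infeasible.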

* Deep learning, machine learning, partial differential equations

**Click to Read Paper and Get Code**

Solving Equations of Random Convex Functions via Anchored Regression

Aug 13, 2018

Sohail Bahmani, Justin Romberg

**Click to Read Paper and Get Code**

Stochastic Gradient Descent in Continuous Time: A Central Limit Theorem

Nov 02, 2017

Justin Sirignano, Konstantinos Spiliopoulos

Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for non-convex objective functions as well. An L$^p$ convergence rate is also proven for the algorithm in the strongly convex case.
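A rough Euler–Maruyama sketch of a parameter update driven by a continuous data stream, assuming an Ornstein–Uhlenbeck data process and a decreasing learning rate; all constants are illustrative, and the discretization is a stand-in for the continuous-time update analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data stream: an Ornstein-Uhlenbeck process dX = -theta* X dt + dW (illustrative).
theta_true = 1.0
dt, T = 0.001, 50.0

X, theta, t = 1.0, 3.0, 0.0
for _ in range(int(T / dt)):
    alpha = 5.0 / (1.0 + t)                 # decreasing learning rate
    dW = rng.normal(scale=np.sqrt(dt))
    dX = -theta_true * X * dt + dW          # one increment of the observed stream
    f = -theta * X                          # model drift f(theta; X)
    theta += alpha * (-X) * (dX - f * dt)   # Euler step of dtheta = alpha * grad_f * (dX - f dt)
    X += dX
    t += dt
```

The estimate drifts toward the true parameter as data streams in; the central limit theorem in the paper characterizes the asymptotic fluctuations of exactly this kind of continuous-time update.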

**Click to Read Paper and Get Code**

Stochastic Gradient Descent in Continuous Time

Oct 29, 2017

Justin Sirignano, Konstantinos Spiliopoulos

**Click to Read Paper and Get Code**

AI Programmer: Autonomously Creating Software Programs Using Genetic Algorithms

Sep 17, 2017

Kory Becker, Justin Gottschlich

**Click to Read Paper and Get Code**

Phase Retrieval Meets Statistical Learning Theory: A Flexible Convex Relaxation

Mar 16, 2017

Sohail Bahmani, Justin Romberg

* Accepted at AISTATS 2017. Extended the discussion of related work, added a few more references, and clarified some of the statements and notation

**Click to Read Paper and Get Code**