Scaling the Scattering Transform: Deep Hybrid Networks

Apr 04, 2017

Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko

Testing for Differences in Gaussian Graphical Models: Applications to Brain Connectivity

Nov 18, 2016

Eugene Belilovsky, Gaël Varoquaux, Matthew B. Blaschko

Functional brain networks are well described and estimated from data with Gaussian Graphical Models (GGMs), e.g. using sparse inverse covariance estimators. Comparing functional connectivity of subjects in two populations calls for comparing these estimated GGMs. Our goal is to identify differences in GGMs known to have similar structure. We characterize the uncertainty of differences with confidence intervals obtained using a parametric distribution on parameters of a sparse estimator. Sparse penalties enable statistical guarantees and interpretable models even in high-dimensional and low-sample settings. Characterizing the distributions of sparse models is inherently challenging, as the penalties produce a biased estimator. Recent work invokes the sparsity assumptions to effectively remove the bias from a sparse estimator such as the lasso. These distributions can be used to give confidence intervals on edges in GGMs, and by extension on their differences. However, in the case of comparing GGMs, these estimators do not make use of any assumed joint structure among the GGMs. Inspired by priors from brain functional connectivity, we derive the distribution of parameter differences under a joint penalty when parameters are known to be sparse in the difference. This leads us to introduce the debiased multi-task fused lasso, whose distribution can be characterized in an efficient manner. We then show how the debiased lasso and multi-task fused lasso can be used to obtain confidence intervals on edge differences in GGMs. We validate the proposed techniques on a set of synthetic examples as well as a neuroimaging dataset created for the study of autism.
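The debiasing idea the abstract builds on can be illustrated on an ordinary lasso regression. The sketch below is a minimal NumPy illustration, not the paper's multi-task fused lasso: a coordinate-descent lasso fit is corrected by the one-step debiasing formula b + Θ Xᵀ(y − Xb)/n, where Θ estimates the inverse design covariance, and the approximately Gaussian debiased coordinates then yield per-coefficient confidence intervals. All function names and the toy problem are illustrative.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=200):
    # Coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1.
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]   # residual with feature j removed
            rho = X[:, j] @ r_j / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

def debiased_lasso(X, y, lam):
    # One-step debiasing: b_deb = b + Theta X^T (y - X b) / n,
    # with Theta an estimate of the inverse design covariance.
    n, p = X.shape
    b = lasso_cd(X, y, lam)
    Sigma = X.T @ X / n
    Theta = np.linalg.inv(Sigma + 1e-3 * np.eye(p))  # small ridge for stability
    b_deb = b + Theta @ X.T @ (y - X @ b) / n
    sigma2 = np.mean((y - X @ b) ** 2)               # residual noise estimate
    se = np.sqrt(sigma2 * np.diag(Theta @ Sigma @ Theta.T) / n)
    return b_deb, se

# Toy problem: one true nonzero coefficient among ten.
rng = np.random.default_rng(0)
n, p = 400, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[0] = 2.0
y = X @ beta_true + rng.standard_normal(n)

b_deb, se = debiased_lasso(X, y, lam=0.1)
lo, hi = b_deb - 1.96 * se, b_deb + 1.96 * se  # per-coordinate 95% intervals
```

The lasso estimate alone is shrunk toward zero by the penalty; the correction term removes most of that bias, which is what makes a Gaussian approximation, and hence an interval, usable.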

Compressing the Input for CNNs with the First-Order Scattering Transform

Sep 27, 2018

Edouard Oyallon, Eugene Belilovsky, Sergey Zagoruyko, Michal Valko

Learning to Discover Sparse Graphical Models

Aug 03, 2017

Eugene Belilovsky, Kyle Kastner, Gaël Varoquaux, Matthew Blaschko

We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task often requiring the formulation of priors and sophisticated inference procedures. Popular methods rely on estimating a penalized maximum likelihood of the precision matrix. However, in these approaches structure recovery is an indirect consequence of the data-fit term, the penalty can be difficult to adapt for domain-specific knowledge, and the inference is computationally demanding. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structural properties. We propose here to leverage this latter source of information as training data to learn a function, parametrized by a neural network, that maps empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery, rather than maximizing data likelihood. Applying this framework, we find that our learnable graph-discovery method trained on synthetic data generalizes well, identifying relevant edges in both synthetic and real data completely unknown at training time. We find that on genetics, brain imaging, and simulation data we obtain performance generally superior to analytical methods.
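The training pipeline the abstract describes can be sketched end to end: sample random sparse precision matrices, draw data from each, and fit a classifier from covariance-derived edge features to the true adjacency. The sketch below uses a single logistic layer as a stand-in for the paper's deeper network; the feature choice, graph model, and all names are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_task(p=8, n=200, edge_prob=0.2):
    # Random sparse precision matrix (diagonally dominant, hence PD),
    # data sampled from it, and its empirical covariance.
    A = (rng.random((p, p)) < edge_prob).astype(float)
    A = np.triu(A, 1)
    A = A + A.T
    K = -0.3 * A + np.diag(0.3 * A.sum(axis=1) + 1.0)
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(K), size=n)
    return np.cov(X, rowvar=False), A

def edge_features(C):
    # Per-edge features: correlation, inverse-correlation entry, bias term.
    d = np.sqrt(np.diag(C))
    R = C / np.outer(d, d)
    P = np.linalg.inv(R + 1e-3 * np.eye(len(R)))
    iu = np.triu_indices_from(R, 1)
    return np.stack([R[iu], P[iu], np.ones(len(iu[0]))], axis=1)

# Build a training set from many synthetic graphs.
feats, labels = [], []
for _ in range(300):
    C, A = sample_task()
    feats.append(edge_features(C))
    labels.append(A[np.triu_indices_from(A, 1)])
X_tr, y_tr = np.concatenate(feats), np.concatenate(labels)

# Train a logistic edge classifier by gradient descent
# (a one-layer stand-in for the paper's network).
w = np.zeros(X_tr.shape[1])
for _ in range(500):
    p_hat = 1 / (1 + np.exp(-X_tr @ w))
    w -= 0.5 * X_tr.T @ (p_hat - y_tr) / len(y_tr)

# Apply to a held-out synthetic problem: predicted edge set.
C_te, A_te = sample_task()
pred = (1 / (1 + np.exp(-edge_features(C_te) @ w)) > 0.5).astype(float)
```

The key point is the supervision signal: the true adjacency is known for every synthetic task, so edge recovery is trained directly rather than falling out of a likelihood fit.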

Blindfold Baselines for Embodied QA

Nov 12, 2018

Ankesh Anand, Eugene Belilovsky, Kyle Kastner, Hugo Larochelle, Aaron Courville

A Test of Relative Similarity For Model Selection in Generative Models

Feb 15, 2016

Wacha Bounliphone, Eugene Belilovsky, Matthew B. Blaschko, Ioannis Antonoglou, Arthur Gretton

Fast Non-Parametric Tests of Relative Dependency and Similarity

Nov 17, 2016

Wacha Bounliphone, Eugene Belilovsky, Arthur Tenenhaus, Ioannis Antonoglou, Arthur Gretton, Matthew B. Blaschko

Scattering Networks for Hybrid Representation Learning

Sep 17, 2018

Edouard Oyallon, Sergey Zagoruyko, Gabriel Huang, Nikos Komodakis, Simon Lacoste-Julien, Matthew Blaschko, Eugene Belilovsky
