* PhD thesis. Source code available at https://github.com/glouppe/phd-thesis


Gradient Energy Matching for Distributed Asynchronous Gradient Descent

May 22, 2018

Joeri Hermans, Gilles Louppe


Likelihood-free MCMC with Approximate Likelihood Ratios

Mar 10, 2019

Joeri Hermans, Volodimir Begy, Gilles Louppe


* 13 pages, 10 figures


Recurrent machines for likelihood-free inference

Nov 30, 2018

Arthur Pesah, Antoine Wehenkel, Gilles Louppe


* NeurIPS 2018 Workshop on Meta-learning (MetaLearn 2018)


Adversarial Variational Optimization of Non-Differentiable Simulators

Oct 05, 2018

Gilles Louppe, Joeri Hermans, Kyle Cranmer


Learning to Pivot with Adversarial Networks

Several techniques for domain adaptation have been proposed to account for differences between the distributions of the training and test data. The majority of this work focuses on a binary domain label. Similar problems occur in a scientific context where there may be a continuous family of plausible data generation processes associated with the presence of systematic uncertainties. Robust inference is possible if it is based on a pivot -- a quantity whose distribution does not depend on the unknown values of the nuisance parameters that parametrize this family of data generation processes. In this work, we introduce and derive theoretical results for a training procedure based on adversarial networks for enforcing the pivotal property (or, equivalently, fairness with respect to continuous attributes) on a predictive model. The method includes a hyperparameter to control the trade-off between accuracy and robustness. We demonstrate the effectiveness of this approach with a toy example and examples from particle physics.

* v1: Original submission. v2: Fixed references. v3: version submitted to NIPS'2017. Code available at https://github.com/glouppe/paper-learning-to-pivot

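The adversarial training procedure described in the abstract can be illustrated with a minimal numpy sketch: a linear classifier and a linear adversary alternate gradient steps, with the classifier minimizing its own loss minus `lam` times the adversary's loss. The toy data, model forms, and constants here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the label y drives the feature x, but so does a continuous
# nuisance z, so an unconstrained classifier's output depends on z.
n = 2000
z = rng.normal(0.0, 1.0, n)                 # continuous nuisance parameter
y = (rng.random(n) < 0.5).astype(float)     # binary class label
x = y + 0.5 * z + rng.normal(0.0, 0.5, n)   # observed feature

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def train_pivot(lam, iters=500, lr=0.05):
    """Pivot training: the classifier f minimizes L_f - lam * L_r,
    where the adversary r tries to predict the nuisance z from f's
    output. lam controls the accuracy/robustness trade-off."""
    w, b = 0.0, 0.0   # classifier: p = sigmoid(w * x + b)
    a, c = 0.0, 0.0   # adversary:  z_hat = a * p + c
    for _ in range(iters):
        p = sigmoid(w * x + b)
        z_hat = a * p + c
        # Adversary step: gradient descent on its MSE for predicting z
        a -= lr * np.mean(2 * (z_hat - z) * p)
        c -= lr * np.mean(2 * (z_hat - z))
        # Classifier step: gradient descent on L_f - lam * L_r
        grad_s = (p - y) - lam * 2 * (z_hat - z) * a * p * (1 - p)
        w -= lr * np.mean(grad_s * x)
        b -= lr * np.mean(grad_s)
    return p

# A larger lam should weaken the (linear) dependence of f's output on z.
p_plain = train_pivot(lam=0.0)
p_pivot = train_pivot(lam=10.0)
```

With `lam=0` the classifier exploits the nuisance-driven part of `x`; with a large `lam` the adversary's feedback drives its output toward (approximate) independence of `z`, at some cost in accuracy.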

Approximating Likelihood Ratios with Calibrated Discriminative Classifiers

Mar 18, 2016

Kyle Cranmer, Juan Pavez, Gilles Louppe


* 35 pages, 5 figures


Approximating two value functions instead of one: towards characterizing a new family of Deep Reinforcement Learning algorithms

Sep 01, 2019

Matthia Sabatelli, Gilles Louppe, Pierre Geurts, Marco A. Wiering

This paper takes a step towards characterizing a new family of \textit{model-free} Deep Reinforcement Learning (DRL) algorithms that jointly learn an approximation of the state-value function ($V$) alongside an approximation of the state-action value function ($Q$). Our analysis starts with a thorough study of the Deep Quality-Value (DQV) learning algorithm, a DRL algorithm which has been shown to outperform popular techniques such as Deep Q-Learning (DQN) and Double Deep Q-Learning (DDQN) \cite{sabatelli2018deep}. To investigate why DQV's learning dynamics allow this algorithm to perform so well, we formulate a set of research questions that help us characterize this new family of DRL algorithms. Among our results, we present specific cases in which DQV's performance can degrade and introduce a novel \textit{off-policy} DRL algorithm, called DQV-Max, which can outperform DQV. We then study the behavior of the $V$ and $Q$ functions learned by DQV and DQV-Max and show that both algorithms may perform so well on several DRL test-beds because they are less prone to the overestimation bias of the $Q$ function.
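As a rough illustration of jointly learning $V$ and $Q$, the sketch below gives tabular versions of the DQV and DQV-Max update rules as we read them from the abstract (both functions regress toward TD targets; DQV-Max builds $V$'s target from $\max_a Q$). The tabular form, step sizes, and the tiny terminal-transition check are illustrative assumptions, not the paper's deep-network setup.

```python
import numpy as np

def dqv_step(V, Q, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
    """One tabular DQV update: both V and Q regress toward a TD
    target built from the state-value estimate V(s')."""
    target = r + (0.0 if done else gamma * V[s_next])
    V[s] += alpha * (target - V[s])
    Q[s, a] += alpha * (target - Q[s, a])

def dqv_max_step(V, Q, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
    """Tabular DQV-Max variant: V regresses toward an off-policy
    target using max_a Q(s', a), while Q keeps the DQV target."""
    v_target = r + (0.0 if done else gamma * np.max(Q[s_next]))
    q_target = r + (0.0 if done else gamma * V[s_next])
    V[s] += alpha * (v_target - V[s])
    Q[s, a] += alpha * (q_target - Q[s, a])

# Tiny check: a single transition to a terminal state with reward 1
# should drive both estimates toward 1.
V = np.zeros(1)
Q = np.zeros((1, 1))
for _ in range(200):
    dqv_step(V, Q, s=0, a=0, r=1.0, s_next=0, done=True)
```

Because each function supplies (part of) the other's target, errors in $Q$ do not feed back into $Q$'s own bootstrap, which is one intuition for the reduced overestimation bias the abstract mentions.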


Deep Quality-Value (DQV) Learning

Oct 10, 2018

Matthia Sabatelli, Gilles Louppe, Pierre Geurts, Marco A. Wiering


Mining gold from implicit models to improve likelihood-free inference

Oct 09, 2018

Johann Brehmer, Gilles Louppe, Juan Pavez, Kyle Cranmer


* Code available at https://github.com/johannbrehmer/simulator-mining-example . v2: Fixed typos. v3: Expanded discussion, added Lotka-Volterra example


A Guide to Constraining Effective Field Theories with Machine Learning

Jul 26, 2018

Johann Brehmer, Kyle Cranmer, Gilles Louppe, Juan Pavez


* Phys. Rev. D 98, 052004 (2018)

* See also the companion publication "Constraining Effective Field Theories with Machine Learning" at arXiv:1805.00013, a brief introduction presenting the key ideas. The code for these studies is available at https://github.com/johannbrehmer/higgs_inference . v2: Added references. v3: Improved description of algorithms, added references. v4: Clarified text, added references


Constraining Effective Field Theories with Machine Learning

Jul 26, 2018

Johann Brehmer, Kyle Cranmer, Gilles Louppe, Juan Pavez


* Phys. Rev. Lett. 121, 111801 (2018)

* See also the companion publication "A Guide to Constraining Effective Field Theories with Machine Learning" at arXiv:1805.00020, an in-depth analysis of machine learning techniques for LHC measurements. The code for these studies is available at https://github.com/johannbrehmer/higgs_inference . v2: New schematic figure explaining the new algorithms, added references. v3, v4: Added references


QCD-Aware Recursive Neural Networks for Jet Physics

Jul 13, 2018

Gilles Louppe, Kyunghyun Cho, Cyril Becot, Kyle Cranmer


* 16 pages, 5 figures, 3 appendices, corresponding code at https://github.com/glouppe/recnn


Ethnicity sensitive author disambiguation using semi-supervised learning

May 04, 2016

Gilles Louppe, Hussein Al-Natsheh, Mateusz Susik, Eamonn Maguire


Likelihood-free inference with an improved cross-entropy estimator

Aug 02, 2018

Markus Stoye, Johann Brehmer, Gilles Louppe, Juan Pavez, Kyle Cranmer


* 8 pages, 3 figures


Random Subspace with Trees for Feature Selection Under Memory Constraints

Sep 06, 2017

Antonio Sutera, Célia Châtel, Gilles Louppe, Louis Wehenkel, Pierre Geurts


Mining for Dark Matter Substructure: Inferring subhalo population properties from strong lenses with machine learning

Sep 04, 2019

Johann Brehmer, Siddharth Mishra-Sharma, Joeri Hermans, Gilles Louppe, Kyle Cranmer

The subtle and unique imprint of dark matter substructure on extended arcs in strong lensing systems contains a wealth of information about the properties and distribution of dark matter on small scales and, consequently, about the underlying particle physics. However, teasing out this effect poses a significant challenge, since the likelihood function for realistic simulations of population-level parameters is intractable. We apply recently developed simulation-based inference techniques to the problem of substructure inference in galaxy-galaxy strong lenses. By leveraging additional information extracted from the simulator, neural networks are efficiently trained to estimate likelihood ratios associated with population-level parameters characterizing substructure. Through a proof-of-principle application to simulated data, we show that these methods can provide an efficient and principled way to simultaneously analyze an ensemble of strong lenses, and can be used to mine the large sample of lensing images deliverable by near-future surveys for signatures of dark matter substructure.

* 22 pages, 6 figures, code available at https://github.com/smsharma/mining-for-substructure-lens
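The core likelihood-ratio trick behind this kind of simulation-based inference can be sketched in a toy numpy example: a classifier trained to distinguish samples drawn under two parameter points converges to a decision function whose logit is the log likelihood ratio. Here two Gaussians stand in for the lensing simulator at parameters $\theta_0$ and $\theta_1$; the model, sample sizes, and learning rate are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Samples from the "simulator" at two parameter points.
x0 = rng.normal(0.0, 1.0, 20000)   # x ~ p(x | theta_0) = N(0, 1)
x1 = rng.normal(0.5, 1.0, 20000)   # x ~ p(x | theta_1) = N(0.5, 1)
x = np.concatenate([x0, x1])
y = np.concatenate([np.zeros_like(x0), np.ones_like(x1)])

# Logistic classifier s(x) = sigmoid(w * x + b), trained by gradient
# descent on the cross-entropy between the two sample sets.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= lr * np.mean((p - y) * x)
    b -= lr * np.mean(p - y)

def log_ratio(x_new):
    # Likelihood-ratio trick: logit of the optimal classifier equals
    # log p(x | theta_1) / p(x | theta_0).
    return w * x_new + b
```

For these two Gaussians the true log ratio is $0.5x - 0.125$, so the learned `w` and `b` should land near 0.5 and -0.125; the paper's contribution is doing this with neural networks and extra simulator information at the population level, not this toy.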


Effective LHC measurements with matrix elements and machine learning

Jun 04, 2019

Johann Brehmer, Kyle Cranmer, Irina Espejo, Felix Kling, Gilles Louppe, Juan Pavez


* Keynote at the 19th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2019)


Context-dependent feature analysis with random forests

May 12, 2016

Antonio Sutera, Gilles Louppe, Vân Anh Huynh-Thu, Louis Wehenkel, Pierre Geurts


* Accepted for presentation at UAI 2016
