Models, code, and papers for "George E":

Sampling-based Roadmap Planners are Probably Near-Optimal after Finite Computation

Apr 08, 2014
Andrew Dobson, George V. Moustakides, Kostas E. Bekris

Sampling-based motion planners have proven to be efficient solutions to a variety of high-dimensional, geometrically complex motion planning problems with applications in several domains. The traditional view of these approaches is that they solve challenges efficiently by giving up formal guarantees and instead attain asymptotic properties in terms of completeness and optimality. Recent work has argued, based on Monte Carlo experiments, that these approaches also exhibit desirable probabilistic properties in terms of completeness and optimality after finite computation. The current paper formalizes these guarantees. It proves a formal bound on the probability that solutions returned by asymptotically optimal roadmap-based methods (e.g., PRM*) are within a bound of the optimal path length I* with clearance ε after a finite iteration n. This bound has the form P(|I_n - I*| ≤ δ I*) ≤ P_success, where δ is an error term for the length of a path in the PRM* graph, I_n. This bound is proven for Euclidean spaces of general dimension and evaluated in simulation. A discussion of how this bound can be used in practice, as well as bounds for sparse roadmaps, is also provided.
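
A minimal Monte Carlo sketch of what such a bound means empirically (not the paper's experiments): it builds a PRM*-style roadmap in an obstacle-free unit square, so the clearance condition is trivially satisfied and I* is the straight-line distance, and estimates the empirical probability that the roadmap path is within a factor (1 + δ) of optimal. The connection-radius constant gamma, the sample counts, and δ below are arbitrary assumptions.

```python
# Empirical estimate of P(I_n <= (1 + delta) I*) for a PRM*-style roadmap
# in an obstacle-free unit square (a toy stand-in for the paper's simulations).
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

rng = np.random.default_rng(0)
d, n, delta, gamma, trials = 2, 500, 0.10, 2.0, 50          # all values are assumptions
start, goal = np.array([0.1, 0.1]), np.array([0.9, 0.9])
I_star = np.linalg.norm(goal - start)                       # optimum in free space

successes = 0
for _ in range(trials):
    V = np.vstack([start, goal, rng.random((n, d))])        # roadmap vertices
    r_n = gamma * (np.log(len(V)) / len(V)) ** (1.0 / d)    # PRM*-style connection radius
    pairs = cKDTree(V).query_pairs(r_n)                     # candidate edges within radius
    W = lil_matrix((len(V), len(V)))
    for i, j in pairs:
        W[i, j] = W[j, i] = np.linalg.norm(V[i] - V[j])
    dist = dijkstra(W.tocsr(), indices=0)                   # shortest paths from the start (index 0)
    I_n = dist[1]                                           # path length to the goal (index 1)
    successes += np.isfinite(I_n) and I_n <= (1 + delta) * I_star

print(f"empirical P(I_n <= (1 + delta) I*) ~ {successes / trials:.2f}")
```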

* 17 pages, 5 figures, 3 tables. Submitted to the Eleventh International Workshop on the Algorithmic Foundations of Robotics (WAFR 2014) 

Deep Learning for Real-time Gravitational Wave Detection and Parameter Estimation with LIGO Data

Dec 11, 2017
Daniel George, E. A. Huerta

The recent Nobel-prize-winning detections of gravitational waves from merging black holes and the subsequent detection of the collision of two neutron stars in coincidence with electromagnetic observations have inaugurated a new era of multimessenger astrophysics. To enhance the scope of this emergent science, we proposed the use of deep convolutional neural networks for the detection and characterization of gravitational wave signals in real time. This method, Deep Filtering, was initially demonstrated using simulated LIGO noise. In this article, we present the extension of Deep Filtering using real data from the first observing run of LIGO, for both detection and parameter estimation of gravitational waves from binary black hole mergers with continuous data streams from multiple LIGO detectors. We show for the first time that machine learning can detect and estimate the true parameters of a real GW event observed by LIGO. Our comparisons show that Deep Filtering is far more computationally efficient than matched-filtering, while achieving similar sensitivity and lower errors, allowing real-time processing of weak time-series signals in non-stationary non-Gaussian noise with minimal resources, and also enabling the detection of new classes of gravitational wave sources that may go unnoticed with existing detection algorithms. This approach is uniquely suited to enable coincident detection campaigns of gravitational waves and their multimessenger counterparts in real time.

* Camera-ready (final) version accepted to NIPS 2017 conference workshop on Deep Learning for Physical Sciences and selected for contributed talk. Also awarded 1st place at ACM SRC at SC17. Extended article: arXiv:1711.03121 

Deep Neural Networks to Enable Real-time Multimessenger Astrophysics

Nov 09, 2017
Daniel George, E. A. Huerta

Gravitational wave astronomy has set in motion a scientific revolution. To further enhance the science reach of this emergent field, there is a pressing need to increase the depth and speed of the gravitational wave algorithms that have enabled these groundbreaking discoveries. To contribute to this effort, we introduce Deep Filtering, a new highly scalable method for end-to-end time-series signal processing, based on a system of two deep convolutional neural networks, which we designed for classification and regression to rapidly detect and estimate parameters of signals in highly noisy time-series data streams. We demonstrate a novel training scheme with gradually increasing noise levels, and a transfer learning procedure between the two networks. We showcase the application of this method for the detection and parameter estimation of gravitational waves from binary black hole mergers. Our results indicate that Deep Filtering significantly outperforms conventional machine learning techniques and achieves performance comparable to matched-filtering while being several orders of magnitude faster, thus allowing real-time processing of raw big data with minimal resources. More importantly, Deep Filtering extends the range of gravitational wave signals that can be detected with ground-based gravitational wave detectors. This framework leverages recent advances in artificial intelligence algorithms and emerging hardware architectures, such as deep-learning-optimized GPUs, to facilitate real-time searches of gravitational wave sources and their electromagnetic and astro-particle counterparts.
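
A minimal PyTorch sketch of the two-network idea described above, not the authors' released implementation: one small 1D CNN classifies whether a signal is present and a second regresses two toy parameters, with training noise increased in stages. The layer sizes, the chirp-like toy waveforms, and the omission of the transfer-learning step between the two networks are all simplifying assumptions.

```python
import torch
import torch.nn as nn

def make_cnn(out_dim):
    # For input length 1024 the conv/pool stack leaves 32 channels x 61 samples = 1952 features.
    return nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=16), nn.ReLU(), nn.MaxPool1d(4),
        nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(), nn.MaxPool1d(4),
        nn.Flatten(), nn.Linear(32 * 61, 64), nn.ReLU(), nn.Linear(64, out_dim),
    )

classifier, regressor = make_cnn(2), make_cnn(2)          # signal/noise classes; 2 toy parameters
opt = torch.optim.Adam(list(classifier.parameters()) + list(regressor.parameters()), lr=1e-3)

def toy_batch(batch, length, noise_std):
    t = torch.linspace(0, 1, length)
    params = torch.rand(batch, 2) + 0.5                   # toy stand-ins for, e.g., component masses
    freq = 20 * params[:, :1] * (1 + 3 * t)               # toy chirp-like frequency evolution
    clean = params[:, 1:2] * torch.sin(2 * torch.pi * freq * t)
    has_signal = (torch.rand(batch) < 0.5).long()
    x = clean * has_signal[:, None] + noise_std * torch.randn(batch, length)
    return x[:, None, :], has_signal, params

for noise_std in (0.1, 0.5, 1.0, 2.0):                    # curriculum: gradually increase the noise
    for _ in range(200):
        x, y_cls, y_par = toy_batch(64, 1024, noise_std)
        loss = nn.functional.cross_entropy(classifier(x), y_cls)
        signal = y_cls == 1                               # regress parameters only on signal-bearing inputs
        if signal.any():
            loss = loss + nn.functional.mse_loss(regressor(x[signal]), y_par[signal])
        opt.zero_grad(); loss.backward(); opt.step()
```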

* Phys. Rev. D 97, 044039 (2018) 
* v3: Added results submitted to PRD on October 18, 2017; incorporated suggestions from the community 

Deep Learning for Real-time Gravitational Wave Detection and Parameter Estimation: Results with Advanced LIGO Data

Nov 08, 2017
Daniel George, E. A. Huerta

The recent Nobel-prize-winning detections of gravitational waves from merging black holes and the subsequent detection of the collision of two neutron stars in coincidence with electromagnetic observations have inaugurated a new era of multimessenger astrophysics. To enhance the scope of this emergent field of science, we pioneered the use of deep learning with convolutional neural networks that take time-series inputs for rapid detection and characterization of gravitational wave signals. This approach, Deep Filtering, was initially demonstrated using simulated LIGO noise. In this article, we present the extension of Deep Filtering using real data from LIGO, for both detection and parameter estimation of gravitational waves from binary black hole mergers using continuous data streams from multiple LIGO detectors. We demonstrate for the first time that machine learning can detect and estimate the true parameters of real events observed by LIGO. Our results show that Deep Filtering achieves similar sensitivities and lower errors compared to matched-filtering while being far more computationally efficient and more resilient to glitches, allowing real-time processing of weak time-series signals in non-stationary non-Gaussian noise with minimal resources, and also enabling the detection of new classes of gravitational wave sources that may go unnoticed with existing detection algorithms. This unified framework for data analysis is ideally suited to enable coincident detection campaigns of gravitational waves and their multimessenger counterparts in real time.

* Physics Letters B, 778 (2018) 64-70 
* 6 pages, 7 figures; First application of deep learning to real LIGO events; Includes direct comparison against matched-filtering 

On Empirical Comparisons of Optimizers for Deep Learning

Oct 11, 2019
Dami Choi, Christopher J. Shallue, Zachary Nado, Jaehoon Lee, Chris J. Maddison, George E. Dahl

Selecting an optimizer is a central step in the contemporary deep learning pipeline. In this paper, we demonstrate the sensitivity of optimizer comparisons to the metaparameter tuning protocol. Our findings suggest that the metaparameter search space may be the single most important factor explaining the rankings obtained by recent empirical comparisons in the literature. In fact, we show that these results can be contradicted when metaparameter search spaces are changed. As tuning effort grows without bound, more general optimizers should never underperform the ones they can approximate (i.e., Adam should never perform worse than momentum), but recent attempts to compare optimizers either assume these inclusion relationships are not practically relevant or restrict the metaparameters in ways that break the inclusions. In our experiments, we find that inclusion relationships between optimizers matter in practice and always predict optimizer comparisons. In particular, we find that the popular adaptive gradient methods never underperform momentum or gradient descent. We also report practical tips around tuning often-ignored metaparameters of adaptive gradient methods and raise concerns about fairly benchmarking optimizers for neural network training.
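
The comparison protocol the abstract emphasizes, tuning every optimizer over an explicit metaparameter search space before comparing, can be illustrated with a small sketch; the search spaces, budgets, toy model, and data below are arbitrary assumptions rather than the paper's experimental setup.

```python
import math
import random
import torch
import torch.nn as nn

def loguniform(lo, hi):
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def train_and_eval(opt_name, hp, steps=500):
    torch.manual_seed(0)                                   # same toy data and init for every trial
    X = torch.randn(1024, 20)
    y = X @ torch.randn(20, 1) + 0.1 * torch.randn(1024, 1)
    model = nn.Sequential(nn.Linear(20, 32), nn.Tanh(), nn.Linear(32, 1))
    if opt_name == "sgd_momentum":
        opt = torch.optim.SGD(model.parameters(), lr=hp["lr"], momentum=hp["momentum"])
    else:                                                  # Adam: the more general optimizer
        opt = torch.optim.Adam(model.parameters(), lr=hp["lr"],
                               betas=(hp["beta1"], hp["beta2"]), eps=hp["eps"])
    for _ in range(steps):
        loss = nn.functional.mse_loss(model(X), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return nn.functional.mse_loss(model(X), y).item()

search_spaces = {                                          # the search space itself is the protocol choice
    "sgd_momentum": lambda: {"lr": loguniform(1e-4, 1.0),
                             "momentum": random.uniform(0.0, 0.99)},
    "adam": lambda: {"lr": loguniform(1e-4, 1.0),
                     "beta1": random.uniform(0.0, 0.99),
                     "beta2": 1.0 - loguniform(1e-4, 1e-1),
                     "eps": loguniform(1e-10, 1.0)},       # eps is one of the often-ignored knobs
}
for name, sampler in search_spaces.items():
    results = [train_and_eval(name, sampler()) for _ in range(20)]
    print(name, "best final loss:", min(results))
```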


A unified spectra analysis workflow for the assessment of microbial contamination of ready to eat green salads: Comparative study and application of non-invasive sensors

Mar 26, 2019
Panagiotis Tsakanikas, Lemonia Christina Fengou, Evanthia Manthou, Alexandra Lianou, Efstathios Z. Panagou, George-John E. Nychas

The present study provides a comparative assessment of non-invasive sensors as means of estimating the microbial contamination and time-on-shelf (i.e. storage time) of leafy green vegetables, using a novel unified spectra analysis workflow. Two fresh ready-to-eat green salads, rocket and baby spinach, were used in this study to evaluate the efficiency and practical application of the presented workflow. The employed analysis workflow consisted of robust data normalization, powerful feature selection based on random forests regression, and selection of the number of partial least squares regression coefficients in the training process by estimating the knee-point on the explained variance plot. Training processes were based on microbiological and spectral data derived during storage of green salad samples at isothermal conditions (4, 8 and 12 °C), whereas testing was performed on data acquired during storage under dynamic temperature conditions (simulating real-life temperature fluctuations in the food supply chain). Given the increasing interest in the use of non-invasive sensors for food quality assessment in recent years, the unified spectra analysis workflow described herein, being based on the creation and use of limited-size feature sets, could be very useful in food-specific low-cost sensor development.
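
A compact scikit-learn sketch of the kind of pipeline described, not the authors' code: standard scaling stands in for the normalization step, random-forest importances rank wavelengths, and a crude "first negligible gain" rule stands in for the knee-point selection of the PLS model size; all thresholds and sizes are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.preprocessing import StandardScaler

def fit_spectra_model(X, y, n_features=20, max_components=15):
    """X: spectra (samples x wavelengths); y: microbial counts (e.g., log CFU/g)."""
    X = StandardScaler().fit_transform(X)                      # stand-in for the paper's normalization
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    keep = np.argsort(rf.feature_importances_)[-n_features:]   # top-ranked wavelengths
    Xs = X[:, keep]
    # explained variance (R^2) on the training data vs. number of PLS components
    r2 = [PLSRegression(n_components=k).fit(Xs, y).score(Xs, y)
          for k in range(1, max_components + 1)]
    gains = np.diff(r2, prepend=0.0)
    small = np.where(gains < 0.01)[0]                          # crude knee: first negligible gain
    k_knee = int(small[0]) + 1 if small.size else max_components
    return keep, k_knee, PLSRegression(n_components=k_knee).fit(Xs, y)
```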

* Computers and Electronics in Agriculture, 2018 

Glitch Classification and Clustering for LIGO with Deep Transfer Learning

Dec 11, 2017
Daniel George, Hongyu Shen, E. A. Huerta

The detection of gravitational waves with LIGO and Virgo requires a detailed understanding of the response of these instruments in the presence of environmental and instrumental noise. Of particular interest is the study of anomalous non-Gaussian noise transients known as glitches, since their high occurrence rate in LIGO/Virgo data can obscure or even mimic true gravitational wave signals. Therefore, successfully identifying and excising glitches is of utmost importance to detect and characterize gravitational waves. In this article, we present the first application of Deep Learning combined with Transfer Learning for glitch classification, using real data from LIGO's first discovery campaign labeled by Gravity Spy, showing that knowledge from pre-trained models for real-world object recognition can be transferred for classifying spectrograms of glitches. We demonstrate that this method enables the optimal use of very deep convolutional neural networks for glitch classification given small unbalanced training datasets, significantly reduces the training time, and achieves state-of-the-art accuracy above 98.8%. Once trained via transfer learning, we show that the networks can be truncated and used as feature extractors for unsupervised clustering to automatically group together new classes of glitches and anomalies. This novel capability is of critical importance to identify and remove new types of glitches which will occur as the LIGO/Virgo detectors gradually attain design sensitivity.

* Phys. Rev. D 97, 101501 (2018) 
* Camera-ready (final) paper accepted to NIPS 2017 conference workshop on Deep Learning for Physical Sciences. Extended article: arXiv:1706.07446 

Deep Transfer Learning: A new deep learning glitch classification method for advanced LIGO

Jun 22, 2017
Daniel George, Hongyu Shen, E. A. Huerta

The exquisite sensitivity of the advanced LIGO detectors has enabled the detection of multiple gravitational wave signals. The sophisticated design of these detectors mitigates the effect of most types of noise. However, advanced LIGO data streams are contaminated by numerous artifacts known as glitches: non-Gaussian noise transients with complex morphologies. Given their high rate of occurrence, glitches can lead to false coincident detections, obscure and even mimic gravitational wave signals. Therefore, successfully characterizing and removing glitches from advanced LIGO data is of utmost importance. Here, we present the first application of Deep Transfer Learning for glitch classification, showing that knowledge from deep learning algorithms trained for real-world object recognition can be transferred for classifying glitches in time-series based on their spectrogram images. Using the Gravity Spy dataset, containing hand-labeled, multi-duration spectrograms obtained from real LIGO data, we demonstrate that this method enables optimal use of very deep convolutional neural networks for classification given small training datasets, significantly reduces the time for training the networks, and achieves state-of-the-art accuracy above 98.8%, with perfect precision-recall on 8 out of 22 classes. Furthermore, new types of glitches can be classified accurately given few labeled examples with this technique. Once trained via transfer learning, we show that the convolutional neural networks can be truncated and used as excellent feature extractors for unsupervised clustering methods to identify new classes based on their morphology, without any labeled examples. Therefore, this provides a new framework for dynamic glitch classification for gravitational wave detectors, which are expected to encounter new types of noise as they undergo gradual improvements to attain design sensitivity.
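
A minimal PyTorch/torchvision sketch of the transfer-learning and truncation recipe described above (not the authors' setup, which used other pre-trained architectures): it assumes torchvision >= 0.13 for the `weights` argument, a hypothetical spectrogram DataLoader, and the 22 Gravity Spy classes.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 22                                      # Gravity Spy glitch classes
net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
net.fc = nn.Linear(net.fc.in_features, num_classes)   # swap the ImageNet head for a glitch head

optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# for spectrograms, labels in train_loader:           # hypothetical DataLoader of spectrogram images
#     loss = criterion(net(spectrograms), labels)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()

# After fine-tuning, truncate the network and reuse it as a feature extractor
# for unsupervised clustering of new glitch morphologies.
feature_extractor = nn.Sequential(*list(net.children())[:-1])  # drop the final fc layer
with torch.no_grad():
    feats = feature_extractor(torch.randn(8, 3, 224, 224)).flatten(1)
# feats (8 x 2048) could now be clustered, e.g. with k-means.
```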


Context Exploitation using Hierarchical Bayesian Models

May 30, 2018
Christopher A. George, Pranab Banerjee, Kendra E. Moore

We consider the problem of how to improve automatic target recognition by fusing the naive sensor-level classification decisions with "intuition," or context, in a mathematically principled way. This is a general approach that is compatible with many definitions of context, but for specificity, we consider context as co-occurrence in imagery. In particular, we consider images that contain multiple objects identified at various confidence levels. We learn the patterns of co-occurrence in each context, then use these patterns as hyper-parameters for a Hierarchical Bayesian Model. The result is that low-confidence sensor classification decisions can be dramatically improved by fusing those readings with context. We further use hyperpriors to address the case where multiple contexts may be appropriate. We also consider the Bayesian Network, an alternative to the Hierarchical Bayesian Model, which is computationally more efficient but assumes that context and sensor readings are uncorrelated.
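
A toy numpy illustration of the basic fusion idea, far simpler than the paper's Hierarchical Bayesian Model: co-occurrence counts for the current context act as a prior that sharpens a low-confidence sensor likelihood. The labels, counts, and likelihood values below are made up.

```python
import numpy as np

labels = ["car", "truck", "tank"]
# Hypothetical co-occurrence counts of each label within the current context,
# playing the role of hyper-parameters learned from training imagery.
cooccurrence_counts = np.array([40.0, 15.0, 2.0])
prior = cooccurrence_counts / cooccurrence_counts.sum()

# Low-confidence sensor output: p(sensor reading | true label)
sensor_likelihood = np.array([0.36, 0.30, 0.34])

posterior = prior * sensor_likelihood                  # Bayes' rule, up to normalization
posterior /= posterior.sum()
print(dict(zip(labels, np.round(posterior, 3))))       # context pulls mass toward "car"
```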

* Proceedings of the National Fire Control Symposium, February 2018 
* 4 pages; 3 figures; 5 tables 

Convolutional Networks for Image Processing by Coupled Oscillator Arrays

Sep 15, 2014
Dmitri E. Nikonov, Ian A. Young, George I. Bourianoff

A coupled oscillator array is shown to approximate convolutions with Gabor filters for image processing tasks. Pixelated image fragments and filter functions are converted to voltages, differenced, and input into a corresponding array of weakly coupled Voltage Controlled Oscillators (VCOs). This is referred to as Frequency Shift Keying (FSK). Upon synchronization of the array, the common node amplitude provides a metric for the degree of match between the image fragment and the filter function. The optimal oscillator parameters for synchronization are determined and favor a moderate value of the Q-factor.
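
For reference, the conventional-software version of the operation the oscillator array approximates, the degree of match between an image fragment and a Gabor filter, fits in a few lines of numpy; the filter parameters and the normalized-correlation match metric below are illustrative assumptions, not the circuit-level FSK scheme.

```python
import numpy as np

def gabor_kernel(size=16, sigma=4.0, theta=0.0, lam=8.0, psi=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam + psi)

def match(fragment, kernel):
    # normalized correlation as a simple "degree of match" metric
    f = fragment - fragment.mean()
    k = kernel - kernel.mean()
    return float((f * k).sum() / (np.linalg.norm(f) * np.linalg.norm(k) + 1e-12))

fragment = np.random.rand(16, 16)                      # stand-in for a pixelated image fragment
print(match(fragment, gabor_kernel(theta=np.pi / 4)))
```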

* 23 pages, 12 figures 

Multi-task Neural Networks for QSAR Predictions

Jun 04, 2014
George E. Dahl, Navdeep Jaitly, Ruslan Salakhutdinov

Although artificial neural networks have occasionally been used for Quantitative Structure-Activity/Property Relationship (QSAR/QSPR) studies in the past, the literature has of late been dominated by other machine learning techniques such as random forests. However, a variety of new neural net techniques along with successful applications in other domains have renewed interest in network approaches. In this work, inspired by the winning team's use of neural networks in a recent QSAR competition, we used an artificial neural network to learn a function that predicts activities of compounds for multiple assays at the same time. We conducted experiments leveraging recent methods for dealing with overfitting in neural networks as well as other tricks from the neural networks literature. We compared our methods to alternative methods reported to perform well on these tasks and found that our neural net methods provided superior performance.
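
A minimal PyTorch sketch of a multi-task QSAR network in the spirit described above, not the authors' exact architecture: a shared trunk over molecular descriptors with one output per assay, dropout against overfitting, and a masked loss so that compounds lacking a measurement for some assay do not contribute to that assay's error. All sizes are assumptions.

```python
import torch
import torch.nn as nn

n_descriptors, n_assays = 1024, 15
model = nn.Sequential(
    nn.Linear(n_descriptors, 2000), nn.ReLU(), nn.Dropout(0.5),   # dropout against overfitting
    nn.Linear(2000, 2000), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(2000, n_assays),                                     # one activity output per assay
)
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

def masked_mse(pred, target, mask):
    # only measured (compound, assay) pairs contribute to the loss
    return ((pred - target) ** 2 * mask).sum() / mask.sum().clamp(min=1)

# toy batch: descriptors x, activities y, and a 0/1 mask of measured assays
x = torch.randn(128, n_descriptors)
y = torch.randn(128, n_assays)
mask = (torch.rand(128, n_assays) < 0.3).float()
loss = masked_mse(model(x), y, mask)
opt.zero_grad(); loss.backward(); opt.step()
```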


BART: Bayesian additive regression trees

Oct 07, 2010
Hugh A. Chipman, Edward I. George, Robert E. McCulloch

We develop a Bayesian "sum-of-trees" model where each tree is constrained by a regularization prior to be a weak learner, and fitting and inference are accomplished via an iterative Bayesian backfitting MCMC algorithm that generates samples from a posterior. Effectively, BART is a nonparametric Bayesian regression approach which uses dimensionally adaptive random basis elements. Motivated by ensemble methods in general, and boosting algorithms in particular, BART is defined by a statistical model: a prior and a likelihood. This approach enables full posterior inference including point and interval estimates of the unknown regression function as well as the marginal effects of potential predictors. By keeping track of predictor inclusion frequencies, BART can also be used for model-free variable selection. BART's many features are illustrated with a bake-off against competing methods on 42 different data sets, with a simulation experiment and on a drug discovery classification problem.
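
In symbols, the sum-of-trees model and prior factorization the abstract describes take the following standard form (notation follows the usual BART presentation; see the paper for the specific regularization prior and the backfitting MCMC details):

```latex
% Sum-of-trees model: each g(x; T_j, M_j) is a binary regression tree T_j with
% leaf parameters M_j, kept weak by the regularization prior.
y_i = \sum_{j=1}^{m} g(x_i;\, T_j, M_j) + \varepsilon_i,
\qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2),
\qquad
p\big((T_1,M_1),\dots,(T_m,M_m),\sigma\big)
  = \Big[\prod_{j=1}^{m} p(M_j \mid T_j)\, p(T_j)\Big]\, p(\sigma).
```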

* Annals of Applied Statistics 2010, Vol. 4, No. 1, 266-298 
* Published at http://dx.doi.org/10.1214/09-AOAS285 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org) 

DeepXDE: A deep learning library for solving differential equations

Jul 10, 2019
Lu Lu, Xuhui Meng, Zhiping Mao, George E. Karniadakis

Deep learning has achieved remarkable success in diverse applications; however, its use in solving partial differential equations (PDEs) has emerged only recently. Here, we present an overview of physics-informed neural networks (PINNs), which embed a PDE into the loss of the neural network using automatic differentiation. The PINN algorithm is simple, and it can be applied to different types of PDEs, including integro-differential equations, fractional PDEs, and stochastic PDEs. Moreover, PINNs solve inverse problems as easily as forward problems. We propose a new residual-based adaptive refinement (RAR) method to improve the training efficiency of PINNs. For pedagogical reasons, we compare the PINN algorithm to a standard finite element method. We also present a Python library for PINNs, DeepXDE, which is designed to serve both as an education tool to be used in the classroom and as a research tool for solving problems in computational science and engineering. DeepXDE supports complex-geometry domains based on the technique of constructive solid geometry, and enables the user code to be compact, resembling closely the mathematical formulation. We introduce the usage of DeepXDE and its customizability, and we also demonstrate the capability of PINNs and the user-friendliness of DeepXDE for five different examples. More broadly, DeepXDE contributes to the more rapid development of the emerging Scientific Machine Learning field.
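
A minimal usage sketch in the style of the DeepXDE documentation examples, solving the 1D Poisson problem -u'' = 2 on [-1, 1] with zero Dirichlet boundaries as a PINN; module paths and keyword names (e.g. dde.icbc, dde.nn, iterations=) have shifted across library versions, so treat the exact identifiers as assumptions tied to recent releases.

```python
import deepxde as dde

def pde(x, y):
    # PDE residual of -u'' = 2, built with automatic differentiation
    dy_xx = dde.grad.hessian(y, x)
    return -dy_xx - 2.0

geom = dde.geometry.Interval(-1, 1)
bc = dde.icbc.DirichletBC(geom, lambda x: 0, lambda x, on_boundary: on_boundary)
data = dde.data.PDE(geom, pde, bc, num_domain=32, num_boundary=2,
                    solution=lambda x: 1 - x ** 2)        # exact solution, used only as a metric
net = dde.nn.FNN([1] + [50] * 3 + [1], "tanh", "Glorot uniform")

model = dde.Model(data, net)
model.compile("adam", lr=1e-3, metrics=["l2 relative error"])
losshistory, train_state = model.train(iterations=10000)
```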


Denoising Gravitational Waves with Enhanced Deep Recurrent Denoising Auto-Encoders

Mar 06, 2019
Hongyu Shen, Daniel George, E. A. Huerta, Zhizhen Zhao

Denoising of time domain data is a crucial task for many applications such as communication, translation, and virtual assistants. For this task, a combination of a recurrent neural network (RNN) with a Denoising Auto-Encoder (DAE) has shown promising results. However, this combined model is challenged when operating with low signal-to-noise ratio (SNR) data embedded in non-Gaussian and non-stationary noise. To address this issue, we design a novel model, referred to as 'Enhanced Deep Recurrent Denoising Auto-Encoder' (EDRDAE), that incorporates a signal amplifier layer, and applies curriculum learning by first denoising high SNR signals, before gradually decreasing the SNR until the signals become noise dominated. We showcase the performance of EDRDAE using time-series data that describes gravitational waves embedded in very noisy backgrounds. In addition, we show that EDRDAE can accurately denoise signals whose topology is significantly more complex than those used for training, demonstrating that our model generalizes to new classes of gravitational waves that are beyond the scope of established denoising algorithms.
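
A minimal PyTorch sketch of the curriculum idea described above, not the authors' EDRDAE: a small bidirectional-LSTM autoencoder maps noisy series to clean ones, with the training SNR lowered in stages; the toy chirp signals, the stand-in "amplifier" layer, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

class SeqDenoiser(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.amp = nn.Linear(1, 1)          # crude stand-in for the paper's signal-amplifier layer
        self.rnn = nn.LSTM(1, hidden, num_layers=2, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                   # x: (batch, time, 1) noisy series -> denoised series
        h, _ = self.rnn(self.amp(x))
        return self.out(h)

model = SeqDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
t = torch.linspace(0, 1, 512)

def toy_clean(batch):                       # toy chirps standing in for gravitational waveforms
    f0 = 10 + 40 * torch.rand(batch, 1)
    return torch.sin(2 * torch.pi * f0 * t * (1 + 2 * t))[..., None]

for snr in (10.0, 5.0, 2.0, 1.0, 0.5):      # curriculum: start at high SNR, lower it gradually
    for _ in range(100):
        clean = toy_clean(32)
        noise = torch.randn_like(clean) * (clean.std() / snr)
        loss = nn.functional.mse_loss(model(clean + noise), clean)
        opt.zero_grad(); loss.backward(); opt.step()
```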

* 5 pages, 11 figures and 3 tables, accepted to ICASSP 2019 

Real-time regression analysis with deep convolutional neural networks

May 07, 2018
E. A. Huerta, Daniel George, Zhizhen Zhao, Gabrielle Allen

We discuss the development of novel deep learning algorithms to enable real-time regression analysis for time series data. We showcase the application of this new method with a timely case study, and then discuss the applicability of this approach to tackle similar challenges across science domains.

* 3 pages. Position Paper accepted to SciML2018: DOE ASCR Workshop on Scientific Machine Learning. North Bethesda, MD, United States, January 30-February 1, 2018 

Denoising Gravitational Waves using Deep Learning with Recurrent Denoising Autoencoders

Nov 27, 2017
Hongyu Shen, Daniel George, E. A. Huerta, Zhizhen Zhao

Gravitational wave astronomy is a rapidly growing field of modern astrophysics, with observations being made frequently by the LIGO detectors. Gravitational wave signals are often extremely weak and the data from the detectors, such as LIGO, is contaminated with non-Gaussian and non-stationary noise, often containing transient disturbances which can obscure real signals. Traditional denoising methods, such as principal component analysis and dictionary learning, are not optimal for dealing with this non-Gaussian noise, especially for low signal-to-noise ratio gravitational wave signals. Furthermore, these methods are computationally expensive on large datasets. To overcome these issues, we apply state-of-the-art signal processing techniques, based on recent groundbreaking advancements in deep learning, to denoise gravitational wave signals embedded either in Gaussian noise or in real LIGO noise. We introduce SMTDAE, a Staired Multi-Timestep Denoising Autoencoder, based on sequence-to-sequence bi-directional Long-Short-Term-Memory recurrent neural networks. We demonstrate the advantages of using our unsupervised deep learning approach and show that, after training only using simulated Gaussian noise, SMTDAE achieves superior recovery performance for gravitational wave signals embedded in real non-Gaussian LIGO noise.

* 5 pages, 2 figures 

Incorporating Side Information in Probabilistic Matrix Factorization with Gaussian Processes

Aug 09, 2014
Ryan Prescott Adams, George E. Dahl, Iain Murray

Probabilistic matrix factorization (PMF) is a powerful method for modeling data associated with pairwise relationships, finding use in collaborative filtering, computational biology, and document analysis, among other areas. In many domains, there are additional covariates that can assist in prediction. For example, when modeling movie ratings, we might know when the rating occurred, where the user lives, or what actors appear in the movie. It is difficult, however, to incorporate this side information into the PMF model. We propose a framework for incorporating side information by coupling together multiple PMF problems via Gaussian process priors. We replace scalar latent features with functions that vary over the covariate space. The GP priors on these functions require them to vary smoothly and share information. We apply this new method to predict the scores of professional basketball games, where side information about the venue and date of the game is relevant for the outcome.
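
Schematically, and in notation that is mine rather than the paper's, the model the abstract describes replaces each scalar latent feature with a function of the side-information covariates x, tied together by a Gaussian process prior:

```latex
% Each entry R_{ij} is observed under covariates x_{ij}; the latent feature
% functions U_i(.) and V_j(.) vary smoothly over the covariate space.
R_{ij} \mid U, V \sim \mathcal{N}\!\left( U_i(x_{ij})^{\top} V_j(x_{ij}),\; \sigma^2 \right),
\qquad
U_{ik}(\cdot) \sim \mathcal{GP}\big(0, k_U\big), \quad
V_{jk}(\cdot) \sim \mathcal{GP}\big(0, k_V\big).
```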

* Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010) 

Training Restricted Boltzmann Machines on Word Observations

Jul 05, 2012
George E. Dahl, Ryan P. Adams, Hugo Larochelle

The restricted Boltzmann machine (RBM) is a flexible tool for modeling complex data; however, there have been significant computational difficulties in using RBMs to model high-dimensional multinomial observations. In natural language processing applications, words are naturally modeled by K-ary discrete distributions, where K is determined by the vocabulary size and can easily be in the hundreds of thousands. The conventional approach to training RBMs on word observations is limited because it requires sampling the states of K-way softmax visible units during block Gibbs updates, an operation that takes time linear in K. In this work, we address this issue by employing a more general class of Markov chain Monte Carlo operators on the visible units, yielding updates with computational complexity independent of K. We demonstrate the success of our approach by training RBMs on hundreds of millions of word n-grams using larger vocabularies than previously feasible and using the learned features to improve performance on chunking and sentiment classification tasks, achieving state-of-the-art results on the latter.
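
A toy sketch of the core trick, not the paper's alias-method implementation: a few Metropolis-Hastings steps with a cheap proposal replace exact sampling from a K-way softmax, so the per-step cost does not grow with the vocabulary size K. The uniform proposal and step count below are simplifying assumptions (the paper uses smarter static proposals).

```python
import numpy as np

rng = np.random.default_rng(0)

def mh_softmax_sample(logits, n_steps=10):
    """Approximate sample from p(k) proportional to exp(logits[k]) via MH."""
    K = len(logits)
    state = rng.integers(K)                  # arbitrary starting state
    for _ in range(n_steps):
        proposal = rng.integers(K)           # uniform proposal: O(1), independent of K
        # symmetric proposal => accept with prob min(1, p(proposal)/p(state))
        if np.log(rng.random()) < logits[proposal] - logits[state]:
            state = proposal
    return state

logits = rng.normal(size=100_000)            # e.g., a softmax visible unit over a huge vocabulary
print(mh_softmax_sample(logits))
```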


Unbiased Smoothing using Particle Independent Metropolis-Hastings

Feb 05, 2019
Lawrence Middleton, George Deligiannidis, Arnaud Doucet, Pierre E. Jacob

We consider the approximation of expectations with respect to the distribution of a latent Markov process given noisy measurements. This is known as the smoothing problem and is often approached with particle and Markov chain Monte Carlo (MCMC) methods. These methods provide consistent but biased estimators when run for a finite time. We propose a simple way of coupling two MCMC chains built using Particle Independent Metropolis-Hastings (PIMH) to produce unbiased smoothing estimators. Unbiased estimators are appealing in the context of parallel computing, and facilitate the construction of confidence intervals. The proposed scheme only requires access to off-the-shelf Particle Filters (PF) and is thus easier to implement than recently proposed unbiased smoothers. The approach is demonstrated on a Lévy-driven stochastic volatility model and a stochastic kinetic model.
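
Schematically, the unbiased estimator produced by coupling two PIMH chains has the standard coupled-chains debiasing form below, where the chains coalesce at a random meeting time τ; the notation is mine and the paper gives the precise construction and conditions.

```latex
% (X_t) and (Y_t) are the two coupled PIMH chains, with Y lagging X by one step
% and coalescing at the meeting time \tau; h is the smoothing functional of interest.
H_k = h(X_k) + \sum_{t=k+1}^{\tau - 1} \big( h(X_t) - h(Y_{t-1}) \big),
\qquad \mathbb{E}[H_k] = \mathbb{E}_{\pi}\big[ h(X) \big].
```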

* 13 pages 
