Models, code, and papers for "Lisa A":

Robotic Search & Rescue via Online Multi-task Reinforcement Learning

Nov 29, 2015
Lisa Lee

Reinforcement learning (RL) is a general and well-known method that a robot can use to learn an optimal control policy to solve a particular task. We would like to build a versatile robot that can learn multiple tasks, but using RL for each of them would be prohibitively expensive in terms of both time and wear-and-tear on the robot. To remedy this problem, we use the Policy Gradient Efficient Lifelong Learning Algorithm (PG-ELLA), an online multi-task RL algorithm that enables the robot to efficiently learn multiple consecutive tasks by sharing knowledge between these tasks to accelerate learning and improve performance. We implemented and evaluated three RL methods (Q-learning, policy gradient RL, and PG-ELLA) on a ground robot whose task is to find a target object in an environment under different surface conditions. In this paper, we discuss our implementations as well as present an empirical analysis of their learning performance.
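
As a rough illustration of the simplest of the three methods, the sketch below runs tabular Q-learning on a toy grid where an agent searches for a target cell. The environment, rewards, and hyperparameters are illustrative assumptions, not the robot setup from the paper.

```python
import numpy as np

# Toy grid search: states are grid positions, actions are the four moves,
# and the episode ends when the agent reaches the target cell.
rng = np.random.default_rng(0)
H, W, target = 5, 5, (4, 4)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
Q = np.zeros((H, W, len(actions)))
alpha, gamma, eps = 0.1, 0.95, 0.1             # illustrative hyperparameters

def step(s, a):
    r = min(max(s[0] + actions[a][0], 0), H - 1)
    c = min(max(s[1] + actions[a][1], 0), W - 1)
    done = (r, c) == target
    return (r, c), (1.0 if done else -0.01), done

for episode in range(500):
    s = (0, 0)
    for _ in range(200):                        # cap episode length
        a = int(rng.integers(len(actions))) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, reward, done = step(s, a)
        # One-step temporal-difference update toward the bootstrapped target.
        Q[s][a] += alpha * (reward + gamma * np.max(Q[s2]) * (not done) - Q[s][a])
        s = s2
        if done:
            break
```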

* 7 pages 

The Fluidity of Concept Representations in Human Brain Signals

Feb 20, 2020
Eva Hendrikx, Lisa Beinborn

Cognitive theories of human language processing often distinguish between concrete and abstract concepts. In this work, we analyze the discriminability of concrete and abstract concepts in fMRI data using a range of analysis methods. We find that the distinction can be decoded from the signal with an accuracy significantly above chance, but it is not found to be a relevant structuring factor in clustering and relational analyses. From our detailed comparison, we obtain the impression that human concept representations are more fluid than dichotomous categories can capture. We argue that fluid concept representations lead to more realistic models of human language processing because they better capture the ambiguity and underspecification present in natural language use.
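
For intuition, here is a minimal sketch of the decoding analysis described above, using cross-validated logistic regression; the random arrays stand in for real voxel features and concreteness labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in data: rows are fMRI patterns for concepts, labels mark
# concrete (1) vs. abstract (0) concepts.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))              # 120 concepts x 500 voxels (illustrative)
y = rng.integers(0, 2, size=120)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)    # decoding accuracy per fold
print("mean decoding accuracy:", scores.mean())   # compare against the 0.5 chance level
```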

* 12 pages, 5 figures, 1 table 

AMR-to-Text Generation with Cache Transition Systems

Dec 03, 2019
Lisa Jin, Daniel Gildea

Text generation from AMR involves emitting sentences that reflect the meaning of their AMR annotations. Neural sequence-to-sequence models have successfully been used to decode strings from flattened graphs (e.g., using depth-first or random traversal). Such models often rely on attention-based decoders to map AMR nodes to English token sequences. Instead of linearizing AMR, we directly encode its graph structure and delegate traversal to the decoder. To enforce a sentence-aligned graph traversal and provide local graph context, we predict transition-based parser actions in addition to English words. We present two model variants: one generates parser actions prior to words, while the other interleaves actions with words.
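
As a small illustration of the interleaved variant, the sketch below assembles a single training target sequence from hypothetical (actions, word) alignments; the action names are invented for the example.

```python
# Hypothetical alignment of transition-based parser actions to English words.
aligned = [
    (["shift", "push"], "the"),
    (["arc"], "boy"),
    (["shift"], "wants"),
]

def interleave(aligned_pairs):
    seq = []
    for actions, word in aligned_pairs:
        seq.extend(actions)   # graph-traversal actions preceding the word
        seq.append(word)      # the English token itself
    return seq

print(interleave(aligned))
# ['shift', 'push', 'the', 'arc', 'boy', 'shift', 'wants']
```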


QFlip: An Adaptive Reinforcement Learning Strategy for the FlipIt Security Game

Aug 14, 2019
Lisa Oakley, Alina Oprea

A rise in Advanced Persistent Threats (APTs) has introduced a need for robustness against long-running, stealthy attacks which circumvent existing cryptographic security guarantees. FlipIt is a security game that models attacker-defender interactions in advanced scenarios such as APTs. Previous work extensively analyzed non-adaptive strategies in FlipIt, but adaptive strategies arise naturally in practical interactions as players receive feedback during the game. We model the FlipIt game as a Markov Decision Process and introduce QFlip, an adaptive strategy for FlipIt based on temporal difference reinforcement learning. We prove theoretical results on the convergence of our new strategy against an opponent playing with a Periodic strategy. We confirm our analysis experimentally by extensive evaluation of QFlip against specific opponents. QFlip converges to the optimal adaptive strategy for Periodic and Exponential opponents using associated state spaces. Finally, we introduce a generalized QFlip strategy with composite state space that outperforms a Greedy strategy for several distributions including Periodic and Uniform, without prior knowledge of the opponent's strategy. We also release an OpenAI Gym environment for FlipIt to facilitate future research.
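
Below is a heavily simplified sketch of temporal-difference learning against a Periodic opponent in a FlipIt-like game. The state encoding (ticks since the agent's own last flip), costs, and discounting are illustrative assumptions; this is not the released Gym environment.

```python
import numpy as np

# Discrete-time toy FlipIt: the opponent flips every `period` ticks; the
# agent chooses flip (1) or wait (0), earning 1 per tick of control minus
# a cost per flip.
rng = np.random.default_rng(0)
period, flip_cost, cap = 10, 3.0, 20
Q = np.zeros((cap + 1, 2))                   # state = capped time since own flip
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(2000):
    since_flip, owner = 0, 1                 # opponent owns the resource at t=0
    for t in range(200):
        s = min(since_flip, cap)
        a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
        if t > 0 and t % period == 0:
            owner = 1                        # periodic opponent flips
        if a == 1:
            owner, since_flip = 0, 0         # agent flips and takes control
        else:
            since_flip += 1
        reward = (1.0 if owner == 0 else 0.0) - flip_cost * a
        s2 = min(since_flip, cap)
        # Standard one-step Q-learning update.
        Q[s, a] += alpha * (reward + gamma * np.max(Q[s2]) - Q[s, a])
```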

* To appear in the 10th Conference on Decision and Game Theory for Security 

Playing Adaptively Against Stealthy Opponents: A Reinforcement Learning Strategy for the FlipIt Security Game

Jun 27, 2019
Lisa Oakley, Alina Oprea

A rise in Advanced Persistent Threats (APTs) has introduced a need for robustness against long-running, stealthy attacks which circumvent existing cryptographic security guarantees. FlipIt is a security game that models the attacker-defender interactions in advanced scenarios such as APTs. Previous work extensively analyzed non-adaptive strategies in FlipIt, but adaptive strategies arise naturally in practical interactions as players receive feedback during the game. We model the FlipIt game as a Markov Decision Process and use reinforcement learning algorithms to design adaptive strategies. We prove theoretical results on the convergence of our new strategy against an opponent playing with a Periodic strategy. We confirm our analysis experimentally by extensive evaluation of the strategy against specific opponents. Our strategies converge to the optimal adaptive strategy for Periodic and Exponential opponents. Finally, we introduce a generalized Q-Learning strategy with composite states that outperforms a Greedy-based strategy for several distributions, including Periodic and Uniform, without prior knowledge of the opponent's strategy.


Semantic Drift in Multilingual Representations

May 02, 2019
Lisa Beinborn, Rochelle Choenni

Multilingual representations have mostly been evaluated based on their performance on specific tasks. In this article, we look beyond engineering goals and analyze the relations between languages in computational representations. We introduce a methodology for comparing languages based on their organization of semantic concepts. We propose to conduct an adapted version of representational similarity analysis of a selected set of concepts in computational multilingual representations. Using this analysis method, we can reconstruct a phylogenetic tree that closely resembles those assumed by linguistic experts. These results indicate that multilingual distributional representations which are only trained on monolingual text and bilingual dictionaries preserve relations between languages without the need for any etymological information. In addition, we propose a measure to identify semantic drift between language families. We perform experiments on word-based and sentence-based multilingual models and provide both quantitative results and qualitative examples. Analyses of semantic drift in multilingual representations can serve two purposes: they can indicate unwanted characteristics of the computational models and they provide a quantitative means to study linguistic phenomena across languages. The code is available at https://github.com/beinborn/SemanticDrift.
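
A minimal sketch of the representational similarity step, under the assumption that each language provides embeddings for a shared concept list (random vectors stand in here): each language yields a concept-by-concept similarity matrix, languages are compared by correlating those matrices, and a tree is read off hierarchical clustering.

```python
import numpy as np
from scipy.stats import spearmanr
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
languages = ["en", "de", "nl", "fr"]
emb = {l: rng.normal(size=(50, 300)) for l in languages}   # 50 shared concepts

def sim_matrix(V):
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    return V @ V.T                                          # cosine similarities

def rsa(A, B):
    iu = np.triu_indices_from(A, k=1)                       # upper triangles only
    return spearmanr(A[iu], B[iu]).correlation

S = {l: sim_matrix(emb[l]) for l in languages}
n = len(languages)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = 1 - rsa(S[languages[i]], S[languages[j]])

# The condensed upper triangle feeds hierarchical clustering for the tree.
tree = linkage(dist[np.triu_indices(n, k=1)], method="average")
```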


On the Linear Algebraic Structure of Distributed Word Representations

Nov 22, 2015
Lisa Seung-Yeon Lee

In this work, we leverage the linear algebraic structure of distributed word representations to automatically extend knowledge bases and allow a machine to learn new facts about the world. Our goal is to extract structured facts from corpora in a simpler manner, without applying classifiers or patterns, and using only the co-occurrence statistics of words. We demonstrate that the linear algebraic structure of word embeddings can be used to reduce data requirements for methods of learning facts. In particular, we demonstrate that words belonging to a common category, or pairs of words satisfying a certain relation, form a low-rank subspace in the projected space. We compute a basis for this low-rank subspace using singular value decomposition (SVD), then use this basis to discover new facts and to fit vectors for less frequent words for which we do not yet have vectors.
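
A small sketch of the subspace idea, with random vectors standing in for trained embeddings: known category members define a low-rank basis via SVD, and candidates are scored by how much of their vector lies inside that subspace.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rank = 300, 3
category_vecs = rng.normal(size=(20, d))      # embeddings of known category members

# Top right-singular vectors give an orthonormal basis for the subspace.
_, _, Vt = np.linalg.svd(category_vecs, full_matrices=False)
basis = Vt[:rank]                              # (rank, d)

def membership_score(v):
    proj = basis.T @ (basis @ v)               # projection onto the subspace
    return np.linalg.norm(proj) / np.linalg.norm(v)

candidate = rng.normal(size=d)
print(membership_score(candidate))             # closer to 1 = more in-subspace
```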

* 55 pages 

Specializing Word Embeddings (for Parsing) by Information Bottleneck

Oct 01, 2019
Xiang Lisa Li, Jason Eisner

Pre-trained word embeddings like ELMo and BERT contain rich syntactic and semantic information, resulting in state-of-the-art performance on various tasks. We propose a very fast variational information bottleneck (VIB) method to nonlinearly compress these embeddings, keeping only the information that helps a discriminative parser. We compress each word embedding to either a discrete tag or a continuous vector. In the discrete version, our automatically compressed tags form an alternative tag set: we show experimentally that our tags capture most of the information in traditional POS tag annotations, but our tag sequences can be parsed more accurately at the same level of tag granularity. In the continuous version, we show experimentally that moderately compressing the word embeddings by our method yields a more accurate parser in 8 of 9 languages, unlike simple dimensionality reduction.
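
A minimal sketch of the continuous variant of the objective: a stochastic encoder compresses the embedding and a KL term prunes information, with a linear head standing in for the parser. The dimensions, the task head, and the weight beta are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class VIB(nn.Module):
    def __init__(self, d_in=1024, d_z=64, n_labels=17):
        super().__init__()
        self.mu = nn.Linear(d_in, d_z)
        self.logvar = nn.Linear(d_in, d_z)
        self.task = nn.Linear(d_z, n_labels)   # stand-in for the parser

    def forward(self, x, y, beta=1e-3):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        # KL( q(z|x) || N(0, I) ) acts as the compression penalty.
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
        return nn.functional.cross_entropy(self.task(z), y) + beta * kl

model = VIB()
x, y = torch.randn(32, 1024), torch.randint(0, 17, (32,))
model(x, y).backward()
```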

* Accepted for publication at EMNLP 2019 

Blind Construction of Optimal Nonlinear Recursive Predictors for Discrete Sequences

Aug 09, 2014
Cosma Shalizi, Kristina Lisa Klinkner

We present a new method for nonlinear prediction of discrete random sequences under minimal structural assumptions. We give a mathematical construction for optimal predictors of such processes, in the form of hidden Markov models. We then describe an algorithm, CSSR (Causal-State Splitting Reconstruction), which approximates the ideal predictor from data. We discuss the reliability of CSSR, its data requirements, and its performance in simulations. Finally, we compare our approach to existing methods using variable-length Markov models and cross-validated hidden Markov models, and show theoretically and experimentally that our method delivers results superior to the former and at least comparable to the latter.
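
A heavily simplified sketch of the underlying idea: estimate the next-symbol distribution for every short history, then group histories whose predictive distributions agree. Real CSSR grows suffixes incrementally and uses a statistical test; the fixed history length and crude threshold below are stand-ins.

```python
import numpy as np
from collections import defaultdict

def predictive_groups(seq, L=2, alphabet=("0", "1"), tol=0.05):
    counts = defaultdict(lambda: np.zeros(len(alphabet)))
    for i in range(L, len(seq)):
        counts[seq[i - L:i]][alphabet.index(seq[i])] += 1
    dists = {h: c / c.sum() for h, c in counts.items()}

    groups = []                                # each group = candidate causal state
    for h, p in dists.items():
        for g in groups:
            if np.abs(p - dists[g[0]]).max() < tol:
                g.append(h)
                break
        else:
            groups.append([h])
    return groups

print(predictive_groups("0101101101011011" * 50))   # toy process
```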

* Appears in Proceedings of the Twentieth Conference on Uncertainty in Artificial Intelligence (UAI2004) 

A Synthesis of Logical and Probabilistic Reasoning for Program Understanding and Debugging

Mar 06, 2013
Lisa J. Burnell, Eric J. Horvitz

We describe the integration of logical and uncertain reasoning methods to identify the likely source and location of software problems. To date, software engineers have had few tools for identifying the sources of error in complex software packages. We describe a method for diagnosing software problems through combining logical and uncertain reasoning analyses. Our preliminary results suggest that such methods can be of value in directing the attention of software engineers to paths of an algorithm that have the highest likelihood of harboring a programming error.

* Appears in Proceedings of the Ninth Conference on Uncertainty in Artificial Intelligence (UAI1993) 

PatchPerPix for Instance Segmentation

Jan 21, 2020
Peter Hirsch, Lisa Mais, Dagmar Kainmueller

In this paper we present a novel method for proposal-free instance segmentation that can handle sophisticated object shapes that span large parts of an image and form dense object clusters with crossovers. Our method is based on predicting dense local shape descriptors, which we assemble to form instances. All instances are assembled simultaneously in one go. To our knowledge, our method is the first non-iterative method that guarantees instances to be composed of learnt shape patches. We evaluate our method on a variety of data domains, where it defines the new state of the art on two challenging benchmarks, namely the ISBI 2012 EM segmentation benchmark and the BBBC010 C. elegans dataset. We furthermore show that our method also performs well on 3D image data and can handle even extreme cases of complex shape clusters.


Jointly Learning to Detect Emotions and Predict Facebook Reactions

Sep 24, 2019
Lisa Graziani, Stefano Melacci, Marco Gori

The growing ubiquity of Social Media data offers an attractive perspective for improving the quality of machine learning-based models in several fields, ranging from Computer Vision to Natural Language Processing. In this paper we focus on Facebook posts paired with reactions of multiple users, and we investigate their relationships with classes of emotions that are typically considered in the task of emotion detection. We are inspired by the idea of introducing a connection between reactions and emotions by means of First-Order Logic formulas, and we propose an end-to-end neural model that is able to jointly learn to detect emotions and predict Facebook reactions in a multi-task environment, where the logic formulas are converted into polynomial constraints. Our model is trained using a large collection of unsupervised texts together with data labeled with emotion classes and Facebook posts that include reactions. An extended experimental analysis that leverages a large collection of Facebook posts shows that the tasks of emotion classification and reaction prediction can both benefit from their interaction.
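
As a sketch of the conversion step, a hypothetical formula "reaction LOVE implies emotion JOY" can be relaxed into a differentiable polynomial penalty (product t-norm style) and added to the multi-task loss; the reaction and emotion names are illustrative.

```python
import torch

def implication_penalty(a, b):
    # Relaxation of a -> b: the penalty a * (1 - b) is zero exactly when
    # the consequent holds as strongly as the antecedent.
    return a * (1.0 - b)

p_love = torch.sigmoid(torch.randn(8))   # reaction head outputs (batch of 8)
p_joy = torch.sigmoid(torch.randn(8))    # emotion head outputs
constraint_loss = implication_penalty(p_love, p_joy).mean()
```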

* International Conference on Artificial Neural Networks. Springer, Cham, 2019 

A Survey of Data Quality Measurement and Monitoring Tools

Jul 18, 2019
Lisa Ehrlinger, Elisa Rusz, Wolfram Wöß

High-quality data is key to interpretable and trustworthy data analytics and the basis for meaningful data-driven decisions. In practical scenarios, data quality is typically associated with data preprocessing, profiling, and cleansing for subsequent tasks like data integration or data analytics. However, from a scientific perspective, a lot of research has been published about the measurement (i.e., the detection) of data quality issues, and various generally applicable data quality dimensions and metrics have been discussed. In this work, we close the gap between research into data quality measurement and practical implementations by investigating the functional scope of current data quality tools. With a systematic search, we identified 667 software tools dedicated to "data quality", from which we evaluated 13 tools with respect to three functionality areas: (1) data profiling, (2) data quality measurement in terms of metrics, and (3) continuous data quality monitoring. We selected the evaluated tools with regard to pre-defined exclusion criteria to ensure that they are domain-independent, provide the investigated functions, and can be evaluated freely or as a trial version. This survey aims at a comprehensive overview of state-of-the-art data quality tools and reveals potential for their functional enhancement. Additionally, the results allow a critical discussion of concepts that are widely accepted in research but hardly implemented in any of the observed tools, for example, generally applicable data quality metrics.

* 35 pages, 1 figure 

Online Matrix Completion with Side Information

Jun 17, 2019
Mark Herbster, Stephen Pasteris, Lisa Tse

We give an online algorithm and prove novel mistake and regret bounds for online binary matrix completion with side information. The bounds we prove are of the form $\tilde{\mathcal{O}}({\mathcal{D}}/{\gamma^2})$. The term ${1}/{\gamma^2}$ is analogous to the usual margin term in SVM (perceptron) bounds. More specifically, if we assume that there is some factorization of the underlying $m\times n$ matrix into $\mathbf{P} \mathbf{Q}^{\top}$ where the rows of $\mathbf{P}$ are interpreted as "classifiers" in $\Re^d$ and the rows of $\mathbf{Q}$ as "instances" in $\Re^d$, then $\gamma$ is the maximum (normalized) margin over all factorizations $\mathbf{P} \mathbf{Q}^{\top}$ consistent with the observed matrix. The quasi-dimension term $\mathcal{D}$ measures the quality of side information. In the presence of no side information, $\mathcal{D} = m+n$. However, if the side information is predictive of the underlying factorization of the matrix, then in the best case, $\mathcal{D} \in \mathcal{O}(k + \ell)$ where $k$ is the number of distinct row factors and $\ell$ is the number of distinct column factors. We additionally provide a generalization of our algorithm to the inductive setting. In this setting, the side information is not specified in advance. The results are similar to the transductive setting but in the best case, the quasi-dimension $\mathcal{D}$ is now bounded by $\mathcal{O}(k^2 + \ell^2)$.
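
For concreteness, a small numpy sketch that evaluates the normalized margin of one candidate factorization on the observed entries; the $\gamma$ in the bound is the maximum of this quantity over all factorizations consistent with the observed matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 6, 8, 3
P, Q = rng.normal(size=(m, d)), rng.normal(size=(n, d))
U = np.sign(P @ Q.T)                        # pretend these signs were observed
observed = rng.random((m, n)) < 0.5         # mask of observed entries

# Normalize rows so margins are comparable across factorizations.
Pn = P / np.linalg.norm(P, axis=1, keepdims=True)
Qn = Q / np.linalg.norm(Q, axis=1, keepdims=True)
margins = (U * (Pn @ Qn.T))[observed]       # per-entry normalized margins
print(margins.min())                        # this factorization's margin
```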


Robust Evaluation of Language-Brain Encoding Experiments

Apr 04, 2019
Lisa Beinborn, Samira Abnar, Rochelle Choenni

Language-brain encoding experiments evaluate the ability of language models to predict brain responses elicited by language stimuli. The evaluation scenarios for this task have not yet been standardized which makes it difficult to compare and interpret results. We perform a series of evaluation experiments with a consistent encoding setup and compute the results for multiple fMRI datasets. In addition, we test the sensitivity of the evaluation measures to randomized data and analyze the effect of voxel selection methods. Our experimental framework is publicly available to make modelling decisions more transparent and support reproducibility for future comparisons.
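
A minimal sketch of one encoding fold with one common evaluation measure (pairwise 2-vs-2 accuracy); random arrays stand in for stimulus features and fMRI responses, and the ridge encoder plus this particular measure are assumptions for illustration, not necessarily the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 768))     # language-model features per stimulus
Y = rng.normal(size=(100, 2000))    # voxel responses per stimulus

train, test = slice(0, 80), slice(80, 100)
pred = Ridge(alpha=1.0).fit(X[train], Y[train]).predict(X[test])

def pairwise_accuracy(pred, true):
    wins = total = 0
    for i in range(len(pred)):
        for j in range(i + 1, len(pred)):
            match = np.linalg.norm(pred[i] - true[i]) + np.linalg.norm(pred[j] - true[j])
            mismatch = np.linalg.norm(pred[i] - true[j]) + np.linalg.norm(pred[j] - true[i])
            wins += match < mismatch      # correct pairing should be closer
            total += 1
    return wins / total

print(pairwise_accuracy(pred, Y[test]))   # chance level is 0.5
```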


Neural Vector Conceptualization for Word Vector Space Interpretation

Apr 02, 2019
Robert Schwarzenberg, Lisa Raithel, David Harbecke

Distributed word vector spaces are considered hard to interpret, which hinders the understanding of natural language processing (NLP) models. In this work, we introduce a new method to interpret arbitrary samples from a word vector space. To this end, we train a neural model to conceptualize word vectors, which means that it activates higher order concepts it recognizes in a given vector. Contrary to prior approaches, our model operates in the original vector space and is capable of learning non-linear relations between word vectors and concepts. Furthermore, we show that it produces considerably less entropic concept activation profiles than the popular cosine similarity.
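
A minimal sketch of such a conceptualizer as a small feed-forward network over word vectors; the layer sizes and the 200-concept inventory are invented for the example.

```python
import torch
import torch.nn as nn

conceptualizer = nn.Sequential(
    nn.Linear(300, 512),
    nn.ReLU(),
    nn.Linear(512, 200),               # 200 hypothetical concepts
)

word_vec = torch.randn(1, 300)         # any sample from the word vector space
concept_probs = conceptualizer(word_vec).softmax(dim=-1)
print(concept_probs.topk(5).indices)   # the most strongly activated concepts
```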

* NAACL-HLT 2019 Workshop on Evaluating Vector Space Representations for NLP (RepEval) 

Coherence Constraints in Facial Expression Recognition

Oct 17, 2018
Lisa Graziani, Stefano Melacci, Marco Gori

Recognizing facial expressions from static images or video sequences is a widely studied but still challenging problem. The recent progress obtained by deep neural architectures, or by ensembles of heterogeneous models, has shown that integrating multiple input representations leads to state-of-the-art results. In particular, the appearance and the shape of the input face, or the representations of some face parts, are commonly used to boost the quality of the recognizer. This paper investigates the application of Convolutional Neural Networks (CNNs) with the aim of building a versatile recognizer of expressions in static images that can be further applied to video sequences. We first study the importance of different face parts in the recognition task, focusing on appearance and shape-related features. Then we cast the learning problem in the Semi-Supervised setting, exploiting video data, where only a few frames are supervised. The unsupervised portion of the training data is used to enforce three types of coherence: temporal coherence, coherence among the predictions on the face parts, and coherence between the appearance- and shape-based representations. Our experimental analysis shows that coherence constraints can improve the quality of the expression recognizer, thus offering a suitable basis to profitably exploit unsupervised video sequences. Finally, we present some examples with occlusions where the shape-based predictor performs better than the appearance-based one.
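
A sketch of the temporal coherence term alone, with a placeholder model: predictions for consecutive unsupervised frames of the same face are pushed to agree. The paper's full system also enforces part-level and appearance/shape coherence.

```python
import torch

def temporal_coherence_loss(model, frames, lam=0.1):
    probs = model(frames).softmax(dim=-1)        # (T, n_expressions)
    # Penalize disagreement between predictions on consecutive frames.
    return lam * (probs[1:] - probs[:-1]).pow(2).sum(dim=-1).mean()

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(48 * 48, 7))
frames = torch.randn(16, 1, 48, 48)              # 16 consecutive face crops
temporal_coherence_loss(model, frames).backward()
```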


Robust Neural Abstractive Summarization Systems and Evaluation against Adversarial Information

Oct 14, 2018
Lisa Fan, Dong Yu, Lu Wang

Sequence-to-sequence (seq2seq) neural models have been actively investigated for abstractive summarization. Nevertheless, existing neural abstractive systems frequently generate factually incorrect summaries and are vulnerable to adversarial information, suggesting a crucial lack of semantic understanding. In this paper, we propose a novel semantic-aware neural abstractive summarization model that learns to generate high quality summaries through semantic interpretation over salient content. A novel evaluation scheme with adversarial samples is introduced to measure how well a model identifies off-topic information, where our model yields significantly better performance than the popular pointer-generator summarizer. Human evaluation also confirms that our system summaries are uniformly more informative and faithful as well as less redundant than the seq2seq model.


Commonsense for Generative Multi-Hop Question Answering Tasks

Sep 17, 2018
Lisa Bauer, Yicheng Wang, Mohit Bansal

Reading comprehension QA tasks have seen a recent surge in popularity, yet most works have focused on fact-finding extractive QA. We instead focus on a more challenging multi-hop generative task (NarrativeQA), which requires the model to reason, gather, and synthesize disjoint pieces of information within the context to generate an answer. This type of multi-step reasoning also often requires understanding implicit relations, which humans resolve via external, background commonsense knowledge. We first present a strong generative baseline that uses a multi-attention mechanism to perform multiple hops of reasoning and a pointer-generator decoder to synthesize the answer. This model performs substantially better than previous generative models, and is competitive with current state-of-the-art span prediction models. We next introduce a novel system for selecting grounded multi-hop relational commonsense information from ConceptNet via a pointwise mutual information and term-frequency based scoring function. Finally, we effectively use this extracted commonsense information to fill in gaps of reasoning between context hops, using a selectively-gated attention mechanism. This boosts the model's performance significantly (also verified via human evaluation), establishing a new state-of-the-art for the task. We also show that our background knowledge enhancements are generalizable and improve performance on QAngaroo-WikiHop, another multi-hop reasoning dataset.
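
A small sketch of the PMI part of such a scoring function, with toy counts standing in for corpus statistics (the paper's scorer also folds in term frequency):

```python
import math
from collections import Counter

def pmi(pair_count, x_count, y_count, total):
    if pair_count == 0:
        return float("-inf")
    return math.log((pair_count / total) / ((x_count / total) * (y_count / total)))

# Toy co-occurrence statistics for candidate relation endpoints.
corpus_pairs = Counter({("rain", "wet"): 50, ("rain", "dry"): 2})
unigrams = Counter({"rain": 300, "wet": 120, "dry": 80})
total = 100_000

for (x, y), c in corpus_pairs.items():
    print(x, y, round(pmi(c, unigrams[x], unigrams[y], total), 2))
```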

* EMNLP 2018 (22 pages) 
