Models, code, and papers for "Gino Brunner":

Attentive Multi-Task Deep Reinforcement Learning

Jul 05, 2019
Timo Bram, Gino Brunner, Oliver Richter, Roger Wattenhofer

Sharing knowledge between tasks is vital for efficient learning in a multi-task setting. However, most research so far has focused on the easier case where knowledge transfer is not harmful, i.e., where knowledge from one task cannot negatively impact the performance on another task. In contrast, we present an attention-based approach to multi-task deep reinforcement learning that does not require any a priori assumptions about the relationships between tasks. Our attention network automatically groups task knowledge into sub-networks at a state-level granularity. It thereby achieves positive knowledge transfer where possible and avoids negative transfer where tasks interfere. We test our algorithm against two state-of-the-art multi-task/transfer learning approaches and show comparable or superior performance while requiring fewer network parameters.
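
A minimal sketch of the central mechanism, an attention network that softly assigns each state to sub-networks, might look as follows in PyTorch. The number of sub-networks, layer sizes, and the soft-mixture formulation are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class AttentiveMultiTaskPolicy(nn.Module):
    """Sketch: an attention head weights several sub-networks per state,
    so tasks can share knowledge where helpful and separate where they interfere."""

    def __init__(self, state_dim, action_dim, num_subnets=4, hidden=128):
        super().__init__()
        # Sub-networks holding (potentially shared) task knowledge.
        self.subnets = nn.ModuleList([
            nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU())
            for _ in range(num_subnets)
        ])
        # Attention network: a soft assignment over sub-networks per state.
        self.attention = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_subnets)
        )
        self.policy_head = nn.Linear(hidden, action_dim)

    def forward(self, state):
        weights = torch.softmax(self.attention(state), dim=-1)            # (B, K)
        feats = torch.stack([net(state) for net in self.subnets], dim=1)  # (B, K, H)
        mixed = (weights.unsqueeze(-1) * feats).sum(dim=1)                # (B, H)
        return self.policy_head(mixed)                                    # action logits
```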

* Accepted as a conference paper at ECML PKDD 2019 

Using State Predictions for Value Regularization in Curiosity Driven Deep Reinforcement Learning

Sep 30, 2018
Gino Brunner, Manuel Fritsche, Oliver Richter, Roger Wattenhofer

Learning in sparse reward settings remains a challenge in Reinforcement Learning that is often addressed by using intrinsic rewards. One promising strategy is inspired by human curiosity and requires the agent to learn to predict the future. In this paper, a curiosity-driven agent is extended to use these predictions directly for training. To achieve this, the agent predicts the value function of the next state at any point in time. The consistency of this prediction with the current value function is then measured and used as a regularization term in the loss function of the algorithm. Experiments were conducted on grid-world environments as well as on a 3D navigation task, both with sparse rewards. In the first case, the extended agent learns significantly faster than the baselines.
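
Read as code, the regularization term is a consistency penalty between the agent's prediction of the next state's value and the value the critic itself assigns to that state. The sketch below is a plausible rendering under assumed names; the weighting coefficient and the exact target used by the authors are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def value_consistency_loss(predicted_next_value, value_fn, next_state, beta=0.1):
    """Penalize disagreement between the agent's predicted next-state value
    and the current value function's estimate for that state.
    `beta` (assumed) scales the term before it is added to the usual RL loss."""
    with torch.no_grad():
        target = value_fn(next_state)   # value the critic currently assigns
    return beta * F.mse_loss(predicted_next_value, target)
```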


The Urban Last Mile Problem: Autonomous Drone Delivery to Your Balcony

Sep 21, 2018
Gino Brunner, Bence Szebedy, Simon Tanner, Roger Wattenhofer

Drone delivery has been a hot topic in the industry in the past few years. However, existing approaches either focus on rural areas or rely on centralized drop-off locations from which the last mile delivery is performed. In this paper we tackle the problem of autonomous last-mile delivery in urban environments using an off-the-shelf drone. We build a prototype system that flies to the approximate delivery location using GPS and then finds the exact drop-off location using visual navigation. The drop-off location could, e.g., be on a balcony or porch, and simply needs to be indicated by a visual marker on the wall or window. We test our system components, including visual navigation and collision avoidance, in simulated environments. Finally, we deploy our drone in a real-world environment and show how it can find the drop-off point on a balcony. To stimulate future research on this topic we open-source our code.
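
As a rough illustration of the second, visual-navigation phase, the sketch below detects a fiducial marker in a camera frame and returns its offset from the image centre, which a controller could then steer to reduce. It assumes an OpenCV build with the ArUco contrib module (pre-4.7 API); the marker dictionary, and the use of ArUco at all, are assumptions rather than the paper's actual pipeline.

```python
import cv2
import numpy as np

def marker_offset(frame):
    """Sketch: locate a visual marker in a camera frame and return its pixel
    offset from the image centre (None if no marker is visible)."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is None:
        return None                          # marker not visible: keep searching
    centre = corners[0][0].mean(axis=0)      # marker centre in pixels
    h, w = frame.shape[:2]
    return centre - np.array([w / 2.0, h / 2.0])   # steer so this offset shrinks
```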


MIDI-VAE: Modeling Dynamics and Instrumentation of Music with Applications to Style Transfer

Sep 20, 2018
Gino Brunner, Andres Konrad, Yuyi Wang, Roger Wattenhofer

We introduce MIDI-VAE, a neural network model based on Variational Autoencoders that is capable of handling polyphonic music with multiple instrument tracks, as well as modeling the dynamics of music by incorporating note durations and velocities. We show that MIDI-VAE can perform style transfer on symbolic music by automatically changing pitches, dynamics and instruments of a music piece from, e.g., a Classical to a Jazz style. We evaluate the efficacy of the style transfer by training separate style validation classifiers. Our model can also interpolate between short pieces of music, produce medleys and create mixtures of entire songs. The interpolations smoothly change pitches, dynamics and instrumentation to create a harmonic bridge between two music pieces. To the best of our knowledge, this work represents the first successful attempt at applying neural style transfer to complete musical compositions.
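
A rough sketch of the model shape described above, a VAE whose decoder has separate heads for pitches, velocities (dynamics) and instruments, is given below in PyTorch. All dimensions and layer choices are illustrative assumptions, and the style-transfer machinery itself is omitted.

```python
import torch
import torch.nn as nn

class MidiVaeSketch(nn.Module):
    """Sketch: encode a piano-roll-like input into a latent code and decode it
    back through separate pitch, velocity and instrument heads."""

    def __init__(self, input_dim=512, latent_dim=64, hidden=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU())
        self.pitch_head = nn.Linear(hidden, 128)       # MIDI pitches
        self.velocity_head = nn.Linear(hidden, 128)    # note velocities (dynamics)
        self.instrument_head = nn.Linear(hidden, 16)   # instrument tracks

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        d = self.decoder(z)
        return (self.pitch_head(d), self.velocity_head(d),
                self.instrument_head(d), mu, logvar)
```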

* Paper accepted at the 19th International Society for Music Information Retrieval Conference, ISMIR 2018, Paris, France 

Symbolic Music Genre Transfer with CycleGAN

Sep 20, 2018
Gino Brunner, Yuyi Wang, Roger Wattenhofer, Sumu Zhao

Deep generative models such as Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) have recently been applied to style and domain transfer for images, and in the case of VAEs, music. GAN-based models employing several generators and some form of cycle consistency loss have been among the most successful for image domain transfer. In this paper we apply such a model to symbolic music and show the feasibility of our approach for music genre transfer. Evaluations using separate genre classifiers show that the style transfer works well. In order to improve the fidelity of the transformed music, we add additional discriminators that cause the generators to keep the structure of the original music mostly intact, while still achieving strong genre transfer. Visual and audible results further show the potential of our approach. To the best of our knowledge, this paper represents the first application of GANs to symbolic music domain transfer.
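
The cycle-consistency idea that keeps the transformed music close to the original fits in a few lines. The sketch below assumes two generator networks, G_ab (genre A to B) and G_ba (B to A), and uses the common L1 formulation; the weighting and loss details are assumptions rather than the paper's exact setup.

```python
import torch.nn.functional as F

def cycle_consistency_loss(G_ab, G_ba, real_a, real_b, lam=10.0):
    """Translating A -> B -> A (and B -> A -> B) should reconstruct the input,
    which discourages the generators from destroying the original structure."""
    rec_a = G_ba(G_ab(real_a))   # genre A -> B -> A
    rec_b = G_ab(G_ba(real_b))   # genre B -> A -> B
    return lam * (F.l1_loss(rec_a, real_a) + F.l1_loss(rec_b, real_b))
```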

* Paper accepted at the 30th International Conference on Tools with Artificial Intelligence, ICTAI 2018, Volos, Greece 

Natural Language Multitasking: Analyzing and Improving Syntactic Saliency of Hidden Representations

Jan 18, 2018
Gino Brunner, Yuyi Wang, Roger Wattenhofer, Michael Weigelt

We train multi-task autoencoders on linguistic tasks and analyze the learned hidden sentence representations. The representations change significantly when translation and part-of-speech decoders are added. The more decoders a model employs, the better it clusters sentences according to their syntactic similarity, as the representation space becomes less entangled. We explore the structure of the representation space by interpolating between sentences, which yields interesting pseudo-English sentences, many of which have recognizable syntactic structure. Lastly, we point out an interesting property of our models: the difference vector between two sentences can be added to a third sentence with similar features to change it in a meaningful way.
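
The difference-vector property mentioned last can be expressed directly in representation space. The sketch below assumes an `encoder`/`decoder` pair standing in for the multi-task autoencoder's components; these interfaces are hypothetical, not the paper's code.

```python
import torch

def transfer_feature(encoder, decoder, sent_a, sent_b, sent_c):
    """Add the offset between two sentence representations (e.g. a syntactic
    change from sent_a to sent_b) to a third sentence and decode the result."""
    with torch.no_grad():
        offset = encoder(sent_b) - encoder(sent_a)
        return decoder(encoder(sent_c) + offset)
```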

* The 31st Annual Conference on Neural Information Processing Systems (NIPS) - Workshop on Learning Disentangled Features: from Perception to Control, Long Beach, CA, December 2017 

JamBot: Music Theory Aware Chord Based Generation of Polyphonic Music with LSTMs

Nov 21, 2017
Gino Brunner, Yuyi Wang, Roger Wattenhofer, Jonas Wiesendanger

We propose a novel approach for the generation of polyphonic music based on LSTMs. We generate music in two steps. First, a chord LSTM predicts a chord progression based on a chord embedding. A second LSTM then generates polyphonic music from the predicted chord progression. The generated music sounds pleasing and harmonic, with only a few dissonant notes. It has a clear long-term structure that is similar to what a musician would play during a jam session. We show that our approach is sensible from a music theory perspective by evaluating the learned chord embeddings. Surprisingly, our simple model managed to extract the circle of fifths, an important tool in music theory, from the dataset.
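
The two-step generation can be pictured as two stacked models: a chord-level LSTM over a learned chord embedding, and a note-level LSTM conditioned on the predicted chords. The PyTorch sketch below is illustrative only; vocabulary sizes and dimensions are assumptions.

```python
import torch
import torch.nn as nn

class ChordLSTM(nn.Module):
    """Step 1 (sketch): predict the next chord from an embedded chord history."""
    def __init__(self, num_chords=50, embed_dim=10, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(num_chords, embed_dim)   # learned chord embedding
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_chords)

    def forward(self, chord_ids):
        h, _ = self.lstm(self.embed(chord_ids))
        return self.out(h)                                  # next-chord logits per step

class PolyphonicLSTM(nn.Module):
    """Step 2 (sketch): generate a piano roll conditioned on the chord progression."""
    def __init__(self, num_chords=50, num_pitches=128, hidden=512):
        super().__init__()
        self.lstm = nn.LSTM(num_chords + num_pitches, hidden, batch_first=True)
        self.out = nn.Linear(hidden, num_pitches)

    def forward(self, chords_onehot, prev_notes):
        x = torch.cat([chords_onehot, prev_notes], dim=-1)
        h, _ = self.lstm(x)
        return torch.sigmoid(self.out(h))                   # note-on probabilities
```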

* Paper presented at the 29th International Conference on Tools with Artificial Intelligence, ICTAI 2017, Boston, MA, USA 

Teaching a Machine to Read Maps with Deep Reinforcement Learning

Nov 20, 2017
Gino Brunner, Oliver Richter, Yuyi Wang, Roger Wattenhofer

The ability to use a 2D map to navigate a complex 3D environment is quite remarkable, and even difficult for many humans. Localization and navigation are also important problems in domains such as robotics, and have recently become a focus of the deep reinforcement learning community. In this paper we teach a reinforcement learning agent to read a map in order to find the shortest way out of a random maze it has never seen before. Our system combines several state-of-the-art methods such as A3C and incorporates novel elements such as a recurrent localization cell. Our agent learns to localize itself based on 3D first-person images and an approximate orientation angle. The agent generalizes well to bigger mazes, showing that it learned useful localization and navigation capabilities.
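
The recurrent localization cell can be imagined as a small recurrent module that maintains a belief over map positions, updated from the first-person view and the approximate orientation. The sketch below is a guess at the general shape, not the paper's cell; map size, feature dimensions and the fusion scheme are assumptions.

```python
import torch
import torch.nn as nn

class RecurrentLocalizationCell(nn.Module):
    """Sketch: update a recurrent state from view features and orientation,
    and read out a belief distribution over discrete map positions."""
    def __init__(self, view_dim=256, map_cells=21 * 21, hidden=256):
        super().__init__()
        self.gru = nn.GRUCell(view_dim + 1, hidden)     # +1 for the orientation angle
        self.to_belief = nn.Linear(hidden, map_cells)   # logits over map positions

    def forward(self, view_features, orientation, h):
        x = torch.cat([view_features, orientation.unsqueeze(-1)], dim=-1)
        h = self.gru(x, h)
        belief = torch.softmax(self.to_belief(h), dim=-1)   # "where am I on the map?"
        return belief, h
```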

* Paper accepted at 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, New Orleans, Louisiana, USA 

On the Validity of Self-Attention as Explanation in Transformer Models

Aug 12, 2019
Gino Brunner, Yang Liu, Damián Pascual, Oliver Richter, Roger Wattenhofer

Explainability of deep learning systems is a vital requirement for many applications, but it remains an unsolved problem. Recent self-attention based models for natural language processing, such as the Transformer or BERT, offer hope of greater explainability by providing attention maps that can be directly inspected. Nevertheless, by looking only at the attention maps one easily overlooks that the attention is not over words but over hidden embeddings, which themselves can be mixed representations of multiple embeddings. We investigate to what extent the implicit assumption made in many recent papers - that hidden embeddings at all layers still correspond to the underlying words - is justified. Using a gradient-based attribution method, we quantify how much embeddings are mixed and find that already after the first layer less than 50% of an embedding is attributed to the underlying word, declining thereafter to a median contribution of 7.5% in the last layer. While the underlying word remains the largest single contributor to its embedding throughout the layers, we argue that attention visualizations are misleading and should be treated with care when explaining the underlying deep learning system.
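
As a simplified illustration of the kind of gradient-based attribution described here, the sketch below measures how sensitive one hidden embedding is to each input token's embedding. It assumes a Hugging Face-style model that accepts `inputs_embeds` and returns hidden states; it is a rough sensitivity proxy, not the paper's exact attribution method.

```python
import torch

def embedding_attribution(model, input_embeddings, layer, position):
    """Attribute one hidden embedding (at `layer`, `position`) back to the
    input token embeddings via gradient norms, returning relative scores."""
    emb = input_embeddings.detach().clone().requires_grad_(True)
    out = model(inputs_embeds=emb, output_hidden_states=True)
    target = out.hidden_states[layer][0, position]    # one hidden embedding
    target.norm().backward()                          # scalar proxy for the embedding
    contrib = emb.grad[0].norm(dim=-1)                # one score per input token
    return contrib / contrib.sum()                    # relative contribution per word
```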

* Preprint. Work in progress 
