Models, code, and papers for "Gil Weinberg":
As human-robot collaboration opportunities continue to expand, trust becomes ever more important for full engagement with and utilization of robots. Affective trust, built on emotional relationships and interpersonal bonds, is particularly critical, as it is more resilient to mistakes and increases the willingness to collaborate. In this paper we present a novel model, built on music-driven emotional prosody and gestures, that encourages the perception of a robotic identity designed to avoid the uncanny valley. Symbolic musical phrases were generated and tagged with emotional information by human musicians. These phrases controlled a synthesis engine that played back pre-rendered audio samples generated through interpolation of phonemes and electronic instruments. Gestures were also driven by the symbolic phrases, encoding the emotion from the musical phrase into low degree-of-freedom movements. Through a user study, we showed that our system was able to accurately portray a range of emotions to the user. We also showed, with a statistically significant result, that our non-linguistic audio generation achieved an 8% higher mean trust rating than a state-of-the-art text-to-speech system.
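The emotion-to-gesture encoding described above can be sketched as a small mapping from a tagged emotion to movement parameters. The valence/arousal parameterization and the parameter names below are illustrative assumptions, not the paper's actual encoding:

```python
def gesture_params(valence, arousal):
    """Map a tagged emotion (valence, arousal, each in [-1, 1]) to
    parameters for a low degree-of-freedom gesture.

    This mapping is a hypothetical sketch: arousal drives movement speed
    and amplitude, valence drives how open the posture is.
    """
    speed = 0.5 + 0.5 * arousal           # higher arousal -> faster movement
    amplitude = 0.5 + 0.5 * abs(arousal)  # stronger emotion -> larger motion
    openness = 0.5 + 0.5 * valence        # positive valence -> more open posture
    return {"speed": speed, "amplitude": amplitude, "openness": openness}
```

A driver could call this once per symbolic phrase, so the same emotion tag that selects the audio sample also parameterizes the accompanying gesture.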
Several methods exist for a computer to generate music from data, including Markov chains, recurrent neural networks, recombinancy, and grammars. We explore the use of unit selection and concatenation as a means of generating music using a ranking-based procedure, where we consider a unit to be a variable-length number of measures of music. We first examine whether a unit selection method restricted to a finite-size unit library can be sufficient for encompassing a wide spectrum of music. We do this by developing a deep autoencoder that encodes a musical input and reconstructs the input by selecting from the library. We then describe a generative model that combines a deep structured semantic model (DSSM) with an LSTM to predict the next unit, where units consist of four, two, and one measures of music. We evaluate the generative model using objective metrics, including mean rank and accuracy, and with a subjective listening test in which expert musicians are asked to complete a forced-choice ranking task. We compare our model to a note-level generative baseline consisting of a stacked LSTM trained to predict forward by one note.
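The next-unit prediction step can be illustrated as ranking library units against a context embedding. In the paper, the embeddings come from a DSSM combined with an LSTM; here they are stand-in vectors, and `rank_units` is a hypothetical helper showing only the cosine-similarity ranking:

```python
import numpy as np

def rank_units(context_emb, unit_embs):
    """Rank candidate units by cosine similarity to the context embedding.

    context_emb : (d,) embedding of the music generated so far
    unit_embs   : (n, d) embeddings of the n candidate units in the library
    Returns candidate indices, best match first.
    """
    context = context_emb / np.linalg.norm(context_emb)
    units = unit_embs / np.linalg.norm(unit_embs, axis=1, keepdims=True)
    scores = units @ context           # cosine similarity per candidate unit
    return np.argsort(scores)[::-1]    # highest-scoring unit first
```

Generation then concatenates the top-ranked unit onto the output and repeats, re-embedding the extended context at each step.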
The design and evaluation of a robotic prosthesis for a drummer with a transradial amputation is presented. The principal objective of the prosthesis is to simulate the role fingers play in drumming, primarily controlling the manner in which the drumstick rebounds after initial impact. This is achieved using a DC motor driven by a variable impedance control framework within a shared control system. The user's ability to perform with and control the prosthesis is evaluated using a musical synchronization study. A secondary objective of the prosthesis is to explore the implications for musical expression and human-robot interaction when a second, completely autonomous stick is added to the prosthesis. This wearable robotic musician interacts with the user by listening to the music and responding with different rhythms and behaviors. We recount empirical findings based on the user's experience of performing under such a paradigm.
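The rebound control can be sketched as a spring-damper (impedance) law at the stick's pivot. The function below is a minimal illustration, not the paper's implementation; the key idea is that the gains `k` and `b` are varied over the stroke, the way finger grip stiffness varies, to shape how the stick rebounds:

```python
def impedance_torque(theta, theta_dot, theta_ref, k, b):
    """Variable impedance control law for the drumstick pivot.

    theta, theta_dot : measured stick angle (rad) and angular velocity (rad/s)
    theta_ref        : desired rest angle of the stick (rad)
    k, b             : stiffness and damping gains; scheduling these over the
                       stroke emulates how fingers modulate stick rebound
    Returns the motor torque command.
    """
    return -k * (theta - theta_ref) - b * theta_dot
```

A stiff, lightly damped setting yields many fast bounces after impact; a softer, heavily damped setting deadens the stick quickly.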
Supernumerary Robotic Limbs (SRLs) exhibit inherently compliant behavior due to the elasticity at the interface between human tissue and the robot. Depending on the application, this compliance can prominently influence the operation of an SRL. To control the residual vibrations of SRLs, we use input shaping, a computationally inexpensive approach. The effectiveness of this method in suppressing the residual vibrations of an SRL has been demonstrated through robustness analysis. User studies show that reducing the vibrations using input shaping directly increases user satisfaction and comfort by at least 9%. It is also observed that 36% of the users preferred unshaped commands; we hypothesize that the shaped commands impose a higher cognitive load on the user than unshaped commands. This shows that in human-robot interaction, user satisfaction is as important a parameter as traditional performance criteria and should be taken into account when evaluating the success of any vibration-control method.
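A minimal sketch of the input-shaping idea, assuming a classic two-impulse Zero-Vibration (ZV) shaper for a single vibratory mode (the abstract does not specify which shaper was used):

```python
import math

def zv_shaper(wn, zeta):
    """Two-impulse Zero-Vibration (ZV) shaper for a mode with natural
    frequency wn (rad/s) and damping ratio zeta (0 <= zeta < 1).

    Convolving any reference command with these impulses cancels the
    residual vibration of the modeled mode.
    """
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    wd = wn * math.sqrt(1.0 - zeta ** 2)       # damped natural frequency
    amps = [1.0 / (1.0 + K), K / (1.0 + K)]    # impulse amplitudes (sum to 1)
    times = [0.0, math.pi / wd]                # 2nd impulse at half the damped period
    return amps, times
```

The shaped command lags the original by half the damped vibration period, which is one plausible source of the extra cognitive load some users reported with shaped commands.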