Models, code, and papers for "Mykel Kochenderfer":

Efficient Autonomy Validation in Simulation with Adaptive Stress Testing

Jul 16, 2019
Mark Koren, Mykel Kochenderfer

During the development of autonomous systems such as driverless cars, it is important to characterize the scenarios that are most likely to result in failure. Adaptive Stress Testing (AST) searches for the most likely failure scenario by formulating the search as a Markov decision process (MDP). Our previous work used a deep reinforcement learning (DRL) solver to identify likely failure scenarios. However, the solver's use of a feed-forward neural network with a discretized space of possible initial conditions poses two major problems. First, the system is not treated as a black box: the solver requires access to the system's internal state, which leads to considerable implementation complexity. Second, to simulate realistic settings, a new instance of the solver must be run for each initial condition. Running a new solver for each initial condition not only significantly increases the computational cost but also disregards the underlying relationship between similar initial conditions. We address both problems by employing a recurrent neural network that takes a set of initial conditions from a continuous space as input. This approach enables robust and efficient detection of failures because the solution generalizes across the entire space of initial conditions. By simulating a scenario in which an autonomous car drives while a pedestrian crosses the road, we demonstrate that the solver can now find solutions for problems that would previously have been intractable.

* Submitted to IEEE ITSC 2019 
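
The AST formulation above can be sketched end to end. The toy below is our own illustration, not the paper's solver: a 1-D car/pedestrian simulator, Gaussian disturbances whose log-likelihood acts as the reward, and a random-restart search standing in for the DRL solver (all names and constants are invented for the sketch).

```python
import math, random

# Toy 1-D instance of the AST formulation: a car approaches a crossing
# pedestrian, and the "solver" searches over pedestrian accelerations
# (the MDP actions) for the most likely disturbance sequence that ends
# in a collision.  The score is the log-likelihood of the disturbances,
# with a large penalty when no failure event occurs.

def rollout(disturbances, car_speed=10.0, dt=0.1):
    """Simulate; return (failure, log-likelihood of the disturbances)."""
    car_x, ped_y = -20.0, -2.0   # car approaches x=0; pedestrian walks toward y=0
    ped_v = 1.0
    loglik = 0.0
    for a in disturbances:
        loglik += -0.5 * (a / 0.5) ** 2            # Gaussian disturbance model
        ped_v += a * dt
        ped_y += ped_v * dt
        car_x += car_speed * dt
        if abs(car_x) < 1.0 and abs(ped_y) < 1.0:  # same place, same time
            return True, loglik
    return False, loglik

def ast_search(horizon=25, iters=2000, seed=0):
    """Random-restart search over disturbance sequences (a stand-in for
    the learned solver: it maximizes likelihood subject to failure)."""
    rng = random.Random(seed)
    best, best_score = None, -math.inf
    for _ in range(iters):
        cand = [rng.gauss(0.0, 0.5) for _ in range(horizon)]
        fail, loglik = rollout(cand)
        score = loglik + (0.0 if fail else -1e6)   # failures dominate
        if score > best_score:
            best, best_score = cand, score
    return best, best_score

best, score = ast_search()
```

The returned sequence is the most likely failure found: a disturbance trajectory that still produces a collision while staying close to nominal pedestrian behavior.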

Online algorithms for POMDPs with continuous state, action, and observation spaces

Sep 06, 2018
Zachary Sunberg, Mykel Kochenderfer

Online solvers for partially observable Markov decision processes have been applied to problems with large discrete state spaces, but continuous state, action, and observation spaces remain a challenge. This paper begins by investigating double progressive widening (DPW) as a solution to this challenge. However, we prove that this modification alone is not sufficient because the belief representations in the search tree collapse to a single particle, causing the algorithm to converge to a policy that is suboptimal regardless of the computation time. This paper proposes and evaluates two new algorithms, POMCPOW and PFT-DPW, that overcome this deficiency by using weighted particle filtering. Simulation results show that these modifications allow the algorithms to be successful where previous approaches fail.

* Short version published in 2018 proceedings of the International Conference on Automated Planning and Scheduling (ICAPS) 
* Added Multilane section 
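
The two ingredients named above can be sketched in a few lines. This is our own toy model, not the paper's implementation: a 1-D random-walk dynamics and Gaussian observation model stand in for a real POMDP.

```python
import math, random

# Weighted particle belief update at the heart of POMCPOW/PFT-DPW:
# instead of keeping only particles whose simulated observation exactly
# matches the real one (which collapses continuous beliefs to a single
# particle), every particle is kept and reweighted by the observation
# likelihood.

def obs_likelihood(obs, state, sigma=1.0):
    return math.exp(-0.5 * ((obs - state) / sigma) ** 2)

def weighted_update(particles, weights, action, obs, rng):
    """Propagate particles through toy 1-D dynamics, then reweight by
    how well each particle explains the observation."""
    new_particles = [s + action + rng.gauss(0.0, 0.2) for s in particles]
    new_weights = [w * obs_likelihood(obs, s)
                   for w, s in zip(weights, new_particles)]
    total = sum(new_weights) or 1.0
    return new_particles, [w / total for w in new_weights]

def dpw_limit(n_visits, k=4.0, alpha=0.5):
    """Double progressive widening: the number of children allowed at a
    tree node grows sublinearly with its visit count."""
    return int(math.ceil(k * n_visits ** alpha))

rng = random.Random(1)
particles = [rng.gauss(0.0, 2.0) for _ in range(100)]
weights = [1.0 / 100] * 100
particles, weights = weighted_update(particles, weights,
                                     action=1.0, obs=1.0, rng=rng)
mean = sum(w * s for w, s in zip(weights, particles))
```

After one update the weighted mean is pulled toward the observation, and no particle is discarded, so the belief retains its spread instead of collapsing.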

Hybrid Planning for Dynamic Multimodal Stochastic Shortest Paths

Jun 21, 2019
Shushman Choudhury, Mykel J. Kochenderfer

Sequential decision problems in applications such as manipulation in warehouses, multi-step meal preparation, and routing in autonomous vehicle networks often involve reasoning about uncertainty, planning over discrete modes as well as continuous states, and reacting to dynamic updates. To formalize such problems generally, we introduce a class of Markov Decision Processes (MDPs) called Dynamic Multimodal Stochastic Shortest Paths (DMSSPs). Much of the work in these domains solves deterministic variants, which can yield poor results when the uncertainty has downstream effects. We develop a Hybrid Stochastic Planning (HSP) algorithm, which uses domain-agnostic abstractions to efficiently unify heuristic search for planning over discrete modes, approximate dynamic programming for stochastic planning over continuous states, and hierarchical interleaved planning and execution. In the domain of autonomous multimodal routing, HSP obtains significantly higher quality solutions than a state-of-the-art Upper Confidence Trees algorithm and a two-level Receding Horizon Control algorithm.

* 20 pages, 5 figures, 5 tables; Under Review 

Dynamic Real-time Multimodal Routing with Hierarchical Hybrid Planning

Feb 05, 2019
Shushman Choudhury, Mykel J. Kochenderfer

We introduce the problem of Dynamic Real-time Multimodal Routing (DREAMR), which requires planning and executing routes under uncertainty for an autonomous agent. The agent has access to a time-varying transit vehicle network in which it can use multiple modes of transportation. For instance, a drone can either fly or ride on terrain vehicles for segments of its route. DREAMR is a difficult problem of sequential decision making under uncertainty with both discrete and continuous variables. We design a novel hierarchical hybrid planning framework that solves the DREAMR problem by exploiting its structural decomposability. Our framework consists of a global open-loop planning layer that invokes and monitors a local closed-loop execution layer. Additional abstractions allow efficient and seamless interleaving of planning and execution. We create a large-scale simulation for DREAMR problems, with each scenario having hundreds of transportation routes and thousands of connection points. Our algorithmic framework significantly outperforms a receding horizon control baseline, in terms of elapsed time to reach the destination and energy expended by the agent.

* 8 pages 

Using Neural Networks to Generate Information Maps for Mobile Sensors

Sep 26, 2018
Louis Dressel, Mykel J. Kochenderfer

Target localization is a critical task for mobile sensors and has many applications. However, generating informative trajectories for these sensors is a challenging research problem. A common method uses information maps that estimate the value of taking measurements from any point in the sensor state space. These information maps are used to generate trajectories; for example, a trajectory might be designed so its distribution of measurements matches the distribution of the information map. Regardless of the trajectory generation method, generating information maps as new observations are made is critical. However, it can be challenging to compute these maps in real-time. We propose using convolutional neural networks to generate information maps from a target estimate and sensor model in real-time. Simulations show that maps are accurately rendered while offering orders of magnitude reduction in computation time.

* Accepted to the 2018 IEEE Conference on Decision and Control (CDC) 
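
The input-output mapping the network learns can be made concrete with a hand-written stand-in. This is our own toy, not the paper's model: here the information value of a sensor pose is approximated by convolving the target belief with a sensor-footprint kernel, the kind of map the CNN is trained to produce in a single forward pass.

```python
# Toy information map: convolve a target belief grid with a kernel
# describing the sensor's footprint, yielding the value of taking a
# measurement from each pose.

def convolve2d(belief, kernel):
    """'Same'-size 2-D convolution with zero padding."""
    h, w = len(belief), len(belief[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for ky in range(kh):
                for kx in range(kw):
                    by, bx = y + ky - oy, x + kx - ox
                    if 0 <= by < h and 0 <= bx < w:
                        out[y][x] += belief[by][bx] * kernel[ky][kx]
    return out

# Belief concentrated on one cell; the sensor sees a 3x3 neighborhood.
belief = [[0.0] * 5 for _ in range(5)]
belief[2][2] = 1.0
kernel = [[1.0] * 3 for _ in range(3)]

info_map = convolve2d(belief, kernel)
```

Every pose within the footprint of the believed target location gets high value; replacing this hand-coded kernel with a learned network is what makes the real-time rendering in the paper possible.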

On the Optimality of Ergodic Trajectories for Information Gathering Tasks

Aug 20, 2018
Louis Dressel, Mykel J. Kochenderfer

Recently, ergodic control has been suggested as a means to guide mobile sensors for information gathering tasks. In ergodic control, a mobile sensor follows a trajectory that is ergodic with respect to some information density distribution. A trajectory is ergodic if time spent in a state space region is proportional to the information density of the region. Although ergodic control has shown promising experimental results, there is little understanding of why it works or when it is optimal. In this paper, we study a problem class under which optimal information gathering trajectories are ergodic. This class relies on a submodularity assumption for repeated measurements from the same state. It is assumed that information available in a region decays linearly with time spent there. This assumption informs selection of the horizon used in ergodic trajectory generation. We support our claims with a set of experiments that demonstrate the link between ergodicity, optimal information gathering, and submodularity.

* Presented at 2018 American Control Conference (ACC) 
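
The ergodicity definition above is easy to check numerically. The snippet below is our own illustration (a 1-D domain split into bins, not the paper's experiments): a trajectory is ergodic when its time-average distribution matches the information density.

```python
# Measure how far a trajectory on [0, 1) is from being ergodic with
# respect to a binned information density phi.

def time_fractions(trajectory, n_bins):
    """Fraction of time steps spent in each bin of [0, 1)."""
    counts = [0] * n_bins
    for x in trajectory:
        counts[min(int(x * n_bins), n_bins - 1)] += 1
    return [c / len(trajectory) for c in counts]

def ergodicity_gap(trajectory, phi):
    """L1 distance between the time-average and the density."""
    fracs = time_fractions(trajectory, len(phi))
    return sum(abs(f - p) for f, p in zip(fracs, phi))

# Density: twice as much information in the right half of [0, 1).
phi = [1 / 3, 2 / 3]

# A trajectory that splits its time 1/3 vs 2/3 matches phi...
ergodic_traj = [0.25] * 10 + [0.75] * 20
# ...while a uniform sweep spends too little time where information is.
uniform_traj = [i / 30 for i in range(30)]

good = ergodicity_gap(ergodic_traj, phi)
bad = ergodicity_gap(uniform_traj, phi)
```

The first trajectory has zero gap (perfectly ergodic for this density), while the uniform sweep does not, which is exactly the property the paper connects to submodular information decay.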

Efficient and Low-cost Localization of Radio Signals with a Multirotor UAV

Aug 13, 2018
Louis Dressel, Mykel J. Kochenderfer

Localizing radio frequency (RF) sources with an unmanned aerial vehicle (UAV) has many important applications. As a result, UAV-based localization has been the focus of much research. However, previous approaches rely heavily on custom electronics and specialized knowledge, are not robust and require extensive calibration, or are inefficient with measurements and waste energy on a battery-constrained platform. In this work, we present a system based on a multirotor UAV that addresses these shortcomings. Our system measures signal strength received by two antennas to update a probability distribution over possible transmitter locations. An information-theoretic controller is used to direct the UAV's search. Signal strength is measured with low-cost, commercial-off-the-shelf components. We demonstrate our system using three transmitters: a continuous signal in the UHF band, a wildlife collar pulsing in the VHF band, and a cell phone making a voice call over LTE. Our system significantly outperforms previous methods, localizing the RF source in the same time it takes previous methods to make a single measurement.

* Best student paper at 2018 AIAA GNC conference 
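
The estimation loop described above can be sketched compactly. This is our stripped-down illustration, not the flight code: a discrete grid of candidate transmitter locations is reweighted by Bayes' rule after each signal-strength measurement (the path-loss constants below are invented for the sketch).

```python
import math

# Grid-based Bayesian localization of an RF source from received
# signal strength (RSSI) measurements taken at different UAV poses.

def rssi_model(uav, cell, p0=-40.0):
    """Expected RSSI under a log-distance path-loss model."""
    d = math.hypot(uav[0] - cell[0], uav[1] - cell[1])
    return p0 - 20.0 * math.log10(max(d, 1.0))

def bayes_update(belief, uav, measurement, sigma=4.0):
    """Reweight every candidate cell by the measurement likelihood."""
    post = {}
    for cell, p in belief.items():
        err = measurement - rssi_model(uav, cell)
        post[cell] = p * math.exp(-0.5 * (err / sigma) ** 2)
    total = sum(post.values())
    return {c: p / total for c, p in post.items()}

def entropy(belief):
    """Posterior entropy: what an information-theoretic controller
    would greedily try to reduce with its next measurement."""
    return -sum(p * math.log(p) for p in belief.values() if p > 0)

# Uniform prior over a 10x10 grid; true transmitter at (7, 3).
cells = [(x, y) for x in range(10) for y in range(10)]
belief = {c: 1.0 / len(cells) for c in cells}
true_tx = (7, 3)

h0 = entropy(belief)
for uav in [(0, 0), (9, 0), (0, 9), (9, 9)]:
    belief = bayes_update(belief, uav, rssi_model(uav, true_tx))
h1 = entropy(belief)
best = max(belief, key=belief.get)
```

Four noiseless measurements from the grid corners are enough to concentrate the posterior on the true cell; the real system selects measurement poses to shrink this entropy as fast as possible.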

Burn-In Demonstrations for Multi-Modal Imitation Learning

Oct 13, 2017
Alex Kuefler, Mykel J. Kochenderfer

Recent work on imitation learning has generated policies that reproduce expert behavior from multi-modal data. However, past approaches have focused only on recreating a small number of distinct, expert maneuvers, or have relied on supervised learning techniques that produce unstable policies. This work extends InfoGAIL, an algorithm for multi-modal imitation learning, to reproduce behavior over an extended period of time. Our approach involves reformulating the typical imitation learning setting to include "burn-in demonstrations" upon which policies are conditioned at test time. We demonstrate that our approach outperforms standard InfoGAIL in maximizing the mutual information between predicted and unseen style labels in road scene simulations, and we show that our method leads to policies that imitate expert autonomous driving systems over long time horizons.

* 1st Conference on Robotic Learning, Non-archival Track 

Simultaneous Policy Learning and Latent State Inference for Imitating Driver Behavior

Apr 19, 2017
Jeremy Morton, Mykel J. Kochenderfer

In this work, we propose a method for learning driver models that account for variables that cannot be observed directly. When trained on a synthetic dataset, our models are able to learn encodings for vehicle trajectories that distinguish between four distinct classes of driver behavior. Such encodings are learned without any knowledge of the number of driver classes or any objective that directly requires the models to learn encodings for each class. We show that driving policies trained with knowledge of latent variables are more effective than baseline methods at imitating the driver behavior that they are trained to replicate. Furthermore, we demonstrate that the actions chosen by our policy are heavily influenced by the latent variable settings that are provided to them.

* 7 pages, 6 figures, 2 tables 

Optimizing Collision Avoidance in Dense Airspace using Deep Reinforcement Learning

Dec 20, 2019
Sheng Li, Maxim Egorov, Mykel Kochenderfer

New methodologies will be needed to ensure the airspace remains safe and efficient as traffic densities rise to accommodate new unmanned operations. This paper explores how unmanned free-flight traffic may operate in dense airspace. We develop and analyze autonomous collision avoidance systems for aircraft operating in dense airspace where traditional collision avoidance systems fail. We propose a metric for quantifying the decision burden on a collision avoidance system as well as a metric for measuring the impact of the collision avoidance system on airspace. We use deep reinforcement learning to compute corrections for an existing collision avoidance approach to account for dense airspace. The results show that a corrected collision avoidance system can operate more efficiently than traditional methods in dense airspace while maintaining high levels of safety.

* Thirteenth USA/Europe Air Traffic Management Research and Development Seminar 

Object Exchangeability in Reinforcement Learning: Extended Abstract

May 07, 2019
John Mern, Dorsa Sadigh, Mykel Kochenderfer

Although deep reinforcement learning has advanced significantly over the past several years, sample efficiency remains a major challenge. Careful choice of input representations can help improve efficiency depending on the structure present in the problem. In this work, we present an attention-based method to project inputs into an efficient representation space that is invariant under changes to input ordering. We show that our proposed representation results in a search space that is a factor of m! smaller for inputs of m objects. Our experiments demonstrate improvements in sample efficiency for policy gradient methods on a variety of tasks. We show that our representation allows us to solve problems that are otherwise intractable when using naive approaches.

* In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Montreal, Canada, May 13-17, 2019, IFAAMAS, 3 pages 
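
The invariance claim above is easy to demonstrate with a toy layer of our own (not the authors' architecture): pooling object features with a symmetric, attention-style weighted sum yields the same representation for every ordering of the inputs, so the policy never has to learn all m! permutations.

```python
import math, random

# Attention-style pooling over a *set* of object feature vectors.
# Because softmax weights and the weighted sum treat objects
# symmetrically, the output is invariant to input ordering.

def attention_pool(objects, query):
    scores = [sum(q * o for q, o in zip(query, obj)) for obj in objects]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]      # stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(objects[0])
    return [sum(w * obj[d] for w, obj in zip(weights, objects))
            for d in range(dim)]

rng = random.Random(0)
objects = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(5)]
query = [0.5, -1.0, 2.0]

pooled = attention_pool(objects, query)
pooled_shuffled = attention_pool(objects[::-1], query)  # reversed order
```

Reordering the five objects leaves the pooled representation unchanged (up to floating-point summation order), which is the m!-fold reduction of the input space the abstract refers to.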

Image-based Guidance of Autonomous Aircraft for Wildfire Surveillance and Prediction

Mar 01, 2019
Kyle D. Julian, Mykel J. Kochenderfer

Small unmanned aircraft can help firefighters combat wildfires by providing real-time surveillance of the growing fires. However, guiding the aircraft autonomously given only wildfire images is a challenging problem. This work models noisy images obtained from on-board cameras and proposes two approaches to filtering the wildfire images. The first approach uses a simple Kalman filter to reduce noise and update a belief map in observed areas. The second approach uses a particle filter to predict wildfire growth and uses observations to estimate uncertainties relating to wildfire expansion. The belief maps are used to train a deep reinforcement learning controller, which learns a policy to navigate the aircraft to survey the wildfire while avoiding flight directly over the fire. Simulation results show that the proposed controllers precisely guide the aircraft and accurately estimate wildfire growth, and a study of observation noise demonstrates the robustness of the particle filter approach.
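
The first filtering approach can be sketched per cell. This is our toy scalar version, not the paper's filter (the noise parameters are invented): each observed belief-map cell gets one Kalman predict/update step, pulling the estimate toward the noisy camera measurement while shrinking its variance.

```python
# One scalar Kalman filter per belief-map cell: predict (the fire may
# have spread, so variance grows by q), then update toward the noisy
# camera measurement z with measurement variance r.

def kalman_cell(mean, var, z, r=0.25, q=0.01):
    var += q                      # predict: fire state may have changed
    k = var / (var + r)           # Kalman gain
    mean += k * (z - mean)        # update toward the measurement
    var *= (1.0 - k)              # posterior variance shrinks
    return mean, var

mean, var = 0.5, 1.0              # uninformative initial cell belief
for z in [0.9, 1.0, 0.95, 1.0]:   # repeated noisy "burning" observations
    mean, var = kalman_cell(mean, var, z)
```

After a few consistent observations the cell's mean approaches the "burning" value and its variance collapses; stacking such cells into a map gives the belief input the DRL controller is trained on.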


Learning Probabilistic Trajectory Models of Aircraft in Terminal Airspace from Position Data

Oct 22, 2018
Shane Barratt, Mykel Kochenderfer, Stephen Boyd

Models for predicting aircraft motion are an important component of modern aeronautical systems. These models help aircraft plan collision avoidance maneuvers and help conduct offline performance and safety analyses. In this article, we develop a method for learning a probabilistic generative model of aircraft motion in terminal airspace, the controlled airspace surrounding a given airport. The method fits the model based on a historical dataset of radar-based position measurements of aircraft landings and takeoffs at that airport. We find that the model generates realistic trajectories, provides accurate predictions, and captures the statistical properties of aircraft trajectories. Furthermore, the model trains quickly, is compact, and allows for efficient real-time inference.

* IEEE Transactions on Intelligent Transportation Systems 

Distributed Wildfire Surveillance with Autonomous Aircraft using Deep Reinforcement Learning

Oct 09, 2018
Kyle D. Julian, Mykel J. Kochenderfer

Teams of autonomous unmanned aircraft can be used to monitor wildfires, enabling firefighters to make informed decisions. However, controlling multiple autonomous fixed-wing aircraft to maximize forest fire coverage is a complex problem. The state space is high dimensional, the fire propagates stochastically, the sensor information is imperfect, and the aircraft must coordinate with each other to accomplish their mission. This work presents two deep reinforcement learning approaches for training decentralized controllers that accommodate the high dimensionality and uncertainty inherent in the problem. The first approach controls the aircraft using immediate observations of the individual aircraft. The second approach allows aircraft to collaborate on a map of the wildfire's state and maintain a time history of locations visited, which are used as inputs to the controller. Simulation results show that both approaches allow the aircraft to accurately track wildfire expansions and outperform an online receding horizon controller. Additional simulations demonstrate that the approach scales with different numbers of aircraft and generalizes to different wildfire shapes.


The Value of Inferring the Internal State of Traffic Participants for Autonomous Freeway Driving

Feb 02, 2017
Zachary Sunberg, Christopher Ho, Mykel Kochenderfer

Safe interaction with human drivers is one of the primary challenges for autonomous vehicles. In order to plan driving maneuvers effectively, the vehicle's control system must infer and predict how humans will behave based on their latent internal state (e.g., intentions and aggressiveness). This research uses a simple model for human behavior with unknown parameters that make up the internal states of the traffic participants and presents a method for quantifying the value of estimating these states and planning with their uncertainty explicitly modeled. An upper performance bound is established by an omniscient Monte Carlo Tree Search (MCTS) planner that has perfect knowledge of the internal states. A baseline lower bound is established by planning with MCTS assuming that all drivers have the same internal state. MCTS variants are then used to solve a partially observable Markov decision process (POMDP) that models the internal state uncertainty to determine whether inferring the internal state offers an advantage over the baseline. Applying this method to a freeway lane changing scenario reveals that there is a significant performance gap between the upper bound and baseline. POMDP planning techniques come close to closing this gap, especially when important hidden model parameters are correlated with measurable parameters.


Point-Based Methods for Model Checking in Partially Observable Markov Decision Processes

Jan 11, 2020
Maxime Bouton, Jana Tumova, Mykel J. Kochenderfer

Autonomous systems are often required to operate in partially observable environments. They must reliably execute a specified objective even with incomplete information about the state of the environment. We propose a methodology to synthesize policies that satisfy a linear temporal logic formula in a partially observable Markov decision process (POMDP). By formulating a planning problem, we show how to use point-based value iteration methods to efficiently approximate the maximum probability of satisfying a desired logical formula and compute the associated belief state policy. We demonstrate that our method scales to large POMDP domains and provides strong bounds on the performance of the resulting policy.

* 8 pages, 3 figures, AAAI 2020 
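
The reduction the method relies on can be illustrated in the fully observable case (our own tiny example, not the paper's POMDP algorithm): the maximum probability of satisfying the formula becomes a maximal reachability probability, computed here by value iteration on an MDP with a "goal" (formula-satisfying) state and a "trap" (violating) state.

```python
# Maximal reachability probability by value iteration.
# transitions[state][action] = list of (probability, next_state)
transitions = {
    "s0": {"safe":  [(0.9, "goal"), (0.1, "s0")],
           "risky": [(0.5, "goal"), (0.5, "trap")]},
}

def max_reach_prob(transitions, goal, trap, iters=100):
    v = {s: 0.0 for s in transitions}
    v[goal], v[trap] = 1.0, 0.0           # absorbing states
    for _ in range(iters):
        for s in transitions:
            v[s] = max(sum(p * v[s2] for p, s2 in outcomes)
                       for outcomes in transitions[s].values())
    return v

v = max_reach_prob(transitions, "goal", "trap")
```

Here the "safe" action eventually reaches the goal with probability 1 while "risky" caps it at 0.5, so value iteration converges to 1 for `s0`; the paper's contribution is doing this over *belief* states with point-based approximations.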

Robust Super-Level Set Estimation using Gaussian Processes

Nov 25, 2018
Andrea Zanette, Junzi Zhang, Mykel J. Kochenderfer

This paper focuses on the problem of determining as large a region as possible where a function exceeds a given threshold with high probability. We assume that we only have access to a noise-corrupted version of the function and that function evaluations are costly. To select the next query point, we propose maximizing the expected volume of the domain identified as above the threshold as predicted by a Gaussian process, robustified by a variance term. We also give asymptotic guarantees on the exploration effect of the algorithm, regardless of prior misspecification. Numerical examples show that our approach outperforms existing techniques in practice.

* Accepted to ECML 2018 
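
A schematic of the selection logic, under our own simplification (not the paper's exact acquisition): given a Gaussian process posterior mean and standard deviation at candidate points, points whose variance-robustified lower bound clears the threshold are declared in the super-level set, and the next query goes where uncertainty is high near the threshold boundary.

```python
# Variance-robustified super-level set classification and a simple
# boundary-seeking query rule, given GP posterior summaries (mu, sigma).

def confidently_above(mu, sigma, tau, beta=2.0):
    """Points whose lower confidence bound clears the threshold tau."""
    return [m - beta * s > tau for m, s in zip(mu, sigma)]

def pick_query(mu, sigma, tau):
    """Query the most uncertain point closest to the threshold."""
    scores = [s - abs(m - tau) for m, s in zip(mu, sigma)]
    return max(range(len(mu)), key=lambda i: scores[i])

# Hypothetical GP posterior at five candidate points.
mu =    [0.2, 0.9, 1.4, 2.0, 2.6]
sigma = [0.1, 0.8, 0.3, 0.1, 0.1]
tau = 1.0

region = confidently_above(mu, sigma, tau)
query = pick_query(mu, sigma, tau)
```

Only the last two points are confidently above the threshold; the chosen query is the highly uncertain point near the boundary, whose evaluation can expand the identified region the most.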

Closed-Loop Policies for Operational Tests of Safety-Critical Systems

May 19, 2018
Jeremy Morton, Tim A. Wheeler, Mykel J. Kochenderfer

Manufacturers of safety-critical systems must make the case that their product is sufficiently safe for public deployment. Much of this case often relies upon critical event outcomes from real-world testing, requiring manufacturers to be strategic about how they allocate testing resources in order to maximize their chances of demonstrating system safety. This work frames the partially observable and belief-dependent problem of test scheduling as a Markov decision process, which can be solved efficiently to yield closed-loop manufacturer testing policies. By solving for policies over a wide range of problem formulations, we are able to provide high-level guidance for manufacturers and regulators on issues relating to the testing of safety-critical systems. This guidance spans an array of topics, including circumstances under which manufacturers should continue testing despite observed incidents, when manufacturers should test aggressively, and when regulators should increase or reduce the real-world testing requirements for an autonomous vehicle.

* 12 pages, 5 figures, 5 tables 

Layer-wise synapse optimization for implementing neural networks on general neuromorphic architectures

Feb 20, 2018
John Mern, Jayesh K Gupta, Mykel Kochenderfer

Deep artificial neural networks (ANNs) can represent a wide range of complex functions. Implementing ANNs in von Neumann computing systems, though, incurs a high energy cost due to the bottleneck created between CPU and memory. Implementation on neuromorphic systems may help to reduce energy demand. Conventional ANNs must be converted into equivalent Spiking Neural Networks (SNNs) in order to be deployed on neuromorphic chips. This paper presents a way to perform this translation. We map the ANN weights to SNN synapses layer-by-layer by forming a least-square-error approximation problem at each layer. An optimal set of synapse weights may then be found for a given choice of ANN activation function and SNN neuron. Using an appropriate constrained solver, we can generate SNNs compatible with digital, analog, or hybrid chip architectures. We present an optimal node pruning method to allow SNN layer sizes to be set by the designer. To illustrate this process, we convert three ANNs, including one convolutional network, to SNNs. In all three cases, a simple linear program solver was used. The experiments show that the resulting networks maintain agreement with the original ANN and achieve excellent performance on the evaluation tasks. The networks were also reduced in size with little loss in task performance.

* Submitted to IEEE Symposium Series on Computational Intelligence (SSCI) 2017 
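
The layer-wise fitting step can be sketched for a single scalar synapse. This is our toy version, not the paper's constrained solver: the SNN weight is chosen by least squares so that the spiking layer's firing rates match the ANN layer's activations on sample inputs (the rate model and constants below are invented for the sketch).

```python
# Fit one SNN synapse weight so a simplified rate model reproduces a
# ReLU ANN layer's activations on sample inputs, by closed-form least
# squares over the unclipped operating region.

def ann_layer(xs, w=2.0):
    """Target ANN layer: ReLU of a weighted input."""
    return [max(0.0, w * x) for x in xs]

def snn_rate(xs, w_snn, rate_scale=0.5):
    """Toy SNN neuron: firing rate is a clipped linear function."""
    return [max(0.0, min(1.0, rate_scale * w_snn * x)) for x in xs]

def fit_synapse(xs, targets, rate_scale=0.5):
    """Minimize sum((rate_scale * w * x - t)^2) over w."""
    num = sum(rate_scale * x * t for x, t in zip(xs, targets))
    den = sum((rate_scale * x) ** 2 for x in xs)
    return num / den

xs = [0.1, 0.2, 0.3, 0.4]          # sample layer inputs (all positive)
targets = ann_layer(xs)            # ANN activations to match
w_snn = fit_synapse(xs, targets)
approx = snn_rate(xs, w_snn)
```

With the ANN weight 2.0 and rate scale 0.5, least squares recovers a synapse weight of 4.0 and the rate model matches the ANN exactly; the paper solves the analogous problem per layer, in matrix form, under the hardware's weight constraints.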

Belief State Planning for Autonomously Navigating Urban Intersections

Apr 14, 2017
Maxime Bouton, Akansel Cosgun, Mykel J. Kochenderfer

Urban intersections represent a complex environment for autonomous vehicles with many sources of uncertainty. The vehicle must plan in a stochastic environment with potentially rapid changes in driver behavior. Providing an efficient strategy to navigate through urban intersections is a difficult task. This paper frames the problem of navigating unsignalized intersections as a partially observable Markov decision process (POMDP) and solves it using a Monte Carlo sampling method. Empirical results in simulation show that the resulting policy outperforms a threshold-based heuristic strategy on several relevant metrics that measure both safety and efficiency.

* 6 pages, 6 figures, accepted to IV2017 
