Prashant Doshi

University of Georgia

MVSA-Net: Multi-View State-Action Recognition for Robust and Deployable Trajectory Generation

Nov 18, 2023
Ehsan Asali, Prashant Doshi, Jin Sun

A Novel Variational Lower Bound for Inverse Reinforcement Learning

Nov 10, 2023
Yikang Gui, Prashant Doshi

Latent Interactive A2C for Improved RL in Open Many-Agent Systems

May 09, 2023
Keyang He, Prashant Doshi, Bikramjit Banerjee

IRL with Partial Observations using the Principle of Uncertain Maximum Entropy

Aug 15, 2022
Kenneth Bogert, Yikang Gui, Prashant Doshi

Marginal MAP Estimation for Inverse RL under Occlusion with Observer Noise

Sep 16, 2021
Prasanth Sengadu Suresh, Prashant Doshi

A Hierarchical Bayesian Model for Inverse RL in Partially-Controlled Environments

Jul 13, 2021
Kenneth Bogert, Prashant Doshi

Many Agent Reinforcement Learning Under Partial Observability

Jun 17, 2021
Keyang He, Prashant Doshi, Bikramjit Banerjee

Cooperative-Competitive Reinforcement Learning with History-Dependent Rewards

Oct 15, 2020
Keyang He, Bikramjit Banerjee, Prashant Doshi

Recurrent Sum-Product-Max Networks for Decision Making in Perfectly-Observed Environments

Jun 12, 2020
Hari Teja Tatavarti, Prashant Doshi, Layton Hayes

Maximum Entropy Multi-Task Inverse RL

Apr 27, 2020
Saurabh Arora, Bikramjit Banerjee, Prashant Doshi
