Models, code, and papers for "Chen Yu-Ming":

Optimal Reduced-order Modeling of Bipedal Locomotion

Sep 23, 2019
Yu-Ming Chen, Michael Posa

State-of-the-art approaches to legged locomotion rely heavily on models like the linear inverted pendulum (LIP) and the spring-loaded inverted pendulum (SLIP), popular because their simplicity enables a wide array of tools for planning, control, and analysis. However, this simplicity inevitably limits the ability to execute complex tasks or agile maneuvers. In this work, we aim to automatically synthesize models that remain low-dimensional but retain the capabilities of the high-dimensional system. For example, if one were to restore a small degree of complexity to LIP, SLIP, or a similar model, our approach discovers the form of that additional complexity which optimizes performance. In this paper, we define a class of reduced-order models and provide an algorithm for optimization within this class. To demonstrate our method, we optimize models for walking at a range of speeds and ground inclines, for both a five-link model and the Cassie bipedal robot.

* Submitted to ICRA 2020 
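To make the idea concrete, here is an illustrative caricature (not the paper's actual formulation or algorithm): extend the LIP dynamics `xdd = (g/z) * x` with a small parameterized correction term `theta · phi(x, xd)`, then numerically search over `theta` to minimize a rollout cost. The feature basis, cost weights, and random-search optimizer below are all assumptions made for the sketch.

```python
import random

# Illustrative caricature of optimizing a reduced-order model (NOT the
# paper's algorithm): augment the LIP with a learned correction term
# theta . phi(x, xd), then search theta to minimize a rollout cost.

G, Z = 9.81, 0.9  # gravity and nominal pendulum height (assumed values)


def features(x, xd):
    """Hypothetical basis for the 'restored complexity' of the model."""
    return [x, xd, x * xd]


def rollout_cost(theta, x0=0.1, xd0=0.0, dt=0.01, steps=200):
    """Integrate the augmented LIP and penalize deviation plus effort."""
    x, xd, cost = x0, xd0, 0.0
    for _ in range(steps):
        u = sum(t * f for t, f in zip(theta, features(x, xd)))
        xdd = (G / Z) * x + u          # augmented pendulum dynamics
        xd += dt * xdd                 # explicit Euler integration
        x += dt * xd
        cost += dt * (x * x + 0.01 * u * u)
    return cost


def optimize(theta0, sigma=0.5, iters=200):
    """Greedy random search over the model parameters theta."""
    best, best_c = list(theta0), rollout_cost(theta0)
    for _ in range(iters):
        cand = [t + sigma * random.gauss(0.0, 1.0) for t in best]
        c = rollout_cost(cand)
        if c < best_c:
            best, best_c = cand, c
    return best, best_c
```

The point of the sketch is only the structure of the search: the decision variables are the *model's* parameters, and each candidate model is scored by how well trajectories under it perform.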

Learning Stable and Energetically Economical Walking with RAMone

Nov 03, 2017
Audrow Nash, Yu-Ming Chen, Nils Smit-Anseeuw, Petr Zaytsev, C. David Remy

In this paper, we optimize over the control parameter space of our planar bipedal robot, RAMone, for stable and energetically economical walking at various speeds. We formulate this task as an episodic reinforcement learning problem and solve it with Covariance Matrix Adaptation (CMA-ES). The parameters we tune include gains from our Hybrid Zero Dynamics-style controller and from RAMone's low-level motor controllers.
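The search loop can be sketched as follows. This is a simplified (mu/mu_w, lambda) evolution strategy standing in for full CMA-ES: it keeps CMA's log-rank recombination weights but replaces covariance and path-based step-size adaptation with isotropic steps and a crude geometric decay. The cost function and gain vector in the usage line are illustrative, not RAMone's controller.

```python
import math
import random

# Simplified (mu/mu_w, lambda) evolution strategy -- a stand-in for full
# CMA-ES (isotropic steps; no covariance or path-based adaptation).

def es_minimize(cost, x0, sigma=0.5, popsize=8, iters=60):
    mean, n = list(x0), len(x0)
    mu = popsize // 2
    # log-rank recombination weights, as used in CMA-ES
    raw = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    total = sum(raw)
    weights = [r / total for r in raw]
    for _ in range(iters):
        # sample an episode's worth of candidate controller gains
        pop = [[m + sigma * random.gauss(0.0, 1.0) for m in mean]
               for _ in range(popsize)]
        pop.sort(key=cost)                 # rank candidates by episodic cost
        mean = [sum(weights[i] * pop[i][d] for i in range(mu))
                for d in range(n)]         # weighted recombination
        sigma *= 0.95                      # crude step-size decay
    return mean

# Toy usage: recover gains minimizing a quadratic cost-of-transport proxy
best = es_minimize(lambda g: sum((gi - 1.0) ** 2 for gi in g), [0.0, 0.0])
```

In the episodic setting described above, `cost` would run one walking episode with the candidate gains and return a stability/energy score; everything else in the loop is unchanged.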

Virtual-to-Real: Learning to Control in Visual Semantic Segmentation

Oct 28, 2018
Zhang-Wei Hong, Chen Yu-Ming, Shih-Yang Su, Tzu-Yun Shann, Yi-Hsiang Chang, Hsuan-Kung Yang, Brian Hsi-Lin Ho, Chih-Chieh Tu, Yueh-Chuan Chang, Tsu-Ching Hsiao, Hsin-Wei Hsiao, Sih-Pin Lai, Chun-Yi Lee

Collecting training data from the physical world is usually time-consuming and even dangerous for fragile robots, and thus, recent advances in robot learning advocate the use of simulators as the training platform. Unfortunately, the reality gap between synthetic and real visual data prohibits direct migration of models trained in virtual worlds to the real world. This paper proposes a modular architecture for tackling the virtual-to-real problem. The proposed architecture separates the learning model into a perception module and a control policy module, and uses semantic image segmentation as the meta representation relating the two. The perception module translates the perceived RGB image to a semantic segmentation. The control policy module is implemented as a deep reinforcement learning agent, which performs actions based on the translated segmentation. Our architecture is evaluated on an obstacle avoidance task and a target following task. Experimental results show that our architecture significantly outperforms all of the baseline methods in both virtual and real environments and exhibits a faster learning curve. We also present a detailed analysis of a variety of configurations, and validate the transferability of our modular architecture.

* 7 pages, accepted by IJCAI-18 
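The modular decomposition described above can be sketched as a simple composition of two stages. Everything here is hypothetical scaffolding, not the paper's networks: `segment` stands in for a trained semantic-segmentation model and `policy` for a trained RL agent, and the stubs below are toy placeholders.

```python
# Hypothetical sketch of the modular virtual-to-real decomposition.
# `segment` stands in for a trained segmentation network, `policy` for a
# trained RL agent; both are illustrative, not the paper's models.

def make_agent(segment, policy):
    """Compose perception and control: RGB frame -> class map -> action.

    Because the policy only ever sees the segmentation (the shared meta
    representation), it can transfer across the sim-to-real gap as long
    as the segmenter works in both domains."""
    def act(rgb_frame):
        class_map = segment(rgb_frame)   # perception module
        return policy(class_map)         # control policy module
    return act


# Toy stubs: mark bright pixels as "obstacle" (class 1) and steer away
# from whichever side of the image contains more obstacle pixels.
def stub_segment(frame):
    return [[1 if px > 128 else 0 for px in row] for row in frame]


def stub_policy(class_map):
    half = len(class_map[0]) // 2
    left = sum(row[i] for row in class_map for i in range(half))
    right = sum(row[i] for row in class_map for i in range(half, 2 * half))
    return -1 if right > left else 1     # turn away from the denser side


agent = make_agent(stub_segment, stub_policy)
```

The design point is that the two modules are trained and swapped independently: only the interface (the segmentation class map) is shared.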
