Models, code, and papers for "Florian Richter":

Conflict Detection and Resolution in Table Top Scenarios for Human-Robot Interaction

Dec 29, 2019
Avinash Kumar Singh, Kai-Florian Richter

As in any interaction process, misunderstandings, ambiguity, and failures to correctly understand the interaction partner are bound to happen in human-robot interaction. We term these failures 'conflicts' and are interested in both conflict detection and conflict resolution, focusing on the robot's perspective. For the robot, conflicts may occur because of errors in its perceptual processes or because of ambiguity stemming from human input. This poster briefly outlines the project's motivation and setting, introduces the general processing framework, and then presents two kinds of conflicts in more detail: 1) a failure to identify a relevant object at all; 2) ambiguity emerging from multiple matches in scene perception.
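
The poster itself contains no code; as a rough, hypothetical Python sketch of the two conflict types described above (a referenced object not found at all vs. multiple matches in scene perception), with all names invented for illustration:

    # Hypothetical sketch of classifying perception results into the two
    # conflict types discussed above; not the authors' implementation.
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class Detection:
        label: str         # object category reported by the perception module
        confidence: float  # detection confidence in [0, 1]


    def detect_conflict(requested_label: str, detections: List[Detection],
                        min_confidence: float = 0.5) -> str:
        """Return the conflict type for a referenced object, if any."""
        matches = [d for d in detections
                   if d.label == requested_label and d.confidence >= min_confidence]
        if not matches:
            # Conflict type 1: the referenced object was not identified at all.
            return "no_match"
        if len(matches) > 1:
            # Conflict type 2: ambiguity from multiple matches in the scene.
            return "ambiguous_match"
        return "no_conflict"


    scene = [Detection("cup", 0.9), Detection("cup", 0.8), Detection("book", 0.7)]
    print(detect_conflict("cup", scene))    # -> ambiguous_match
    print(detect_conflict("phone", scene))  # -> no_match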


Open-Sourced Reinforcement Learning Environments for Surgical Robotics

Mar 05, 2019
Florian Richter, Ryan K. Orosco, Michael C. Yip

Reinforcement Learning (RL) is a machine learning framework for artificially intelligent systems to solve a variety of complex problems. Recent years have seen a surge of successes solving challenging games and smaller domain problems, including simple though non-specific robotic manipulation and grasping tasks. Rapid successes in RL have come in part due to the strong collaborative effort by the RL community to work on common, open-sourced environment simulators such as OpenAI's Gym, which allow for expedited development and valid comparisons between different, state-of-the-art strategies. In this paper, we aim to bridge the RL and surgical robotics communities by presenting the first open-sourced reinforcement learning environments for surgical robotics, called dVRL. Through the proposed RL environments, which are functionally equivalent to Gym, we show that it is easy to prototype and implement state-of-the-art RL algorithms on surgical robotics problems that aim to introduce autonomous robotic precision and accuracy to assisting, collaborative, or repetitive tasks during surgery. Learned policies are furthermore successfully transferable to a real robot. Finally, by combining dVRL with the international network of over 40 da Vinci Surgical Research Kits in active use at academic institutions, we see dVRL as enabling the broad surgical robotics community to fully leverage the newest strategies in reinforcement learning, and enabling reinforcement learning scientists with no knowledge of surgical robotics to test and develop new algorithms that can solve the real-world, high-impact challenges in autonomous surgery.

* 7 pages, 9 Figures, submitted to IROS 2019 
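
Since the environments are described as functionally equivalent to Gym, usage would presumably follow the standard Gym interface. A minimal sketch, assuming a hypothetical environment id and package name (neither is confirmed by the abstract):

    # Illustrative only: a classic Gym-style rollout loop as one might use with
    # a dVRL environment. Package and environment names below are assumptions.
    import gym
    # import dVRL_simulator  # hypothetical package that registers the environments

    env = gym.make("dVRLReach-v0")  # hypothetical environment id
    obs = env.reset()
    for _ in range(100):
        action = env.action_space.sample()          # stand-in for a trained policy
        obs, reward, done, info = env.step(action)  # classic 4-tuple Gym API
        if done:
            obs = env.reset()
    env.close()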

Motion Scaling Solutions for Improved Performance in High Delay Surgical Teleoperation

Feb 08, 2019
Florian Richter, Ryan K. Orosco, Michael C. Yip

Robotic teleoperation brings great potential for advances within the field of surgery. The ability of a surgeon to reach patients remotely opens exciting opportunities. Early experience with telerobotic surgery has been interesting, but clinical feasibility remains out of reach, largely due to the deleterious effects of communication delays. Teleoperation tasks are significantly impacted by unavoidable signal latency, which directly results in slower operations, less precision in movements, and increased human errors. Introducing significant changes to the surgical workflow, for example through semi-automation or self-correction, presents too significant a technological and ethical burden for commercial surgical robotic systems to adopt. In this paper, we present three simple and intuitive motion scaling solutions to combat the effects of delay on teleoperated robotic systems and help improve operator accuracy. Motion scaling offers potentially improved user performance and a reduction in errors with minimal change to the underlying teleoperation architecture. To validate the use of motion scaling as a performance enhancer in telesurgery, we conducted a user study with 17 participants, and our results show that the proposed solutions do indeed reduce the error rate when operating under high delay.

* 6 pages, 6 figures, ICRA 2019 Accepted 
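
As a minimal sketch of the general idea of motion scaling (not the three specific solutions evaluated in the paper), incremental master motions can simply be attenuated before being sent to the patient-side robot:

    # Minimal, illustrative sketch of motion scaling under delay; not the
    # paper's specific scaling solutions.
    import numpy as np


    def scale_motion(delta_master: np.ndarray, scale: float = 0.5) -> np.ndarray:
        """Scale the operator's incremental Cartesian motion before sending it
        to the remote robot; smaller scales trade speed for precision under delay."""
        return scale * delta_master


    delta = np.array([0.002, 0.0, 0.0])    # a 2 mm master motion (meters)
    print(scale_motion(delta, scale=0.5))  # -> 1 mm commanded slave motion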

Model-free Visual Control for Continuum Robot Manipulators via Orientation Adaptation

Sep 01, 2019
Mrinal Verghese, Florian Richter, Aaron Gunn, Phil Weissbrod, Michael Yip

We present an orientation-adaptive controller to compensate for the effects of highly constrained environments on continuum manipulator actuation. A transformation matrix, updated from optical flow measurements captured by the distal camera using optimal estimation techniques, is composed with any Jacobian estimate or kinematic model to compensate for these effects. By utilizing domain knowledge to define the structure of this matrix, fewer parameters need to be estimated and a stable controller can be guaranteed. The algorithm is tested on a custom robotic catheter, and convergence is shown both empirically and theoretically.

* 12 pages, 5 figures, Accepted to The International Symposium on Robotics Research 2019 
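
As a generic illustration of composing an estimated orientation compensation with a Jacobian in a resolved-rate control law (the paper's optimal estimation from optical flow is not reproduced here; all names are placeholders):

    # Generic resolved-rate control step with an orientation-compensating
    # transformation composed with a Jacobian estimate; illustrative only.
    import numpy as np


    def control_step(J_hat: np.ndarray, R_hat: np.ndarray,
                     x_err: np.ndarray, gain: float = 1.0) -> np.ndarray:
        """Compute actuator velocities from a task-space error.

        J_hat : (m, n) Jacobian estimate or kinematic-model Jacobian.
        R_hat : (m, m) transformation compensating for environment-induced
                reorientation of the distal tip (e.g. estimated from optical flow).
        x_err : (m,) task-space error measured in the camera frame.
        """
        J_comp = R_hat @ J_hat  # compose the compensation with the model
        return gain * np.linalg.pinv(J_comp) @ x_err


    # Toy example: identity Jacobian with a 10-degree orientation compensation.
    J = np.eye(2)
    theta = np.deg2rad(10.0)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(control_step(J, R, np.array([0.01, 0.0])))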

SuPer Deep: A Surgical Perception Framework for Robotic Tissue Manipulation using Deep Learning for Feature Extraction

Mar 07, 2020
Jingpei Lu, Ambareesh Jayakumari, Florian Richter, Yang Li, Michael C. Yip

Robotic automation in surgery requires precise tracking of surgical tools and mapping of deformable tissue. Previous works on surgical perception frameworks require significant effort in developing features for surgical tool and tissue tracking. In this work, we overcome this challenge by exploiting deep learning methods for surgical perception. We integrate deep neural networks, capable of efficient feature extraction, into the tissue reconstruction and instrument pose estimation processes. By leveraging transfer learning, the deep-learning-based approach requires minimal training data and reduced feature engineering effort to fully perceive a surgical scene. The framework was tested on three publicly available datasets, which use the da Vinci Surgical System, for comprehensive analysis. Experimental results show that our framework achieves state-of-the-art tracking performance in a surgical environment by utilizing deep learning for feature extraction.
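
As a hedged sketch of the general pattern of using a pretrained deep network as a drop-in feature extractor (the specific networks, training, and matching used in the paper are not reproduced here):

    # Illustrative only: a pretrained CNN used as a generic dense feature
    # extractor, in the spirit of replacing hand-crafted features; this is not
    # the paper's tissue-reconstruction or pose-estimation pipeline.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    backbone = models.resnet18(weights="IMAGENET1K_V1")  # torchvision >= 0.13
    backbone = torch.nn.Sequential(*list(backbone.children())[:-2])  # drop pooling/fc
    backbone.eval()

    preprocess = T.Compose([
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])


    def dense_features(image) -> torch.Tensor:
        """Return a (C, H', W') feature map for an RGB image (PIL or ndarray)."""
        with torch.no_grad():
            x = preprocess(image).unsqueeze(0)
            return backbone(x)[0]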


Augmented Reality Predictive Displays to Help Mitigate the Effects of Delayed Telesurgery

Feb 21, 2019
Florian Richter, Yifei Zhang, Yuheng Zhi, Ryan K. Orosco, Michael C. Yip

Surgical robots offer the exciting potential for remote telesurgery, but advances are needed to make this technology efficient and accurate to ensure patient safety. Achieving these goals is hindered by the deleterious effects of latency between the remote operator and the bedside robot. Predictive displays have found success in overcoming these effects by giving the operator immediate visual feedback. However, previously developed predictive displays cannot be directly applied to telesurgery due to the unique challenges in tracking the 3D geometry of the surgical environment. In this paper, we present the first predictive display for teleoperated surgical robots. The predictive display is stereoscopic, utilizes Augmented Reality (AR) to show the predicted motions alongside the complex tissue found in-situ within surgical environments, and overcomes the challenges in accurately tracking slave-tools in real-time. We call this a Stereoscopic AR Predictive Display (SARPD). To test the SARPD's performance, we conducted a user study with ten participants on the da Vinci® Surgical System. The results showed with statistical significance that using the SARPD decreased the time to complete tasks while having no effect on error rates when operating under delay.

* 7 pages, 8 Figures, Accepted ICRA 2019 
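
The core of any predictive display is to simulate the operator's commands locally and render the result immediately rather than waiting for delayed feedback. A minimal sketch of that idea (nothing AR- or stereo-specific from the paper; all names hypothetical):

    # Minimal sketch of the predictive-display idea: integrate commands on a
    # local estimate and show that immediately, softly correcting it whenever
    # delayed measurements arrive. Purely illustrative.
    import numpy as np


    class PredictiveDisplay:
        def __init__(self, initial_pose: np.ndarray):
            self.predicted_pose = initial_pose.astype(float).copy()  # undelayed local estimate

        def on_operator_command(self, velocity: np.ndarray, dt: float) -> np.ndarray:
            """Advance the local prediction and return the pose to overlay."""
            self.predicted_pose += velocity * dt
            return self.predicted_pose

        def on_delayed_feedback(self, measured_pose: np.ndarray, blend: float = 0.1):
            """Blend in delayed measurements to keep the prediction from drifting."""
            self.predicted_pose = (1 - blend) * self.predicted_pose + blend * measured_pose


    display = PredictiveDisplay(np.zeros(3))
    print(display.on_operator_command(np.array([0.01, 0.0, 0.0]), dt=0.02))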

SuPer: A Surgical Perception Framework for Endoscopic Tissue Manipulation with Surgical Robotics

Sep 11, 2019
Yang Li, Florian Richter, Jingpei Lu, Emily K. Funk, Ryan K. Orosco, Jianke Zhu, Michael C. Yip

Traditional control and task automation have been successfully demonstrated in a variety of structured, controlled environments through the use of highly specialized modeled robotic systems in conjunction with multiple sensors. However, the application of autonomy in endoscopic surgery is very challenging, particularly in soft tissue work, due to the lack of high-quality images and the unpredictable, constantly deforming environment. In this work, we propose a novel surgical perception framework, SuPer, for surgical robotic control. This framework continuously collects 3D geometric information that allows for mapping of a deformable surgical field while tracking rigid instruments within the field. To achieve this, a model-based tracker is employed to localize the surgical tool with a kinematic prior, in conjunction with a model-free tracker that reconstructs the deformable environment and provides an estimated point cloud as a map of the environment. The proposed framework was implemented on the da Vinci Surgical System in real time with an end-effector controller whose target configurations are set and regulated through the framework. Our proposed framework successfully completed autonomous soft tissue manipulation tasks with high accuracy. The demonstration of this novel framework is promising for the future of surgical autonomy. In addition, we provide our dataset for further surgical research.

* The first two authors contributed equally to this paper 
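
At a high level, the framework couples a model-based instrument tracker seeded by the kinematic prior with a model-free reconstruction of the deformable tissue, masking the tool out of the tissue map. A runnable toy sketch of just that coupling (all classes and update rules are placeholders, not the SuPer implementation):

    # Toy sketch of coupling a model-based tool tracker with a model-free
    # tissue map; data and update rules are placeholders, not SuPer itself.
    import numpy as np


    class ToyToolTracker:
        def update(self, points: np.ndarray, kinematic_prior: np.ndarray) -> np.ndarray:
            # Stand-in for model-based tracking: nudge the kinematic prior toward
            # the centroid of observed points near the expected tool tip.
            near = points[np.linalg.norm(points - kinematic_prior, axis=1) < 0.02]
            return kinematic_prior if len(near) == 0 else 0.5 * (kinematic_prior + near.mean(axis=0))


    class ToyTissueMap:
        def __init__(self):
            self.cloud = np.empty((0, 3))

        def fuse(self, points: np.ndarray, tool_pose: np.ndarray, tool_radius: float = 0.02):
            # Stand-in for model-free reconstruction: keep only points away from
            # the tracked tool so the instrument does not corrupt the tissue map.
            tissue = points[np.linalg.norm(points - tool_pose, axis=1) >= tool_radius]
            self.cloud = np.vstack([self.cloud, tissue])


    tracker, tissue_map = ToyToolTracker(), ToyTissueMap()
    frame_points = np.random.rand(500, 3) * 0.1  # fake stereo-reconstructed points
    tool_pose = tracker.update(frame_points, kinematic_prior=np.array([0.05, 0.05, 0.05]))
    tissue_map.fuse(frame_points, tool_pose)
    print(tool_pose, tissue_map.cloud.shape)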

ARCSnake: An Archimedes' Screw-Propelled, Reconfigurable Robot Snake for Complex Environments

Sep 25, 2019
Dimitri A. Schreiber, Florian Richter, Andrew Bilan, Peter V. Gavrilov, Casey H. Price, Kalind C. Carpenter, Michael C. Yip

This paper presents the design and performance of a screw-propelled, redundant serpentine robot. This robot comprises serially linked, identical modules, each incorporating an Archimedes' screw for propulsion and a universal joint (U-joint) for orientation control. When serially chained, these modules form a versatile snake robot platform which enables the robot to reshape its body configuration for varying environments and gait patterns that would be typical of snake movement. Furthermore, the Archimedes' screws allow for novel omni-wheel-drive-like motions by speed-controlling their screw threads. This paper considers the mechanical and electrical design, as well as the software architecture, for realizing a fully integrated system. The system includes 3N actuators for N segments, each controlled using a BeagleBone Black with a customized power-electronics cape, a 9 Degrees of Freedom (DoF) Inertial Measurement Unit (IMU), and a scalable communication channel over ROS. The intended application for this robot is its use as an instrumentation mobility platform on terrestrial planets where the terrain may involve vents, caves, ice, and rocky surfaces. Additional experiments are shown on our website.

* 6 pages, 8 figures, submitted to ICRA 2020 
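
The abstract describes 3N actuators for N segments, i.e. one propulsion screw plus a two-axis U-joint per module. As a hedged illustration of how per-segment commands might be organized for a simple traveling-wave gait (names, units, and the gait itself are hypothetical, not the robot's actual interfaces):

    # Illustrative only: per-segment commands for a screw-propelled snake with
    # three actuators per module (screw + 2 U-joint axes). Hypothetical names.
    import math
    from dataclasses import dataclass
    from typing import List


    @dataclass
    class SegmentCommand:
        screw_speed: float  # rad/s, Archimedes' screw for propulsion
        joint_pitch: float  # rad, first U-joint axis
        joint_yaw: float    # rad, second U-joint axis


    def serpentine_gait(n_segments: int, t: float, amplitude: float = 0.3,
                        phase_step: float = 0.8, screw_speed: float = 5.0) -> List[SegmentCommand]:
        """Toy traveling-wave body shape with all screws spinning."""
        return [SegmentCommand(screw_speed,
                               amplitude * math.sin(t + i * phase_step),
                               0.0)
                for i in range(n_segments)]


    print(serpentine_gait(n_segments=4, t=0.0))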

Scalability in Neural Control of Musculoskeletal Robots

Jan 19, 2016
Christoph Richter, Sören Jentzsch, Rafael Hostettler, Jesús A. Garrido, Eduardo Ros, Alois C. Knoll, Florian Röhrbein, Patrick van der Smagt, Jörg Conradt

Anthropomimetic robots are robots that sense, behave, interact and feel like humans. By this definition, anthropomimetic robots require human-like physical hardware and actuation, but also brain-like control and sensing. The most self-evident realization to meet those requirements would be a human-like musculoskeletal robot with a brain-like neural controller. While both musculoskeletal robotic hardware and neural control software have existed for decades, a scalable approach that could be used to build and control an anthropomimetic human-scale robot has not been demonstrated yet. Combining Myorobotics, a framework for musculoskeletal robot development, with SpiNNaker, a neuromorphic computing platform, we present the proof-of-principle of a system that can scale to dozens of neurally-controlled, physically compliant joints. At its core, it implements a closed-loop cerebellar model which provides real-time low-level neural control at minimal power consumption and maximal extensibility: higher-order (e.g., cortical) neural networks and neuromorphic sensors like silicon-retinae or -cochleae can naturally be incorporated.

* Accepted at IEEE Robotics and Automation Magazine on 2015-12-31 
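
The abstract's closed-loop cerebellar model is far beyond a few lines of code, but as a heavily simplified, hypothetical illustration of an error-driven adaptive element augmenting a fixed feedback controller on a compliant joint (not the SpiNNaker/Myorobotics implementation):

    # Heavily simplified, hypothetical adaptive feedback loop; not the
    # SpiNNaker / Myorobotics cerebellar controller described in the paper.
    dt, steps = 0.01, 3000
    target = 1.0                             # desired joint angle (rad)
    angle, velocity = 0.0, 0.0
    kp, kd = 5.0, 0.5                        # fixed feedback gains
    adaptive_gain, learning_rate = 0.0, 0.5  # slowly adapted feedforward term

    for _ in range(steps):
        error = target - angle
        command = kp * error - kd * velocity + adaptive_gain * target
        adaptive_gain += learning_rate * error * dt  # error-driven adaptation
        # Toy second-order plant standing in for a compliant, muscle-driven joint.
        acceleration = command - 2.0 * velocity - 1.0 * angle
        velocity += acceleration * dt
        angle += velocity * dt

    print(f"final angle: {angle:.3f} rad (target {target} rad)")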

mlr Tutorial

Sep 18, 2016
Julia Schiffner, Bernd Bischl, Michel Lang, Jakob Richter, Zachary M. Jones, Philipp Probst, Florian Pfisterer, Mason Gallo, Dominik Kirchhoff, Tobias Kühn, Janek Thomas, Lars Kotthoff

This document provides an in-depth introduction to the mlr framework for machine learning experiments in R.

