A vision system mounted on a manipulator excels at tracking a moving target object while effectively handling obstacles, overcoming the limitations imposed by a camera's confined field of view and occluded line of sight. However, the manipulator itself faces challenges, including restricted motion due to kinematic constraints and the risk of colliding with external obstacles. These challenges are typically addressed by assigning multiple task objectives to the manipulator, but doing so increases the risk of driving the manipulator to its kinematic limits, leading to failures in object tracking or obstacle avoidance. To address this issue, we propose a novel visual tracking control method for a redundant manipulator that accounts for kinematic constraints via a reachability measure. Our method employs an optimization-based controller that jointly considers object tracking, occlusion avoidance, collision avoidance, and the kinematic constraints represented by the reachability measure. It then determines a suitable joint configuration through real-time inverse kinematics, accounting for dynamic obstacle avoidance and the continuity of joint configurations. To validate our approach, we conducted simulations and hardware experiments involving a moving target and dynamic obstacles. The results of our evaluations highlight the significance of incorporating the reachability measure.
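The idea of combining a primary tracking task with a secondary kinematic-constraint objective on a redundant arm can be illustrated with a minimal sketch. This is not the paper's controller: the 3-link planar arm, the joint limits, and the mid-range joint-limit-distance measure (a crude stand-in for the reachability measure) are all illustrative assumptions; the sketch only shows the standard pattern of damped least-squares tracking with a nullspace-projected secondary objective.

```python
import numpy as np

# Illustrative 3-link planar arm (redundant for a 2-D position task).
LINKS = np.array([1.0, 0.8, 0.6])
Q_MIN, Q_MAX = np.full(3, -2.5), np.full(3, 2.5)

def fk(q):
    """End-effector position of the planar arm."""
    th = np.cumsum(q)
    return np.array([np.sum(LINKS * np.cos(th)), np.sum(LINKS * np.sin(th))])

def jacobian(q):
    """2x3 position Jacobian of the planar arm."""
    th = np.cumsum(q)
    J = np.zeros((2, 3))
    for j in range(3):
        J[0, j] = -np.sum(LINKS[j:] * np.sin(th[j:]))
        J[1, j] = np.sum(LINKS[j:] * np.cos(th[j:]))
    return J

def secondary_gradient(q):
    # Gradient of a simple measure that prefers mid-range joint values --
    # a hypothetical stand-in for a reachability measure.
    mid = 0.5 * (Q_MIN + Q_MAX)
    return -(q - mid) / (Q_MAX - Q_MIN) ** 2

def ik_step(q, x_target, damping=0.1, w_sec=1.0, gain=0.5):
    J = jacobian(q)
    err = x_target - fk(q)
    Jpinv = J.T @ np.linalg.inv(J @ J.T + damping**2 * np.eye(2))
    dq_task = Jpinv @ (gain * err)          # primary task: tracking
    N = np.eye(3) - Jpinv @ J               # nullspace projector
    dq = dq_task + w_sec * (N @ secondary_gradient(q))
    return np.clip(q + dq, Q_MIN, Q_MAX)    # respect joint limits

q = np.array([0.3, 0.5, -0.2])
target = np.array([1.2, 0.9])
for _ in range(200):
    q = ik_step(q, target)
```

The redundancy is what makes the nullspace term useful: the arm converges to the target while the secondary objective biases it away from its joint limits without disturbing the tracking task.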
Concentric Tube Robots (CTRs) have been proposed to operate within unstructured environments for minimally invasive surgery. In this letter, we consider operating scenarios in which the tubes travel inside channels with large clearance or large centerline curvature, such as aortas or industrial pipes. Accurate kinematic modeling of CTRs is required for the development of advanced control and sensing algorithms. To this end, we extend the conventional CTR kinematic model to the more general case of large tube-to-tube clearance and large centerline curvature. Numerical simulations and experimental validations are conducted to compare our model against the conventional CTR kinematic model. In the physical experiments, our proposed model achieved a tip position error of 1.53 mm in the 2D planar case and 4.36 mm in the 3D case, outperforming the state-of-the-art model by 71% and 66%, respectively.
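For context, the conventional clearance-free CTR model that this work generalizes can be sketched in its simplest planar form: overlapping tubes are assumed concentric and torsionally aligned, so the overlapped section bends with a stiffness-weighted average of the tubes' precurvatures and follows a constant-curvature arc. The tube stiffnesses, precurvatures, and arc length below are illustrative values, not the paper's parameters.

```python
import numpy as np

def arc_tip(kappa, s):
    """Tip position of a planar constant-curvature arc of length s."""
    if abs(kappa) < 1e-12:
        return np.array([s, 0.0])  # straight segment
    return np.array([np.sin(kappa * s) / kappa,
                     (1.0 - np.cos(kappa * s)) / kappa])

def combined_curvature(stiffness, precurvature):
    """Stiffness-weighted resultant curvature of concentric, torsionally
    aligned planar tubes (the conventional clearance-free assumption)."""
    k = np.asarray(stiffness, dtype=float)
    u = np.asarray(precurvature, dtype=float)
    return float(np.sum(k * u) / np.sum(k))

# Illustrative values: bending stiffness in N*mm^2, precurvature in 1/mm.
kappa = combined_curvature([50.0, 20.0], [0.02, 0.01])
tip = arc_tip(kappa, 60.0)  # tip of a 60 mm overlapped section
```

The letter's contribution is precisely where these assumptions break down: with large tube-to-tube clearance or large channel curvature, the tubes are no longer concentric, so the resultant-curvature and constant-curvature arc assumptions above no longer hold.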