Models, code, and papers for "Lingyun Chen":

Mechatronic Design of a Dribbling System for RoboCup Small Size Robot

May 24, 2019
Zheyuan Huang, Yunkai Wang, Lingyun Chen, Jiacheng Li, Zexi Chen, Rong Xiong

RoboCup SSL is an excellent platform for researching artificial intelligence and robotics. The dribbling system is an essential component, as it underlies advanced soccer skills such as trapping and dribbling. In this paper, we designed a new dribbling system for SSL robots, covering both the mechatronic design and the control algorithms. For the mechatronic design, we analysed the 3-touch-point model and verified it through simulation in ADAMS. For the motor controller, we use reinforcement learning to control the torque output. Finally, we verified the results on the robot.
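The abstract does not specify the reinforcement learning setup, so the following is only a minimal tabular Q-learning sketch of how a discrete torque command might be learned; the state and action discretization, the reward, and all hyperparameters are hypothetical.

```python
import random

# Hypothetical discretization: the paper's actual state/action spaces
# and reward for dribbler torque control are not given in the abstract.
STATES = range(5)      # e.g. discretized ball-contact pressure levels
ACTIONS = range(3)     # e.g. low / medium / high torque commands
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

# Q-table over (state, action) pairs, initialized to zero.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(s):
    """Epsilon-greedy action selection."""
    if random.random() < EPS:                      # explore
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: Q[(s, a)])   # exploit

def update(s, a, reward, s_next):
    """One Q-learning update after observing a transition."""
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
```

In a real controller the transition `(s, a, reward, s_next)` would come from sensor feedback on the ball's behavior against the dribbler bar; here the loop is left to the caller.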

* RCAR 2019. arXiv admin note: substantial text overlap with arXiv:1905.09157 

Cycle-Consistent Adversarial GAN: the integration of adversarial attack and defense

Apr 12, 2019
Lingyun Jiang, Kai Qiao, Ruoxi Qin, Linyuan Wang, Jian Chen, Haibing Bu, Bin Yan

In deep-learning-based image classification, adversarial examples, inputs with small deliberately added perturbations, can mislead deep neural networks (DNNs) into incorrect results, which means DNNs are vulnerable to them. Different attack and defense strategies have been proposed to better understand this mechanism of deep learning. However, most such research addresses only one aspect, either attack or defense, without considering that attacks and defenses should be interdependent and mutually reinforcing, like the relationship between spears and shields. In this paper, we propose Cycle-Consistent Adversarial GAN (CycleAdvGAN) to generate adversarial examples, which can learn and approximate the distributions of original instances and adversarial examples. Once its generators are trained, CycleAdvGAN can efficiently generate adversarial perturbations for any instance, making DNNs predict incorrectly, and can recover adversarial examples to clean instances, making DNNs predict correctly. We apply CycleAdvGAN under semi-white-box and black-box settings on two public datasets, MNIST and CIFAR10. Extensive experiments show that our method achieves state-of-the-art attack performance and also efficiently improves defense ability, realizing the integration of adversarial attack and defense. In addition, it improves attack effectiveness even when trained only on an adversarial dataset generated by a single kind of adversarial attack.
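The cycle-consistency idea, that mapping clean to adversarial and back (and vice versa) should return the starting image, can be illustrated with an L1 round-trip loss. The two "generators" below are toy stand-ins (a fixed perturbation added and removed), not the paper's trained networks.

```python
import numpy as np

def g_adv(x, delta=0.05):
    """Toy stand-in for the attack generator: add a perturbation."""
    return np.clip(x + delta, 0.0, 1.0)

def g_def(x_adv, delta=0.05):
    """Toy stand-in for the defense generator: remove the perturbation."""
    return np.clip(x_adv - delta, 0.0, 1.0)

def cycle_loss(x_clean, x_adv):
    """L1 cycle-consistency: each domain should survive a round trip."""
    forward = np.abs(g_def(g_adv(x_clean)) - x_clean).mean()
    backward = np.abs(g_adv(g_def(x_adv)) - x_adv).mean()
    return forward + backward

x = np.full((28, 28), 0.5)          # a flat "clean" image
print(cycle_loss(x, g_adv(x)))      # near 0, since these maps are inverses
```

In CycleAdvGAN this term is one piece of the full objective, alongside the adversarial (GAN) losses that make the generated images actually fool or un-fool the target classifier.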

* 13 pages, 7 tables, 1 figure 

A Multi-Domain Feature Learning Method for Visual Place Recognition

Feb 26, 2019
Peng Yin, Lingyun Xu, Xueqian Li, Chen Yin, Yingli Li, Rangaprasad Arun Srivatsan, Lu Li, Jianmin Ji, Yuqing He

Visual Place Recognition (VPR) is an important component in both computer vision and robotics applications, thanks to its ability to determine whether a place has been visited and, if so, where. A major challenge in VPR is handling changes in environmental conditions, including weather, season, and illumination. Most VPR methods try to improve place recognition performance by ignoring the environmental factors, leading to accuracy decreases when environmental conditions change significantly, such as day versus night. To this end, we propose an end-to-end conditional visual place recognition method. Specifically, we introduce the multi-domain feature learning method (MDFL) to capture multiple attribute descriptions for a given place, and then use a feature-detaching module to separate the environmental-condition-related features from those that are not. The only label required within this feature learning pipeline is the environmental condition. The proposed method is evaluated on the multi-season \textit{NORDLAND} dataset and the multi-weather \textit{GTAV} dataset. Experimental results show that our method improves feature robustness against varying environmental conditions.
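A minimal sketch of the feature-detaching idea: split a place descriptor into a condition-related part and a condition-invariant part, and supervise only the former with the environmental-condition label. The dimensions and the linear classifier below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def detach(feature, cond_dim=32):
    """Split one descriptor into (condition-related, place-related) parts."""
    return feature[:cond_dim], feature[cond_dim:]

feature = rng.standard_normal(128)            # descriptor from some encoder
cond_feat, place_feat = detach(feature)

# Only the condition branch is classified (e.g. day/night/season),
# which pushes the place branch toward condition invariance.
W = rng.standard_normal((4, cond_feat.size))  # 4 hypothetical conditions
condition_logits = W @ cond_feat
print(condition_logits.shape)                 # (4,)
```

Retrieval would then match places using `place_feat` alone, so that two visits under different conditions still compare well.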

* 6 pages, 5 figures, ICRA 2019 accepted 

ZJUNlict Extended Team Description Paper for RoboCup 2019

May 22, 2019
Zheyuan Huang, Lingyun Chen, Jiacheng Li, Yunkai Wang, Zexi Chen, Licheng Wen, Jianyang Gu, Peng Hu, Rong Xiong

Team ZJUNlict won the championship in the Small Size League of RoboCup 2018, and this paper thoroughly describes the work behind that result. There are three main optimizations for the mechanical part, which account for most of our goals: "Touching Point Optimization", "Damping System Optimization", and "Dribbler Optimization". For the electrical part, we realized "Direct Torque Control" and an "Efficient Radio Communication Protocol", which stabilize the dribbler and provide more reliable communication between the robots and the computer. Our software group contributed as much as our hardware group, with "Vision Lost Compensation" to predict movement with a Kalman filter and an "Interception Prediction Algorithm" to achieve advanced skills and improve our ball possession rate.
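The "Vision Lost Compensation" idea, predicting motion with a Kalman filter while camera frames are missing, can be sketched with a generic constant-velocity filter in one dimension. The matrices, noise values, and frame rate below are assumptions for illustration, not ZJUNlict's actual parameters.

```python
import numpy as np

dt = 1 / 60.0                                  # assumed camera frame period
F = np.array([[1.0, dt], [0.0, 1.0]])          # state transition: (pos, vel)
H = np.array([[1.0, 0.0]])                     # the camera measures position only
Q = np.eye(2) * 1e-4                           # process noise (assumed)
R = np.array([[1e-2]])                         # measurement noise (assumed)

def predict(x, P):
    """Time update: run during lost frames with no measurement."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Measurement update: run when a vision frame arrives."""
    y = z - H @ x                              # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

x, P = np.array([0.0, 1.0]), np.eye(2)         # object moving at 1 m/s
for _ in range(6):                             # six lost frames: predict only
    x, P = predict(x, P)
print(x[0])                                    # ~0.1 m after 0.1 s of coasting
```

When vision returns, `update` folds the new observation back in, shrinking the covariance `P` that grew during the blackout.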

* ZJUNlict Extended Team Description Paper for RoboCup 2019 Small Size League 

MRS-VPR: a multi-resolution sampling based global visual place recognition method

Feb 26, 2019
Peng Yin, Rangaprasad Arun Srivatsan, Yin Chen, Xueqian Li, Hongda Zhang, Lingyun Xu, Lu Li, Zhenzhong Jia, Jianmin Ji, Yuqing He

Place recognition and loop closure detection are challenging for long-term visual navigation tasks. SeqSLAM is considered one of the most successful approaches to long-term localization under varying environmental conditions and changing viewpoints, but it depends on a brute-force, time-consuming sequential matching method. We propose MRS-VPR, a multi-resolution, sampling-based place recognition method that significantly improves the efficiency and accuracy of sequential matching. The novelty of this method lies in its coarse-to-fine searching pipeline and a particle-filter-based global sampling scheme, which balance matching efficiency and accuracy in long-term navigation tasks. Moreover, our model works much better than SeqSLAM when the testing sequence is much smaller than the reference sequence. Our experiments demonstrate that the proposed method efficiently locates short temporary trajectories within long-term reference ones without losing accuracy compared to SeqSLAM.
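The coarse-to-fine pipeline can be sketched as a two-stage nearest-neighbor search: scan the long reference sequence at a coarse stride, then refine around the best coarse hit at full resolution. The descriptors below are synthetic (a smooth random walk standing in for a video sequence), and the particle-filter sampling stage is simplified away.

```python
import numpy as np

def best_match(query, reference, coarse_stride=10, radius=15):
    """Return the reference index whose descriptor is closest to `query`."""
    # Coarse pass: distances only at every `coarse_stride`-th frame.
    coarse_idx = np.arange(0, len(reference), coarse_stride)
    d_coarse = np.linalg.norm(reference[coarse_idx] - query, axis=1)
    center = coarse_idx[d_coarse.argmin()]
    # Fine pass: exhaustive search in a small window around the coarse hit.
    lo, hi = max(0, center - radius), min(len(reference), center + radius + 1)
    d_fine = np.linalg.norm(reference[lo:hi] - query, axis=1)
    return lo + int(d_fine.argmin())

rng = np.random.default_rng(1)
# Smooth random walk: consecutive frames are similar, distant ones are not,
# mimicking the temporal coherence of a real place sequence.
reference = np.cumsum(0.1 * rng.standard_normal((1000, 64)), axis=0)
query = reference[437] + 0.01 * rng.standard_normal(64)
print(best_match(query, reference))            # recovers the true frame, 437
```

The coarse pass costs `len(reference) / coarse_stride` comparisons instead of `len(reference)`, which is where the efficiency gain over brute-force sequential matching comes from.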

* 6 pages, 5 figures, ICRA 2019, accepted 

Champion Team Paper: Dynamic Passing-Shooting Algorithm Based on CUDA of The RoboCup SSL 2019 Champion

Sep 17, 2019
Zexi Chen, Haodong Zhang, Dashun Guo, Shenhan Jia, Xianze Fang, Zheyuan Huang, Yunkai Wang, Peng Hu, Licheng Wen, Lingyun Chen, Zhengxi Li, Rong Xiong

ZJUNlict became the Small Size League champion of RoboCup 2019 with 6 victories and 1 tie in its 7 games. Its overwhelming ball-handling and passing ability allowed ZJUNlict to greatly threaten its opponents while keeping its own goal almost free of threats. This paper presents the core technology behind its ball handling and robot movement, which consists of hardware optimization, a dynamic passing and shooting strategy, and multi-agent cooperation and formation. We first describe the mechanical optimization of the capacitor placement, the redesign of the dribbler's damping system, and the electrical optimization of the core-chip replacement. We then describe our passing point algorithm. The passing and shooting strategy can be separated into two parts: we search for the passing point with SBIP-DPPS and evaluate the point based on the ball model. The statements and conclusions are supported by the performance and game logs from the RoboCup 2019 Small Size League.
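A passing-point search of this kind boils down to scoring candidate points and keeping the best. The toy scorer below compares ball travel time against the fastest opponent's intercept time; the speeds, positions, and scoring rule are hypothetical illustrations, not the SBIP-DPPS algorithm or ZJUNlict's ball model.

```python
import math

BALL_SPEED = 4.0    # m/s, assumed flat-pass speed
ROBOT_SPEED = 2.5   # m/s, assumed opponent top speed

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def score(point, ball, opponents):
    """Safety margin: how much earlier the ball arrives than any opponent."""
    t_ball = dist(ball, point) / BALL_SPEED
    t_opp = min(dist(o, point) / ROBOT_SPEED for o in opponents)
    return t_opp - t_ball          # larger = safer pass

ball = (0.0, 0.0)
opponents = [(1.0, 1.0), (-2.0, 0.5)]
candidates = [(2.0, -2.0), (0.5, 1.0), (3.0, 0.0)]
best = max(candidates, key=lambda p: score(p, ball, opponents))
print(best)                        # (2.0, -2.0): farthest from both opponents
```

The paper's contribution is evaluating a dense grid of such candidates fast enough for real-time play, which is why the search runs on CUDA.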

* RoboCup SSL 2019 Champion Paper 

MaskGAN: Towards Diverse and Interactive Facial Image Manipulation

Jul 27, 2019
Cheng-Han Lee, Ziwei Liu, Lingyun Wu, Ping Luo

Facial image manipulation has achieved great progress in recent years. However, previous methods either operate on a predefined set of face attributes or leave users little freedom to interactively manipulate images. To overcome these drawbacks, we propose a novel framework termed MaskGAN, enabling diverse and interactive face manipulation. Our key insight is that semantic masks serve as a suitable intermediate representation for flexible face manipulation with fidelity preservation. MaskGAN has two main components: 1) the Dense Mapping Network and 2) Editing Behavior Simulated Training. Specifically, the Dense Mapping Network learns a style mapping between a free-form user-modified mask and a target image, enabling diverse generation results. Editing Behavior Simulated Training models the user's editing behavior on the source mask, making the overall framework more robust to various manipulated inputs. To facilitate extensive studies, we construct a large-scale, high-resolution face dataset with fine-grained mask annotations named CelebAMask-HQ. MaskGAN is comprehensively evaluated on two challenging tasks, attribute transfer and style copy, demonstrating superior performance over other state-of-the-art methods. The code, models and dataset are available at \url{}.

* High-resolution face parsing dataset as well as new face manipulation model. The code, models and dataset are available at \url{} 

LFZip: Lossy compression of multivariate floating-point time series data via improved prediction

Nov 01, 2019
Shubham Chandak, Kedar Tatwawadi, Chengtao Wen, Lingyun Wang, Juan Aparicio, Tsachy Weissman

Time series data compression is emerging as an important problem with the growth in IoT devices and sensors. Due to the presence of noise in these datasets, lossy compression can often provide significant compression gains without impacting the performance of downstream applications. In this work, we propose an error-bounded lossy compressor, LFZip, for multivariate floating-point time series data that guarantees reconstruction to within a user-specified maximum absolute error. The compressor is based on the prediction-quantization-entropy coder framework and benefits from improved prediction using linear models and neural networks. We evaluate the compressor on several time series datasets, where it outperforms the existing state-of-the-art error-bounded lossy compressors. The code and data are available at
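The prediction-quantization part of the framework can be sketched in a few lines: predict each sample from the previous reconstruction, uniformly quantize the residual with step `2 * max_err`, and reconstruct. The predictor here is simply "previous value"; LFZip itself uses linear models or neural networks and entropy-codes the quantized residuals, which this sketch omits.

```python
import numpy as np

def compress(x, max_err):
    """Quantize residuals so every reconstructed sample is within max_err."""
    step = 2.0 * max_err
    recon_prev, codes = 0.0, []
    for sample in x:
        residual = sample - recon_prev           # prediction error
        q = int(round(residual / step))          # quantized residual index
        codes.append(q)                          # would be entropy-coded
        recon_prev = recon_prev + q * step       # track the decoder's state
    return codes

def decompress(codes, max_err):
    """Replay the quantized residuals to rebuild the reconstruction."""
    step, recon, out = 2.0 * max_err, 0.0, []
    for q in codes:
        recon += q * step
        out.append(recon)
    return np.array(out)

rng = np.random.default_rng(2)
x = np.sin(np.linspace(0, 4, 200)) + 0.01 * rng.standard_normal(200)
xhat = decompress(compress(x, max_err=0.05), max_err=0.05)
print(np.max(np.abs(x - xhat)))                  # stays within the 0.05 bound
```

Because the encoder predicts from its own reconstruction rather than the original samples, quantization error cannot accumulate, which is what makes the maximum-absolute-error guarantee hold.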
