Models, code, and papers for "Yixiong Liang":

A Deep Gradient Boosting Network for Optic Disc and Cup Segmentation

Nov 05, 2019
Qing Liu, Beiji Zou, Yang Zhao, Yixiong Liang

Segmentation of the optic disc (OD) and optic cup (OC) is critical in automated fundus image analysis systems. Existing state-of-the-art methods focus on designing deep neural networks with one or more dense prediction branches. Such designs ignore the connections among prediction branches, and their learning capacity is limited. To build connections among prediction branches, this paper introduces the gradient boosting framework into deep classification models and proposes a gradient boosting network called BoostNet. Specifically, deformable side-output units and aggregation units with deep supervision are proposed to learn the base functions and expansion coefficients of the gradient boosting framework. By stacking aggregation units in a deep-to-shallow manner, the model's performance is gradually boosted from the deep stages to the shallow ones. BoostNet achieves superior results to existing deep OD and OC segmentation networks on the public ORIGA dataset.
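
To make the boosting idea concrete, here is a minimal PyTorch-style sketch of stagewise aggregation over multi-scale features; the class name, stage count, and the plain 1x1 convolutions standing in for the deformable side-output units are illustrative assumptions, not BoostNet's actual design:

```python
import torch
import torch.nn as nn

class BoostedSegHead(nn.Module):
    """Toy gradient-boosting head: each stage adds a weighted base prediction
    to the running estimate, F_k = F_{k-1} + c_k * h_k (hypothetical sketch)."""

    def __init__(self, channels, num_stages=4, num_classes=3):
        super().__init__()
        # Plain 1x1 convs stand in for the paper's deformable side-output units.
        self.side_outputs = nn.ModuleList(
            nn.Conv2d(channels, num_classes, kernel_size=1)
            for _ in range(num_stages)
        )
        # Learnable expansion coefficients, one per stage.
        self.coeffs = nn.Parameter(torch.ones(num_stages))

    def forward(self, stage_features):
        # stage_features: deepest first, all resized to a common resolution.
        prediction, per_stage = 0, []
        for k, feats in enumerate(stage_features):
            base = self.side_outputs[k](feats)               # base function h_k
            prediction = prediction + self.coeffs[k] * base  # boosted update
            per_stage.append(prediction)                     # for deep supervision
        return prediction, per_stage
```

Deep supervision would then attach a segmentation loss to every entry of `per_stage`, pushing each stage to correct the residual error of the one before it.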


CNN-Based Automatic Urinary Particles Recognition

Mar 06, 2018
Rui Kang, Yixiong Liang, Chunyan Lian, Yuan Mao

Urine sediment analysis of particles in microscopic images can assist physicians in evaluating patients with renal and urinary tract diseases. Manual urine sediment examination is labor-intensive, subjective and time-consuming, and traditional automatic algorithms often rely on hand-crafted features for recognition. Instead of using hand-crafted features, in this paper we exploit CNNs to learn features in an end-to-end manner to recognize urine particles. We treat urine particle recognition as an object detection problem and exploit two state-of-the-art CNN-based object detection methods, Faster R-CNN and SSD, as well as their variants. We further investigate different factors involved in these CNN-based object detection methods for urine particle recognition. We comprehensively evaluate these methods on a dataset consisting of 5,376 annotated images covering 7 categories of urine particles, i.e., erythrocyte, leukocyte, epithelial cell, crystal, cast, mycete and epithelial nuclei, and obtain a best mAP (mean average precision) of 84.1% while taking only 72 ms per image on an NVIDIA Titan X GPU.

* The manuscript was submitted to the Journal of Medical Systems on Jul 02, 2017
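
Since the paper evaluates off-the-shelf detectors rather than a new architecture, the workflow can be approximated with any modern detection library. Below is a hedged sketch using torchvision's Faster R-CNN as a stand-in; this is not the authors' implementation, and the confidence threshold is arbitrary:

```python
import torch
import torchvision

# Generic pretrained Faster R-CNN as a stand-in; fine-tuning its box-predictor
# head on the 7 urine-particle categories would mirror the paper's setup.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 800, 800)  # a microscopy image as a CHW tensor in [0, 1]

with torch.no_grad():
    # One dict per image: 'boxes' (N x 4), 'labels' (N,), 'scores' (N,).
    detections = model([image])[0]

keep = detections["scores"] > 0.5  # arbitrary confidence threshold
print(detections["boxes"][keep], detections["labels"][keep])
```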

Feature selection via simultaneous sparse approximation for person specific face verification

May 06, 2011
Yixiong Liang, Lei Wang, Shenghui Liao, Beiji Zou

There is an increasing use of imperceivable and redundant local features for face recognition. Since only a relatively small fraction of them is relevant to the final recognition task, feature selection is a crucial and necessary step to select the most discriminant ones and obtain a compact face representation. In this paper, we investigate sparsity-enforced regularization-based feature selection methods and propose a multi-task feature selection method for building person-specific models for face verification. We assume that the person-specific models share a common subset of features and reformulate this common subset selection problem as a simultaneous sparse approximation problem. To the best of our knowledge, this is the first application of sparsity-enforced regularization methods to person-specific face verification. The effectiveness of the proposed methods is verified on the challenging LFW face database.
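
The common-support selection the abstract describes is typically solved with routines such as Simultaneous Orthogonal Matching Pursuit (SOMP). The following is a generic textbook SOMP sketch in NumPy under that assumption, not the paper's exact solver:

```python
import numpy as np

def somp(Phi, Y, k):
    """Simultaneous Orthogonal Matching Pursuit: greedily pick one support of
    size k shared by every column (task) of Y.

    Phi : (n, d) dictionary, one column per candidate feature
    Y   : (n, T) one column per person-specific verification task
    """
    residual = Y.copy()
    support, coef = [], None
    for _ in range(k):
        # Correlation of every atom with the residuals, summed over tasks.
        scores = np.abs(Phi.T @ residual).sum(axis=1)
        scores[support] = -np.inf  # never reselect an atom
        support.append(int(np.argmax(scores)))
        # Jointly refit all tasks on the current common support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], Y, rcond=None)
        residual = Y - Phi[:, support] @ coef
    return support, coef

# Dummy usage: 50 samples, 300 candidate features, 4 tasks, shared support of 10.
rng = np.random.default_rng(0)
support, coef = somp(rng.normal(size=(50, 300)), rng.normal(size=(50, 4)), k=10)
```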


Feature Selection via Sparse Approximation for Face Recognition

Feb 14, 2011
Yixiong Liang, Lei Wang, Yao Xiang, Beiji Zou

Inspired by biological vision systems, over-complete local features with huge cardinality have been increasingly used for face recognition over the last decades. Accordingly, feature selection has become more and more important and plays a critical role in face data description and recognition. In this paper, we propose a trainable feature selection algorithm based on a regularization framework for face recognition. By enforcing a sparsity penalty term on the minimum squared error (MSE) criterion, we cast the feature selection problem as a combinatorial sparse approximation problem, which can be solved by greedy methods or convex relaxation methods. Moreover, based on the same framework, we propose a sparse Ho-Kashyap (HK) procedure to simultaneously obtain the optimal sparse solution and the corresponding margin vector of the MSE criterion. The proposed methods are used to select the most informative Gabor features of face images for recognition, and the experimental results on benchmark face databases demonstrate their effectiveness.
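
As a concrete instance of the convex-relaxation route the abstract mentions, one can impose an l1 penalty on the MSE criterion and read the selected features off the nonzero coefficients. A minimal scikit-learn sketch with dummy data (the sparse Ho-Kashyap procedure itself is not reproduced here):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Dummy data: one row per face image, one column per Gabor feature bin;
# targets are +1/-1 class labels, as in an MSE-based two-class criterion.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))
y = np.sign(rng.normal(size=200))

# l1-penalized MSE: nonzero coefficients mark the selected features.
model = Lasso(alpha=0.05).fit(X, y)
selected = np.flatnonzero(model.coef_)
print(f"kept {selected.size} of {X.shape[1]} features")
```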


Scale-Invariant Structure Saliency Selection for Fast Image Fusion

Oct 30, 2018
Yixiong Liang, Yuan Mao, Jiazhi Xia, Yao Xiang, Jianfeng Liu

In this paper, we present a fast yet effective method for pixel-level scale-invariant image fusion in the spatial domain based on scale-space theory. Specifically, we propose a scale-invariant structure saliency selection scheme based on the difference-of-Gaussian (DoG) pyramid of the images to build the weights or activity map. Owing to the scale-invariant structure saliency selection, our method preserves both the details of small objects and the integrity of large objects in the images. In addition, our method is very efficient, since no complex operations are involved, and easy to implement, so it can be used for fast fusion of high-resolution images. Experimental results demonstrate that the proposed method yields competitive or even better results compared to state-of-the-art image fusion methods, both in terms of visual quality and objective evaluation metrics. Furthermore, the proposed method is very fast and can be used to fuse high-resolution images in real time. Code is available at https://github.com/yiqingmy/Fusion.
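
A simplified reading of the scheme, building a DoG pyramid, taking the strongest absolute response across scales as the activity map, and fusing pixels winner-take-all, might look as follows; the scale set and the hard selection rule are assumptions, not the authors' exact choices:

```python
import cv2
import numpy as np

def dog_activity(gray, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Activity map from a difference-of-Gaussian pyramid: the strongest
    absolute DoG response across scales. The scale set is an assumption."""
    gray = gray.astype(np.float32)
    blurred = [cv2.GaussianBlur(gray, (0, 0), s) for s in sigmas]
    dogs = [np.abs(blurred[i] - blurred[i + 1]) for i in range(len(sigmas) - 1)]
    return np.max(dogs, axis=0)

def fuse(img_a, img_b):
    """Winner-take-all fusion of two registered images by DoG saliency."""
    act_a = dog_activity(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY))
    act_b = dog_activity(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY))
    mask = (act_a >= act_b)[..., None]  # per-pixel weight map
    return np.where(mask, img_a, img_b)
```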


Comparison Detector: A novel object detection method for small dataset

Oct 14, 2018
Zhihong Tang, Yixiong Liang, Meng Yan, Jialin Chen, Jianfeng Liu

Though object detection has shown great success when the training set is sufficient, it generalizes poorly in the small-dataset scenario, which is nevertheless unavoidable in some applications, especially medicine. In this paper, we propose the Comparison Detector, which maintains the end-to-end fashion in training and testing and surpasses the state-of-the-art two-stage object detection model on small datasets. Inspired by one/few-shot learning, we replace the parametric classifier in the feature pyramid network (FPN) with a comparison classifier implemented in a non-parametric or semi-parametric manner. In effect, a stronger inductive bias is added to the model to simplify the problem and reduce its dependence on data. The performance of our model is evaluated on a cervical cancer pathology test set. When trained on the small dataset, it achieves an mAP of 26.3% and an AR of 35.7%, both improving by about 20 points over the baseline model. Moreover, the Comparison Detector achieves the same mAP as the current state-of-the-art model when trained on the medium dataset, and improves AR by 4 points. Our method is promising for the development of object detection on small datasets.
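
The non-parametric variant of such a comparison classifier can be sketched as scoring each region proposal against per-category prototype embeddings; the cosine-similarity choice and prototype construction below are schematic assumptions, not the released Comparison Detector head:

```python
import torch
import torch.nn.functional as F

def comparison_classify(roi_feats, prototypes):
    """Score region features by cosine similarity to per-category prototypes
    (e.g. mean embeddings of a few reference examples). Schematic only.

    roi_feats  : (N, D) features of N region proposals
    prototypes : (C, D) one prototype per category
    """
    sims = F.cosine_similarity(
        roi_feats.unsqueeze(1), prototypes.unsqueeze(0), dim=-1
    )  # (N, C) similarity of every proposal to every category
    return sims.softmax(dim=-1)

# Dummy usage: 5 proposals, 256-d features, 11 hypothetical categories.
probs = comparison_classify(torch.randn(5, 256), torch.randn(11, 256))
```

Because classification reduces to comparing against stored prototypes, no per-class weights need to be fit, which is the stronger inductive bias the abstract refers to.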


Multi-task GLOH feature selection for human age estimation

May 06, 2011
Yixiong Liang, Lingbo Liu, Ying Xu, Yao Xiang, Beiji Zou

In this paper, we propose a novel age estimation method based on the GLOH feature descriptor and multi-task learning (MTL). The GLOH feature descriptor, one of the state-of-the-art feature descriptors, is used to capture the age-related local and spatial information of face images. As the extracted GLOH features are often redundant, MTL is designed to select the most informative feature bins for the age estimation problem, while the corresponding weights are determined by ridge regression. This approach largely reduces the feature dimensionality, which not only improves performance but also decreases the computational burden. Experiments on the publicly available FG-NET database show that the proposed method achieves comparable performance to previous approaches while using far fewer features.
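
The final regression step is standard: once the informative GLOH bins are chosen, ridge regression maps them to an age. A minimal scikit-learn sketch with dummy data, where `selected` stands in for the output of the MTL selection stage:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Dummy stand-ins: rows are face images, columns are GLOH feature bins, and
# `selected` plays the role of the bins chosen by the MTL selection stage.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2000))
ages = rng.uniform(1, 70, size=500)
selected = rng.choice(2000, size=100, replace=False)

# Ridge regression determines the weights of the selected bins, as described.
model = Ridge(alpha=1.0).fit(X[:, selected], ages)
print("predicted age:", model.predict(X[:1, selected])[0])
```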


A Novel Automation-Assisted Cervical Cancer Reading Method Based on Convolutional Neural Network

Dec 14, 2019
Yao Xiang, Wanxin Sun, Changli Pan, Meng Yan, Zhihua Yin, Yixiong Liang

While most previous automation-assisted reading methods can improve efficiency, their performance often relies on the success of accurate cell segmentation and hand-crafted feature extraction. This paper presents an efficient and entirely segmentation-free method for automated cervical cell screening that utilizes a modern object detector to directly detect cervical cells or clumps, without the design of specific hand-crafted features. Specifically, we use the state-of-the-art CNN-based object detection method YOLOv3 as our baseline model. In order to improve the classification performance on hard examples, namely four highly similar categories, we cascade an additional task-specific classifier. We also investigate the presence of unreliable annotations and cope with them by smoothing the distribution of noisy labels. We comprehensively evaluate our methods on a test set consisting of 1,014 annotated cervical cell images of size 4000*3000 with complex cellular situations, corresponding to 10 categories. Our model achieves 97.5% sensitivity (Sens) and 67.8% specificity (Spec) for cervical cell image-level screening. Moreover, we obtain a mean Average Precision (mAP) of 63.4% for cervical cell-level diagnosis and improve the Average Precision (AP) of the hard examples, which are valuable but difficult to distinguish. Our automation-assisted cervical cell reading method not only achieves cervical cell image-level classification but also provides more detailed location and category information for abnormal cells. The results indicate the feasible performance of our method, which, together with its efficiency and robustness, provides a new direction for the future development of computer-assisted reading systems in clinical cervical screening.
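
The noisy-label handling can be illustrated with ordinary label smoothing, which redistributes a small probability mass from the annotated class to all classes; the paper's exact smoothing of the noisy-label distribution may differ from this common form:

```python
import torch
import torch.nn.functional as F

def smooth_labels(labels, num_classes=10, eps=0.1):
    """Ordinary label smoothing of one-hot targets; the abstract's handling of
    noisy annotations is of this flavor, though its exact form may differ."""
    one_hot = F.one_hot(labels, num_classes).float()
    return one_hot * (1.0 - eps) + eps / num_classes

# Each row now puts 0.91 on the annotated class and 0.01 on the other nine,
# so an occasional wrong annotation no longer dominates the loss.
targets = smooth_labels(torch.tensor([3, 7, 0]))
```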


Dual-attention Focused Module for Weakly Supervised Object Localization

Sep 11, 2019
Yukun Zhou, Zailiang Chen, Hailan Shen, Qing Liu, Rongchang Zhao, Yixiong Liang

Research on recognizing the most discriminative regions provides referential information for weakly supervised object localization with only image-level annotations. However, the most discriminative regions usually overshadow the other parts of the object, thereby impeding entire-object recognition and localization. To tackle this problem, the Dual-attention Focused Module (DFM) is proposed to enhance object localization performance. Specifically, we present a dual attention module for information fusion, consisting of a position branch and a channel branch. In each branch, the input feature map is decomposed into an enhancement map and a mask map, thereby highlighting the most discriminative parts or hiding them. For the position mask map, we introduce a focused matrix to enhance it, which exploits the principle that the pixels of an object are continuous. Between these two branches, the enhancement map is integrated with the mask map, aiming to partially compensate for the lost information and diversify the features. With the dual-attention module and the focused matrix, the entire object region can be precisely recognized from implicit information. We demonstrate the outperforming results of DFM in experiments. In particular, DFM achieves state-of-the-art localization accuracy on ILSVRC 2016 and CUB-200-2011.

* 8 pages, 6 figures and 4 tables 
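
A toy version of one branch clarifies the enhancement/mask split: a spatial attention map highlights the most discriminative locations, and its complement exposes the rest of the object. The thresholding and single-branch form below are simplifications; the real DFM adds a channel branch and the focused matrix:

```python
import torch
import torch.nn as nn

class PositionBranchSketch(nn.Module):
    """Hypothetical single-branch sketch of DFM's enhancement/mask split."""

    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        attn = torch.sigmoid(self.score(x))  # (B, 1, H, W) spatial attention
        enhancement = x * attn               # highlight discriminative parts
        # Complementary mask: keep only the less-attended locations, forcing
        # the network to also use the non-dominant parts of the object.
        mask = x * (attn < attn.mean()).float()
        return enhancement, mask
```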

Efficient Misalignment-Robust Multi-Focus Microscopical Images Fusion

Dec 21, 2018
Yixiong Liang, Yuan Mao, Zhihong Tang, Meng Yan, Yuqian Zhao, Jianfeng Liu

In this paper we propose a very efficient method to fuse unregistered multi-focus microscopical images based on speeded-up robust features (SURF). Our method follows a pipeline of registration followed by fusion. However, instead of treating registration and fusion as two completely independent stages, we propose to reuse the determinant of the approximate Hessian generated in the SURF detection stage as the salient response for the final image fusion, which enables nearly cost-free saliency map generation. In addition, thanks to the adoption of the SURF scale-space representation, our method can generate a scale-invariant saliency map, which is desirable for scale-invariant image fusion. We present an extensive evaluation on a dataset consisting of several groups of unregistered multi-focus 4K ultra-HD microscopic images of size 4112 x 3008. Compared with state-of-the-art multi-focus image fusion methods, our method is much faster and achieves better visual results. Our method provides a flexible and efficient way to integrate the complementary and redundant information from multiple unregistered multi-focus ultra-HD images into a fused image that contains a better description than any of the individual input images. Code is available at https://github.com/yiqingmy/JointRF.

* 14 pages, 11 figures 
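
The reused saliency cue is the determinant of the Hessian that SURF thresholds during keypoint detection. A plain Gaussian-derivative version of that response, applied to already-registered images, might look like this; SURF's box-filter approximation and the registration step (feature matching plus warping) are omitted:

```python
import cv2
import numpy as np

def hessian_response(gray, sigma=2.0):
    """Determinant-of-Hessian saliency, the quantity SURF thresholds during
    detection, computed here with plain Gaussian derivatives rather than
    SURF's box-filter approximation."""
    g = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigma)
    dxx = cv2.Sobel(g, cv2.CV_32F, 2, 0, ksize=5)
    dyy = cv2.Sobel(g, cv2.CV_32F, 0, 2, ksize=5)
    dxy = cv2.Sobel(g, cv2.CV_32F, 1, 1, ksize=5)
    return dxx * dyy - (0.9 * dxy) ** 2  # 0.9 is SURF's correction factor

def fuse_registered(img_a, img_b):
    """Pick, per pixel, the image with the stronger Hessian response."""
    resp_a = hessian_response(cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY))
    resp_b = hessian_response(cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY))
    mask = (np.abs(resp_a) >= np.abs(resp_b))[..., None]
    return np.where(mask, img_a, img_b)
```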
