Models, code, and papers for "Hanzi Wang":

Learning Hough Regression Models via Bridge Partial Least Squares for Object Detection

Mar 26, 2016
Jianyu Tang, Hanzi Wang, Yan Yan

Popular Hough Transform-based object detection approaches usually construct an appearance codebook by clustering local image features. However, how to choose appropriate values for the parameters used in the clustering step remains an open problem. Moreover, some popular histogram features extracted from overlapping image blocks may cause a high degree of redundancy and multicollinearity. In this paper, we propose a novel Hough Transform-based object detection approach. First, to address the above issues, we exploit a Bridge Partial Least Squares (BPLS) technique to establish context-encoded Hough Regression Models (HRMs), which are linear regression models that cast probabilistic Hough votes to predict object locations. BPLS is an efficient variant of Partial Least Squares (PLS). PLS-based regression techniques (including BPLS) can reduce the redundancy and eliminate the multicollinearity of a feature set, and the appropriate value of the only parameter used in PLS (i.e., the number of latent components) can be determined by a cross-validation procedure. Second, to efficiently handle object scale changes, we propose a novel multi-scale voting scheme, in which multiple Hough images corresponding to multiple object scales can be obtained simultaneously. Third, an object in a test image may correspond to multiple true and false positive hypotheses at different scales. Based on the proposed multi-scale voting scheme, a principled strategy is proposed to fuse hypotheses and reduce false positives by evaluating the normalized pointwise mutual information between hypotheses. In the experiments, we also compare the proposed HRM approach with several of its variants to evaluate the influence of each component on performance. Experimental results show that the proposed HRM approach achieves desirable performance on popular benchmark datasets.

* Neurocomputing, 2015, 152(3): 236-249 
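
The regression step can be illustrated with standard PLS from scikit-learn, selecting the number of latent components by cross-validation as the abstract describes. This is a minimal sketch of the generic PLS idea under made-up feature dimensions, not the paper's BPLS variant or its context encoding:

```python
# Minimal sketch: learn a linear PLS regressor that maps local image
# features to offsets from the object centre (Hough votes). This uses
# standard PLS from scikit-learn, not the paper's BPLS variant.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))          # toy block-histogram features
y = X @ rng.normal(size=(128, 2)) + 0.1 * rng.normal(size=(500, 2))  # (dx, dy) votes

# The only free parameter is the number of latent components; pick it by CV.
search = GridSearchCV(PLSRegression(), {"n_components": list(range(1, 16))}, cv=5)
search.fit(X, y)
votes = search.predict(X[:5])            # predicted (dx, dy) offsets
print(search.best_params_, votes.shape)
```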

Multi-Subregion Based Correlation Filter Bank for Robust Face Recognition

Mar 24, 2016
Yan Yan, Hanzi Wang, David Suter

In this paper, we propose an effective feature extraction algorithm, called Multi-Subregion based Correlation Filter Bank (MS-CFB), for robust face recognition. MS-CFB combines the benefits of global-based and local-based feature extraction algorithms, where multiple correlation filters corresponding to different face subregions are jointly designed to optimize the overall correlation outputs. Furthermore, we reduce the computational complexity of MS-CFB by designing the correlation filter bank in the spatial domain and improve its generalization capability by capitalizing on the unconstrained form during the filter bank design process. MS-CFB not only takes the differences among face subregions into account, but also effectively exploits the discriminative information in face subregions. Experimental results on various public face databases demonstrate that the proposed algorithm provides a better feature representation for classification and achieves higher recognition rates compared with several state-of-the-art algorithms.

* Pattern Recognition, 2014, 47(11): 3487-3501 
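
The operation shared by all correlation-filter methods, producing a correlation output plane whose peak localizes a pattern, can be sketched as follows. This generic frequency-domain illustration is not the MS-CFB design itself, which jointly optimizes multiple filters in the spatial domain:

```python
# Generic correlation-filter sketch: correlate a learned template with an
# image subregion via the FFT. MS-CFB designs its filters jointly in the
# spatial domain; this only illustrates how a correlation output is formed.
import numpy as np

def correlation_output(subregion: np.ndarray, filt: np.ndarray) -> np.ndarray:
    F = np.fft.fft2(subregion)
    H = np.fft.fft2(filt, s=subregion.shape)
    # Correlation = inverse FFT of the product with the conjugate filter.
    return np.real(np.fft.ifft2(F * np.conj(H)))

face_subregion = np.random.rand(32, 32)   # e.g. an eye region (toy data)
filt = np.random.rand(32, 32)
plane = correlation_output(face_subregion, filt)
peak = np.unravel_index(np.argmax(plane), plane.shape)
print("correlation peak at", peak)
```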

Efficient Semidefinite Spectral Clustering via Lagrange Duality

Feb 22, 2014
Yan Yan, Chunhua Shen, Hanzi Wang

We propose an efficient approach to semidefinite spectral clustering (SSC), which addresses the Frobenius normalization with the positive semidefinite (p.s.d.) constraint for spectral clustering. Compared with the original Frobenius norm approximation based algorithm, the proposed algorithm can more accurately find the closest doubly stochastic approximation to the affinity matrix by considering the p.s.d. constraint. In this paper, SSC is formulated as a semidefinite programming (SDP) problem. In order to address the high computational complexity of SDP, we present a dual algorithm based on the Lagrange dual formulation. Two versions of the proposed algorithm are presented: one with less memory usage and the other with a faster convergence rate. The proposed algorithm has much lower time complexity than that of standard interior-point based SDP solvers. Experimental results on both UCI data sets and real-world image data sets demonstrate that 1) compared with the state-of-the-art spectral clustering methods, the proposed algorithm achieves better clustering performance; and 2) our algorithm is much more efficient and can solve larger-scale SSC problems than those standard interior-point SDP solvers.

* 13 pages 
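
As background for the problem SSC addresses, the sketch below approximates the closest doubly stochastic matrix to an affinity matrix under the Frobenius norm by simple alternating projections. The paper's contribution, adding the p.s.d. constraint and solving the resulting SDP through Lagrange duality, is not reproduced here:

```python
# Baseline sketch: approximately project an affinity matrix onto the set
# of doubly stochastic matrices (row sums 1, column sums 1, nonnegative)
# by alternating projections. SSC additionally enforces the p.s.d.
# constraint, which turns this into an SDP solved via its Lagrange dual.
import numpy as np

def doubly_stochastic(A: np.ndarray, iters: int = 500) -> np.ndarray:
    X, n = A.copy(), A.shape[0]
    for _ in range(iters):
        X -= (X.sum(axis=1, keepdims=True) - 1.0) / n   # fix row sums
        X -= (X.sum(axis=0, keepdims=True) - 1.0) / n   # fix column sums
        X = np.clip(X, 0.0, None)                        # nonnegativity
    return X

A = np.random.rand(6, 6)
A = (A + A.T) / 2                                        # symmetric affinity
P = doubly_stochastic(A)
print(P.sum(axis=0).round(3), P.sum(axis=1).round(3))    # ~ all ones
```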

Generalized Kernel-based Visual Tracking

Jun 07, 2009
Chunhua Shen, Junae Kim, Hanzi Wang

In this work, we generalize plain mean shift (MS) trackers and attempt to overcome two limitations of standard MS trackers. It is well known that modeling and maintaining a representation of a target object is an important component of a successful visual tracker. However, little work has been done on building a robust template model for kernel-based MS tracking. In contrast to building a template from a single frame, we train a robust object representation model from a large amount of data. Tracking is viewed as a binary classification problem, and a discriminative classification rule is learned to distinguish between the object and the background. We adopt a support vector machine (SVM) for training. The tracker is then implemented by maximizing the classification score. An iterative optimization scheme very similar to MS is derived for this purpose.

* IEEE Transactions on Circuits and Systems for Video Technology, 2010 
* 12 pages 
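
The tracking-as-classification idea can be sketched as follows: an SVM separates object from background patch features, and tracking moves to the candidate window with the highest classification score. The features and the exhaustive local search below are toy placeholders; the paper instead derives a mean-shift-like iterative optimization:

```python
# Toy sketch of tracking-as-classification: an SVM separates object from
# background features, and tracking moves to the candidate window with the
# highest classification score. The features are synthetic placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
obj = rng.normal(1.0, 1.0, size=(200, 64))     # object-patch features (toy)
bg = rng.normal(-1.0, 1.0, size=(200, 64))     # background features (toy)
clf = LinearSVC().fit(np.vstack([obj, bg]), [1] * 200 + [0] * 200)

# One tracking step: score features of shifted candidate windows and move
# to the best one (the paper derives a mean-shift-like update instead).
candidates = rng.normal(0.5, 1.0, size=(9, 64))  # 9 shifted windows (toy)
best = int(np.argmax(clf.decision_function(candidates)))
print("move to candidate window", best)
```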

Real-Time High-Performance Semantic Image Segmentation of Urban Street Scenes

Apr 03, 2020
Genshun Dong, Yan Yan, Chunhua Shen, Hanzi Wang

Deep Convolutional Neural Networks (DCNNs) have recently shown outstanding performance in semantic image segmentation. However, state-of-the-art DCNN-based semantic segmentation methods usually suffer from high computational complexity due to the use of complex network architectures. This greatly limits their applications in real-world scenarios that require real-time processing. In this paper, we propose a real-time high-performance DCNN-based method for robust semantic segmentation of urban street scenes, which achieves a good trade-off between accuracy and speed. Specifically, a Lightweight Baseline Network with Atrous convolution and Attention (LBN-AA) is firstly used as our baseline network to efficiently obtain dense feature maps. Then, the Distinctive Atrous Spatial Pyramid Pooling (DASPP), which exploits the different sizes of pooling operations to encode the rich and distinctive semantic information, is developed to detect objects at multiple scales. Meanwhile, a Spatial detail-Preserving Network (SPN) with shallow convolutional layers is designed to generate high-resolution feature maps preserving the detailed spatial information. Finally, a simple but practical Feature Fusion Network (FFN) is used to effectively combine both shallow and deep features from the spatial branch (SPN) and the semantic branch (DASPP), respectively. Extensive experimental results show that the proposed method achieves 73.6% and 68.0% mean Intersection over Union (mIoU) at inference speeds of 51.0 fps and 39.3 fps on the challenging Cityscapes and CamVid test datasets, respectively (using only a single NVIDIA TITAN X card). This demonstrates that the proposed method offers excellent performance at real-time speed for semantic segmentation of urban street scenes.
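
A schematic PyTorch sketch of the two-branch design, a deeper semantic branch with atrous (dilated) convolutions and a shallow spatial branch whose outputs are fused for prediction, is given below. The channel sizes and depths are illustrative assumptions and do not reproduce the LBN-AA, DASPP, SPN or FFN configurations of the paper:

```python
# Schematic two-branch segmentation sketch: a dilated semantic branch plus
# a shallow high-resolution spatial branch, fused for per-pixel prediction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchSeg(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        # Semantic branch: downsampled, dilated convs enlarge the receptive field.
        self.semantic = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=4, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=4, dilation=4), nn.ReLU(inplace=True),
        )
        # Spatial branch: shallow, keeps high resolution for detail.
        self.spatial = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(64 + 32, num_classes, 1)

    def forward(self, x):
        sem = self.semantic(x)
        spa = self.spatial(x)
        sem = F.interpolate(sem, size=spa.shape[2:], mode="bilinear",
                            align_corners=False)
        out = self.classifier(torch.cat([sem, spa], dim=1))   # fuse branches
        return F.interpolate(out, size=x.shape[2:], mode="bilinear",
                             align_corners=False)

logits = TwoBranchSeg()(torch.randn(1, 3, 256, 512))
print(logits.shape)   # (1, 19, 256, 512)
```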


Learning Object Scale With Click Supervision for Object Detection

Feb 20, 2020
Liao Zhang, Yan Yan, Lin Cheng, Hanzi Wang

Weakly-supervised object detection has recently attracted increasing attention since it only requires image-level annotations. However, the performance obtained by existing methods is still far from being satisfactory compared with fully-supervised object detection methods. To achieve a good trade-off between annotation cost and object detection performance, we propose a simple yet effective method which incorporates CNN visualization with click supervision to generate the pseudo ground-truths (i.e., bounding boxes). These pseudo ground-truths can be used to train a fully-supervised detector. To estimate the object scale, we firstly adopt a proposal selection algorithm to preserve high-quality proposals, and then generate Class Activation Maps (CAMs) for these preserved proposals by the proposed CNN visualization algorithm called Spatial Attention CAM. Finally, we fuse these CAMs together to generate pseudo ground-truths and train a fully-supervised object detector with these ground-truths. Experimental results on the PASCAL VOC 2007 and VOC 2012 datasets show that the proposed method can obtain much higher accuracy for estimating the object scale, compared with the state-of-the-art image-level based methods and the center-click based method.
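
The step from a fused class activation map plus a click to a pseudo ground-truth box can be sketched as follows. The threshold and the connected-component rule are assumptions for illustration; the paper's Spatial Attention CAM and fusion scheme are not reproduced:

```python
# Sketch: turn a fused CAM plus a user click into a pseudo ground-truth box
# by thresholding the CAM, keeping the connected region that contains the
# click, and taking its bounding box.
import numpy as np
from scipy import ndimage

def pseudo_box(cam: np.ndarray, click: tuple, thresh: float = 0.5):
    """cam: 2D activation map; click: (row, col) of the user click."""
    mask = cam >= thresh * cam.max()
    labels, _ = ndimage.label(mask)
    lab = labels[click]
    if lab == 0:
        return None                            # click fell outside the mask
    ys, xs = np.nonzero(labels == lab)         # component containing the click
    return xs.min(), ys.min(), xs.max(), ys.max()   # (x1, y1, x2, y2)

cam = np.zeros((64, 64)); cam[20:40, 10:30] = 1.0
print(pseudo_box(cam, click=(30, 20)))         # (10, 20, 29, 39)
```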


Multi-task Learning of Cascaded CNN for Facial Attribute Classification

May 03, 2018
Ni Zhuang, Yan Yan, Si Chen, Hanzi Wang

Recently, facial attribute classification (FAC) has attracted significant attention in the computer vision community. Great progress has been made along with the availability of challenging FAC datasets. However, conventional FAC methods usually firstly pre-process the input images (i.e., perform face detection and alignment) and then predict facial attributes. These methods ignore the inherent dependencies among these tasks (i.e., face detection, facial landmark localization and FAC). Moreover, some methods using convolutional neural network are trained based on the fixed loss weights without considering the differences between facial attributes. In order to address the above problems, we propose a novel multi-task learning of cascaded convolutional neural network method, termed MCFA, for predicting multiple facial attributes simultaneously. Specifically, the proposed method takes advantage of three cascaded sub-networks (i.e., S_Net, M_Net and L_Net corresponding to the neural networks under different scales) to jointly train multiple tasks in a coarse-to-fine manner, which can achieve end-to-end optimization. Furthermore, the proposed method automatically assigns the loss weight to each facial attribute based on a novel dynamic weighting scheme, thus making the proposed method concentrate on predicting the more difficult facial attributes. Experimental results show that the proposed method outperforms several state-of-the-art FAC methods on the challenging CelebA and LFWA datasets.
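
A generic dynamic loss-weighting scheme, in which attributes with higher current loss (harder attributes) receive proportionally larger weights, can be sketched as below; MCFA's actual weighting rule may differ:

```python
# Generic sketch of dynamic loss weighting across facial attributes:
# harder attributes (higher current loss) get proportionally larger
# weights. This illustrates the idea only, not MCFA's exact rule.
import numpy as np

def dynamic_weights(per_attribute_losses: np.ndarray) -> np.ndarray:
    w = per_attribute_losses / per_attribute_losses.sum()
    return w * len(per_attribute_losses)       # normalized to mean weight 1

losses = np.array([0.9, 0.2, 0.5, 0.4])        # e.g. 4 facial attributes
total = float(np.dot(dynamic_weights(losses), losses))
print(dynamic_weights(losses).round(2), round(total, 3))
```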


Superpixel-guided Two-view Deterministic Geometric Model Fitting

May 03, 2018
Guobao Xiao, Hanzi Wang, Yan Yan, David Suter

Geometric model fitting is a fundamental research topic in computer vision that aims to fit and segment multiple-structure data. In this paper, we propose a novel superpixel-guided two-view geometric model fitting method (called SDF), which can obtain reliable and consistent results for real images. Specifically, SDF includes three main parts: a deterministic sampling algorithm, a model hypothesis updating strategy and a novel model selection algorithm. The proposed deterministic sampling algorithm generates a set of initial model hypotheses according to the prior information of superpixels. Then the proposed updating strategy further improves the quality of model hypotheses. After that, by analyzing the properties of the updated model hypotheses, the proposed model selection algorithm extends the conventional "fit-and-remove" framework to estimate model instances in multiple-structure data. The three parts are tightly coupled to boost the performance of SDF in both speed and accuracy, and SDF is deterministic in nature. Experimental results show that the proposed SDF has significant advantages over several state-of-the-art fitting methods when applied to real images with single-structure and multiple-structure data.

* International Journal of Computer Vision (IJCV), 2018 
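
The conventional "fit-and-remove" framework that SDF extends can be sketched with a toy RANSAC line-fitting loop. SDF replaces the random sampling below with deterministic, superpixel-guided hypothesis generation and a more refined model selection:

```python
# Sketch of the conventional "fit-and-remove" framework: repeatedly fit one
# model instance (a 2D line via RANSAC here) and remove its inliers until
# too few points remain. SDF replaces the random sampling below with
# deterministic, superpixel-guided hypothesis generation.
import numpy as np

def ransac_line(pts, iters=200, tol=0.05, rng=np.random.default_rng(0)):
    best = np.zeros(len(pts), dtype=bool)
    for _ in range(iters):
        p, q = pts[rng.choice(len(pts), 2, replace=False)]
        d = q - p
        n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-12)  # line normal
        inliers = np.abs((pts - p) @ n) < tol     # point-to-line distance test
        if inliers.sum() > best.sum():
            best = inliers
    return best

def fit_and_remove(pts, min_inliers=30):
    instances = []
    while len(pts) >= min_inliers:
        inliers = ransac_line(pts)
        if inliers.sum() < min_inliers:
            break
        instances.append(pts[inliers])
        pts = pts[~inliers]                       # remove inliers, refit rest
    return instances

t = np.linspace(0.0, 1.0, 100)
pts = np.vstack([np.stack([t, 0.5 * t], 1), np.stack([t, 1.0 - t], 1)])
pts += 0.01 * np.random.default_rng(1).normal(size=pts.shape)
print(len(fit_and_remove(pts)), "structures found")  # expect 2
```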

Searching for Representative Modes on Hypergraphs for Robust Geometric Model Fitting

Feb 04, 2018
Hanzi Wang, Guobao Xiao, Yan Yan, David Suter

In this paper, we propose a simple and effective geometric model fitting method to fit and segment multi-structure data even in the presence of severe outliers. We cast the task of geometric model fitting as a representative mode-seeking problem on hypergraphs. Specifically, a hypergraph is firstly constructed, where the vertices represent model hypotheses and the hyperedges denote data points. The hypergraph involves higher-order similarities (instead of the pairwise similarities used on a simple graph), and it can characterize complex relationships between model hypotheses and data points. In addition, we develop a hypergraph reduction technique to remove "insignificant" vertices while retaining as many "significant" vertices as possible in the hypergraph. Based on the simplified hypergraph, we then propose a novel mode-seeking algorithm to search for representative modes within reasonable time. Finally, the proposed mode-seeking algorithm detects modes according to two key elements, i.e., the weighting scores of vertices and the similarity analysis between vertices. Overall, the proposed fitting method is able to efficiently and effectively estimate the number and the parameters of model instances in the data simultaneously. Experimental results demonstrate that the proposed method achieves significant superiority over several state-of-the-art model fitting methods on both synthetic data and real images.

* IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018 
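
The mode-seeking element can be illustrated on a toy weighted structure: each hypothesis shifts to its highest-scoring similar neighbour until it reaches a local maximum. A plain similarity matrix stands in here for the paper's hypergraph and its higher-order similarities:

```python
# Toy mode-seeking sketch over model hypotheses: each vertex has a
# weighting score and shifts to its highest-scoring similar neighbour
# until it reaches a local maximum (a mode). A plain similarity matrix
# stands in for the paper's hypergraph structure.
import numpy as np

def seek_modes(scores, sim, sim_thresh=0.5):
    modes = []
    for start in range(len(scores)):
        v = start
        while True:
            neigh = np.where(sim[v] > sim_thresh)[0]   # similar hypotheses
            best = neigh[np.argmax(scores[neigh])]
            if scores[best] <= scores[v]:              # local maximum reached
                break
            v = best
        modes.append(v)
    return sorted(set(modes))                          # distinct modes

rng = np.random.default_rng(2)
scores = rng.random(8)
sim = rng.random((8, 8)); sim = (sim + sim.T) / 2
np.fill_diagonal(sim, 1.0)                             # self-similarity
print("modes:", seek_modes(scores, sim))
```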

Superpixel-based Two-view Deterministic Fitting for Multiple-structure Data

Jul 20, 2016
Guobao Xiao, Hanzi Wang, Yan Yan, David Suter

This paper proposes a two-view deterministic geometric model fitting method, termed Superpixel-based Deterministic Fitting (SDF), for multiple-structure data. SDF starts from superpixel segmentation, which effectively captures prior information about feature appearances. This prior information helps reduce the computational complexity of deterministic fitting methods. SDF also includes two original elements, i.e., a deterministic sampling algorithm and a novel model selection algorithm. The two algorithms are tightly coupled to boost the performance of SDF in both speed and accuracy. Specifically, the proposed sampling algorithm leverages the grouping cues of superpixels to generate reliable and consistent hypotheses. The proposed model selection algorithm further makes use of desirable properties of the generated hypotheses to improve the conventional fit-and-remove framework for more efficient and effective performance. The key characteristic of SDF is that it can efficiently and deterministically estimate the parameters of model instances in multi-structure data. Experimental results demonstrate that the proposed SDF shows superiority over several state-of-the-art fitting methods for real images with single-structure and multiple-structure data.

* Accepted by European Conference on Computer Vision (ECCV) 

Mode-Seeking on Hypergraphs for Robust Geometric Model Fitting

Mar 25, 2016
Hanzi Wang, Guobao Xiao, Yan Yan, David Suter

In this paper, we propose a novel geometric model fitting method, called Mode-Seeking on Hypergraphs (MSH), to deal with multi-structure data even in the presence of severe outliers. The proposed method formulates geometric model fitting as a mode-seeking problem on a hypergraph in which vertices represent model hypotheses and hyperedges denote data points. MSH intuitively detects model instances by a simple and effective mode-seeking algorithm. In addition to the mode-seeking algorithm, MSH includes a similarity measure between vertices on the hypergraph and a weight-aware sampling technique. The proposed method not only alleviates sensitivity to the data distribution, but also is scalable to large-scale problems. Experimental results further demonstrate that the proposed method has significant superiority over the state-of-the-art fitting methods on both synthetic data and real images.

* Proceedings of the IEEE International Conference on Computer Vision, pp. 2902-2910, 2015 

End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking

Feb 14, 2020
Haosheng Chen, David Suter, Qiangqiang Wu, Hanzi Wang

Event cameras, which are asynchronous bio-inspired vision sensors, have shown great potential in computer vision and artificial intelligence. However, the application of event cameras to object-level motion estimation or tracking is still in its infancy. The main idea behind this work is to propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking. To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay (TSLTD) representation, which effectively encodes the spatio-temporal information of asynchronous retinal events into TSLTD frames with clear motion patterns. We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform end-to-end 5-DoF object motion regression. Our method is compared with state-of-the-art object tracking methods that are based on conventional cameras or event cameras. The experimental results show the superiority of our method in handling various challenging environments such as fast motion and low illumination conditions.

* 9 pages, 3 figures 
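
Constructing a TSLTD-style frame can be sketched as follows: each event writes a value that decays linearly with its age inside the accumulation window, so recent events appear brighter. The polarity handling and scaling below follow the general idea rather than the paper's exact formulation:

```python
# Sketch of a Time-Surface with Linear Time Decay (TSLTD) frame: events
# write linearly time-decayed values per pixel and polarity, so motion
# leaves a clear intensity gradient in the frame.
import numpy as np

def tsltd_frame(events, shape, t_start, t_end):
    """events: array of (x, y, t, polarity) rows with t in [t_start, t_end]."""
    frame = np.zeros(shape + (2,), dtype=np.float32)   # one channel per polarity
    for x, y, t, p in events:
        decay = (t - t_start) / (t_end - t_start)      # linear decay: old -> 0
        chan = 0 if p < 0 else 1
        frame[int(y), int(x), chan] = max(frame[int(y), int(x), chan], decay)
    return frame

events = np.array([[10, 5, 0.2, 1], [10, 5, 0.9, 1], [3, 7, 0.5, -1]])
print(tsltd_frame(events, (32, 32), 0.0, 1.0)[5, 10, 1])   # 0.9 (latest event wins)
```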

Deep Multi-task Multi-label CNN for Effective Facial Attribute Classification

Feb 10, 2020
Longbiao Mao, Yan Yan, Jing-Hao Xue, Hanzi Wang

Facial Attribute Classification (FAC) has attracted increasing attention in computer vision and pattern recognition. However, state-of-the-art FAC methods perform face detection/alignment and FAC independently. The inherent dependencies between these tasks are not fully exploited. In addition, most methods predict all facial attributes using the same CNN architecture, which ignores the different learning complexities of facial attributes. To address the above problems, we propose a novel deep multi-task multi-label CNN, termed DMM-CNN, for effective FAC. Specifically, DMM-CNN jointly optimizes two closely-related tasks (i.e., facial landmark detection and FAC) to improve the performance of FAC by taking advantage of multi-task learning. To deal with the diverse learning complexities of facial attributes, we divide the attributes into two groups: objective attributes and subjective attributes. Two different network architectures are respectively designed to extract features for two groups of attributes, and a novel dynamic weighting scheme is proposed to automatically assign the loss weight to each facial attribute during training. Furthermore, an adaptive thresholding strategy is developed to effectively alleviate the problem of class imbalance for multi-label learning. Experimental results on the challenging CelebA and LFWA datasets show the superiority of the proposed DMM-CNN method compared with several state-of-the-art FAC methods.
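
The adaptive thresholding idea can be sketched generically: instead of a fixed 0.5 cut-off, pick one threshold per attribute that maximizes a validation criterion. Balanced accuracy is used below as an assumed stand-in for the paper's concrete strategy:

```python
# Sketch of adaptive thresholding for imbalanced multi-label prediction:
# choose one decision threshold per attribute that maximizes balanced
# accuracy on validation data, instead of a fixed 0.5 cut-off.
import numpy as np

def adaptive_thresholds(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """probs, labels: (num_samples, num_attributes); labels in {0, 1}."""
    thresholds = []
    for a in range(probs.shape[1]):
        cands = np.linspace(0.05, 0.95, 19)
        def bal_acc(t):
            pred = probs[:, a] >= t
            pos, neg = labels[:, a] == 1, labels[:, a] == 0
            return (pred[pos].mean() + (~pred[neg]).mean()) / 2
        thresholds.append(cands[np.argmax([bal_acc(t) for t in cands])])
    return np.array(thresholds)

rng = np.random.default_rng(3)
labels = (rng.random((1000, 4)) < 0.1).astype(int)       # rare attributes
probs = np.clip(0.3 * labels + 0.2 * rng.random((1000, 4)), 0, 1)
print(adaptive_thresholds(probs, labels))                 # ~0.25 per attribute
```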


Hypergraph Modelling for Geometric Model Fitting

Jul 11, 2016
Guobao Xiao, Hanzi Wang, Taotao Lai, David Suter

In this paper, we propose a novel hypergraph based method (called HF) to fit and segment multi-structural data. The proposed HF formulates the geometric model fitting problem as a hypergraph partition problem based on a novel hypergraph model. In the hypergraph model, vertices represent data points and hyperedges denote model hypotheses. The hypergraph, with large and "data-determined" degrees of hyperedges, can express the complex relationships between model hypotheses and data points. In addition, we develop a robust hypergraph partition algorithm to detect sub-hypergraphs for model fitting. HF can effectively and efficiently estimate both the number and the parameters of model instances in multi-structural data heavily corrupted with outliers. Experimental results show the advantages of the proposed method over previous methods on both synthetic data and real images.

* Pattern Recognition, 2016 

Hypergraph Optimization for Multi-structural Geometric Model Fitting

Feb 13, 2020
Shuyuan Lin, Guobao Xiao, Yan Yan, David Suter, Hanzi Wang

Recently, some hypergraph-based methods have been proposed to deal with the problem of model fitting in computer vision, mainly due to the superior capability of hypergraphs to represent the complex relationships between data points. However, a hypergraph becomes extremely complicated when the input data include a large number of data points (usually contaminated with noise and outliers), which significantly increases the computational burden. To overcome the above problem, we propose a novel hypergraph optimization based model fitting (HOMF) method to construct a simple but effective hypergraph. Specifically, HOMF includes two main parts: an adaptive inlier estimation algorithm for vertex optimization and an iterative hyperedge optimization algorithm for hyperedge optimization. The proposed method is highly efficient, and it can obtain accurate model fitting results within a few iterations. Moreover, HOMF can directly apply spectral clustering to achieve good fitting performance. Extensive experimental results show that HOMF outperforms several state-of-the-art model fitting methods on both synthetic data and real images, especially in sampling efficiency and in handling data with severe outliers.


Joint Deep Learning of Facial Expression Synthesis and Recognition

Feb 06, 2020
Yan Yan, Ying Huang, Si Chen, Chunhua Shen, Hanzi Wang

Recently, deep learning based facial expression recognition (FER) methods have attracted considerable attention and they usually require large-scale labelled training data. Nonetheless, the publicly available facial expression databases typically contain a small amount of labelled data. In this paper, to overcome the above issue, we propose a novel joint deep learning of facial expression synthesis and recognition method for effective FER. More specifically, the proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions. To increase the diversity of the training images, FESGAN is elaborately designed to generate images with new identities from a prior distribution. Secondly, an expression recognition network is jointly learned with the pre-trained FESGAN in a unified framework. In particular, the classification loss computed from the recognition network is used to simultaneously optimize the performance of both the recognition network and the generator of FESGAN. Moreover, in order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm to reduce the intra-class variations of images from the same class, which can significantly improve the final performance. Extensive experimental results on public facial expression databases demonstrate the superiority of the proposed method compared with several state-of-the-art FER methods.


DSNet: Deep and Shallow Feature Learning for Efficient Visual Tracking

Nov 06, 2018
Qiangqiang Wu, Yan Yan, Yanjie Liang, Yi Liu, Hanzi Wang

In recent years, Discriminative Correlation Filter (DCF) based tracking methods have achieved great success in visual tracking. However, multi-resolution convolutional feature maps trained on other tasks, such as image classification, cannot be naturally used in the conventional DCF formulation. Furthermore, these high-dimensional feature maps significantly increase the tracking complexity and thus limit the tracking speed. In this paper, we present a deep and shallow feature learning network, namely DSNet, to learn the multi-level same-resolution compressed (MSC) features for efficient online tracking, in an end-to-end offline manner. Specifically, the proposed DSNet compresses multi-level convolutional features to uniform spatial resolution features. The learned MSC features effectively encode both appearance and semantic information of objects in the same-resolution feature maps, thus enabling an elegant combination of the MSC features with any DCF-based methods. Additionally, a channel reliability measurement (CRM) method is presented to further refine the learned MSC features. We demonstrate the effectiveness of the MSC features learned from the proposed DSNet on two DCF tracking frameworks: the basic DCF framework and the continuous convolution operator framework. Extensive experiments show that the learned MSC features have the appealing advantage of allowing the equipped DCF-based tracking methods to perform favorably against the state-of-the-art methods while running at high frame rates.

* To appear at ACCV 2018. 14 pages, 8 figures 
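
Compressing multi-level features to a uniform resolution can be sketched schematically in PyTorch: 1x1 convolutions reduce channels, bilinear resizing aligns resolutions, and the results are concatenated so a DCF-style tracker can consume a single feature map. The layer choices below are illustrative, not DSNet's trained architecture:

```python
# Schematic sketch of same-resolution feature compression: per-level 1x1
# convolutions reduce channels, bilinear resizing aligns resolutions, and
# the results are concatenated into one uniform feature map.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MSCCompressor(nn.Module):
    def __init__(self, in_channels=(64, 256, 512), out_channels=32,
                 out_size=(31, 31)):
        super().__init__()
        self.reduce = nn.ModuleList(nn.Conv2d(c, out_channels, 1)
                                    for c in in_channels)
        self.out_size = out_size

    def forward(self, feature_maps):
        compressed = [F.interpolate(conv(f), size=self.out_size,
                                    mode="bilinear", align_corners=False)
                      for conv, f in zip(self.reduce, feature_maps)]
        return torch.cat(compressed, dim=1)    # same-resolution features

feats = [torch.randn(1, 64, 61, 61), torch.randn(1, 256, 31, 31),
         torch.randn(1, 512, 15, 15)]
print(MSCCompressor()(feats).shape)            # (1, 96, 31, 31)
```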

Multi-label Learning Based Deep Transfer Neural Network for Facial Attribute Classification

May 03, 2018
Ni Zhuang, Yan Yan, Si Chen, Hanzi Wang, Chunhua Shen

Deep Neural Network (DNN) has recently achieved outstanding performance in a variety of computer vision tasks, including facial attribute classification. The great success of classifying facial attributes with DNN often relies on a massive amount of labelled data. However, in real-world applications, labelled data are only provided for some commonly used attributes (such as age and gender), whereas unlabelled data are available for other attributes (such as attraction and hairline). To address the above problem, we propose a novel deep transfer neural network method based on multi-label learning for facial attribute classification, termed FMTNet, which consists of three sub-networks: the Face detection Network (FNet), the Multi-label learning Network (MNet) and the Transfer learning Network (TNet). Firstly, based on the Faster Region-based Convolutional Neural Network (Faster R-CNN), FNet is fine-tuned for face detection. Then, MNet is fine-tuned by FNet to predict multiple attributes with labelled data, where an effective loss weight scheme is developed to explicitly exploit the correlation between facial attributes based on attribute grouping. Finally, based on MNet, TNet is trained by taking advantage of unsupervised domain adaptation for unlabelled facial attribute classification. The three sub-networks are tightly coupled to perform effective facial attribute classification. A distinguishing characteristic of the proposed FMTNet method is that the three sub-networks (FNet, MNet and TNet) are constructed in a similar network structure. Extensive experimental results on challenging face datasets demonstrate the effectiveness of our proposed method compared with several state-of-the-art methods.


A Fast Face Detection Method via Convolutional Neural Network

Mar 27, 2018
Guanjun Guo, Hanzi Wang, Yan Yan, Jin Zheng, Bo Li

Current face or object detection methods based on convolutional neural networks (such as OverFeat, R-CNN and DenseNet) explicitly extract multi-scale features from an image pyramid. However, such a strategy increases the computational burden of face detection. In this paper, we propose a fast face detection method based on discriminative complete features (DCFs) extracted by an elaborately designed convolutional neural network, where face detection is directly performed on the complete feature maps. DCFs exhibit scale invariance, which is beneficial for face detection with high speed and promising performance. Therefore, extracting multi-scale features on an image pyramid, as employed in conventional methods, is not required in the proposed method, which greatly improves its efficiency for face detection. Experimental results on several popular face detection datasets show the efficiency and the effectiveness of the proposed method for face detection.

* 30 pages, 11 figures. To appear in Neurocomputing 
