Models, code, and papers for "Wen Zhang":

Manifold Embedded Knowledge Transfer for Brain-Computer Interfaces

Oct 14, 2019
Wen Zhang, Dongrui Wu

Transfer learning makes use of data or knowledge in one problem to help solve a different, yet related, problem. It is particularly useful in brain-computer interfaces (BCIs), for coping with variations among different subjects and/or tasks. This paper considers offline unsupervised cross-subject electroencephalogram (EEG) classification, i.e., we have labeled EEG trials from one or more source subjects, but only unlabeled EEG trials from the target subject. We propose a novel manifold embedded knowledge transfer (MEKT) approach, which first aligns the covariance matrices of the EEG trials in the Riemannian manifold, extracts features in the tangent space, and then performs domain adaptation by minimizing the joint probability distribution shift between the source and the target domains, while preserving their geometric structures. MEKT can cope with one or multiple source domains, and can be computed efficiently. We also propose a domain transferability estimation (DTE) approach to identify the most beneficial source domains, in case there are a large number of source domains. Experiments on four EEG datasets from two different BCI paradigms demonstrated that MEKT outperformed several state-of-the-art transfer learning approaches, and DTE can reduce more than half of the computational cost when the number of source subjects is large, with little sacrifice of classification accuracy.
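
The align-then-extract steps described above can be sketched in a few lines (an illustrative simplification, not the authors' code: here the reference is the arithmetic mean of the covariances, as in Euclidean alignment, whereas MEKT works with Riemannian geometry):

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm

def center_and_tangent(covs):
    """Align EEG trial covariance matrices to a common reference and
    extract tangent-space features."""
    R = covs.mean(axis=0)                            # reference covariance
    R_isqrt = fractional_matrix_power(R, -0.5)       # R^{-1/2}
    aligned = np.array([R_isqrt @ C @ R_isqrt for C in covs])
    # tangent-space features: vectorized matrix logarithm (upper triangle)
    iu = np.triu_indices(covs.shape[1])
    feats = np.array([np.real(logm(C))[iu] for C in aligned])
    return aligned, feats
```

With the arithmetic-mean reference, the aligned covariances average to the identity, so source and target trials are recentred to a common origin before domain adaptation.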

Geometric Brain Surface Network For Brain Cortical Parcellation

Sep 13, 2019
Wen Zhang, Yalin Wang

A large number of surface-based analyses on brain imaging data adopt some specific brain atlas to better assess structural and functional changes in one or more brain regions. In these analyses, it is necessary to obtain an anatomically correct surface parcellation scheme in an individual brain by referring to the given atlas. Traditional ways to accomplish this goal are through a designed surface-based registration or hand-crafted surface features, but both are time-consuming. A recent deep learning approach depends on a regular spherical parameterization of the mesh, which is computationally prohibitive in some cases and may also demand further post-processing to refine the network output. Therefore, an accurate and fully automatic cortical surface parcellation scheme working directly on the original brain surfaces would be highly advantageous. In this study, we propose an end-to-end deep brain cortical parcellation network, called DBPN. Through intrinsic and extrinsic graph convolution kernels, DBPN dynamically deciphers the neighborhood graph topology around each vertex and encodes the deciphered knowledge into node features. Eventually, a non-linear mapping between the node features and parcellation labels is constructed. Our model is a two-stage deep network that contains a coarse parcellation network with a U-shape structure and a refinement network to fine-tune the coarse results. We evaluate our model on a large public dataset, and it achieves superior performance to state-of-the-art baseline methods in both accuracy and efficiency.

* GLMI in Conjunction with MICCAI 2019 
* 8 pages 

Effective and Extensible Feature Extraction Method Using Genetic Algorithm-Based Frequency-Domain Feature Search for Epileptic EEG Multi-classification

Jan 22, 2017
Tingxi Wen, Zhongnan Zhang

In this paper, a genetic algorithm-based frequency-domain feature search (GAFDS) method is proposed for the electroencephalogram (EEG) analysis of epilepsy. In this method, frequency-domain features are first searched and then combined with nonlinear features. Subsequently, these features are selected and optimized to classify EEG signals. The extracted features are analyzed experimentally. The features extracted by GAFDS show remarkable independence, and they are superior to the nonlinear features in terms of the ratio of inter-class distance to intra-class distance. Moreover, the proposed feature search method can additionally search for features of instantaneous frequency in a signal after the Hilbert transform. The classification results achieved using these features are reasonable; thus, GAFDS exhibits good extensibility. Multiple classic classifiers (i.e., $k$-nearest neighbor, linear discriminant analysis, decision tree, AdaBoost, multilayer perceptron, and Naïve Bayes) achieve good results using the features generated by the GAFDS method and the optimized selection. Specifically, the accuracies for the two-class and three-class problems reach up to 99% and 97%, respectively. Results of several cross-validation experiments illustrate that GAFDS is effective in feature extraction for EEG classification. Therefore, the proposed feature selection and optimization model can improve classification accuracy.
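
A toy version of the frequency-band search can illustrate the idea; the gene encoding (band start and width), the FFT band-power feature, and the GA parameters below are hypothetical stand-ins for the paper's setup, with a Fisher-style ratio of inter-class to intra-class spread as the fitness:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Signal power in the band [lo, hi) Hz via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    return psd[(freqs >= lo) & (freqs < hi)].sum()

def fisher_ratio(feats, labels):
    """Ratio of inter-class to intra-class spread for a 1-D feature."""
    a, b = feats[labels == 0], feats[labels == 1]
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-12)

def ga_band_search(signals, labels, fs, n_gen=30, pop=20, seed=0):
    """Tiny GA over (band start, band width) genes maximizing the ratio."""
    rng = np.random.default_rng(seed)
    lo_bounds = np.array([1.0, 1.0])
    hi_bounds = np.array([fs / 2 - 10, 10.0])
    genes = rng.uniform(lo_bounds, hi_bounds, size=(pop, 2))
    def fitness(g):
        f = np.array([band_power(s, fs, g[0], g[0] + g[1]) for s in signals])
        return fisher_ratio(f, labels)
    for _ in range(n_gen):
        scores = np.array([fitness(g) for g in genes])
        parents = genes[np.argsort(scores)[::-1][: pop // 2]]     # selection
        children = parents + rng.normal(0.0, 0.5, parents.shape)  # mutation
        genes = np.vstack([parents, np.clip(children, lo_bounds, hi_bounds)])
    scores = np.array([fitness(g) for g in genes])
    return genes[np.argmax(scores)], scores.max()
```

Keeping the top half of the population each generation makes the best fitness non-decreasing across generations, a simple elitism choice.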

* 17 pages, 9 figures 

Crowdsourcing Data Acquisition via Social Networks

May 14, 2019
Wen Zhang, Yao Zhang, Dengji Zhao

We consider a requester who acquires a set of data (e.g., images) that is not owned by any single party. To collect all the data, crowdsourcing mechanisms have been widely used to seek help from the crowd. However, existing mechanisms rely on third-party platforms, the workers from these platforms are not necessarily helpful, and redundant data are not properly handled. To combat this problem, we propose a novel crowdsourcing mechanism based on social networks, where the rewards of the workers are calculated by information entropy and a modified Shapley value. This mechanism incentivizes the workers in the network not only to provide all the data they have, but also to further invite their neighbours to offer more data. Eventually, the mechanism is able to acquire all data from all workers on the network with constrained reward spending.
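
The reward computation can be illustrated with an exact Shapley value over a toy coalition game whose value is an entropy-style function of the distinct data items contributed (the `entropy_value` function is a hypothetical stand-in for the paper's information-entropy reward):

```python
import itertools
import math

def shapley(players, value):
    """Exact Shapley value by enumerating permutations (fine for small n)."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in itertools.permutations(players):
        coalition = set()
        for p in order:
            before = value(coalition)
            coalition.add(p)
            phi[p] += value(coalition) - before   # marginal contribution
    for p in phi:
        phi[p] /= math.factorial(n)
    return phi

def entropy_value(datasets):
    """Coalition value: log2 of (distinct items contributed + 1)."""
    def v(coalition):
        items = set().union(*(datasets[p] for p in coalition)) if coalition else set()
        return math.log2(len(items) + 1)
    return v
```

By construction the Shapley value is efficient (rewards sum to the value of the full coalition) and symmetric (workers contributing identical data are rewarded equally).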

Redistribution Mechanism Design on Networks

Oct 21, 2019
Wen Zhang, Dengji Zhao, Hanyu Chen

Redistribution mechanisms have been proposed for more efficient resource allocation but not for profit. We consider redistribution mechanism design for the first time in a setting where participants are connected and the resource owner is only aware of her neighbours. In this setting, to make the resource allocation more efficient, the resource owner has to inform the others who are not her neighbours, but her neighbours do not want more participants to compete with them. Hence, the goal is to design a redistribution mechanism such that participants are incentivized to invite more participants and the resource owner does not earn or lose much money from the allocation. We first show that existing redistribution mechanisms cannot be directly applied in the network setting to achieve the goal. Then we propose a novel network-based redistribution mechanism such that all participants in the network are invited, the allocation is more efficient and the resource owner has no deficit.
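
For context, a minimal sketch of a classic non-network redistribution mechanism, the Bailey-Cavallo rebate for a single-item second-price auction, which is the kind of existing mechanism that, as noted above, cannot be directly applied in the network setting:

```python
import numpy as np

def vcg_single_item(bids):
    """Second-price (VCG) auction: highest bidder wins, pays second price."""
    order = np.argsort(bids)[::-1]
    return int(order[0]), float(bids[order[1]])

def bailey_cavallo_rebates(bids):
    """Each agent receives 1/n of the second-highest bid among the others;
    a rebate cannot depend on the agent's own bid."""
    n = len(bids)
    rebates = []
    for i in range(n):
        others = sorted((b for j, b in enumerate(bids) if j != i), reverse=True)
        rebates.append(others[1] / n if len(others) >= 2 else 0.0)
    return rebates
```

The total rebate never exceeds the VCG revenue, so the owner runs no deficit; extending such a guarantee while also incentivizing invitations is the new difficulty in the network setting.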

Neural Learning of Online Consumer Credit Risk

Jun 05, 2019
Di Wang, Qi Wu, Wen Zhang

This paper takes a deep learning approach to understanding consumer credit risk when e-commerce platforms issue unsecured credit to finance customers' purchases. The "NeuCredit" model can capture serial dependences in multi-dimensional time series data even when event frequencies differ across dimensions. It also captures nonlinear cross-sectional interactions among different time-evolving features. Moreover, the predicted default probability is designed to be interpretable, such that risk can be decomposed into three components: the subjective risk indicating the consumers' willingness to repay, the objective risk indicating their ability to repay, and the behavioral risk indicating consumers' behavioral differences. Using a unique dataset from one of the largest global e-commerce platforms, we show that including shopping behavioral data, beyond conventional payment records, requires a deep learning approach to extract their information content, and that doing so significantly enhances forecasting performance over traditional machine learning methods.

* 49 pages, 11 tables, 7 figures 

Fixed-price Diffusion Mechanism Design

May 14, 2019
Tianyi Zhang, Dengji Zhao, Wen Zhang, Xuming He

We consider a fixed-price mechanism design setting where a seller sells one item via a social network, but initially the seller can only communicate directly with her neighbours. Every other node in the network is a potential buyer with a valuation drawn from a common distribution. With a standard fixed-price mechanism, the seller can only sell the item among her neighbours. To improve her revenue, she needs more buyers to join the sale. To achieve this, we propose the first fixed-price mechanism that incentivizes the seller's neighbours to inform their neighbours about the sale and eventually inform all buyers in the network, improving the seller's revenue. Compared with existing mechanisms for the same purpose, our mechanism does not require the buyers to reveal their valuations and is computationally easy. More importantly, it guarantees that the improved revenue is at least 1/2 of the optimal.
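
A hypothetical toy of a fixed-price offer diffusing through a network (not the paper's mechanism, which additionally handles invitation incentives and the revenue guarantee) might look like:

```python
from collections import deque

def diffusion_fixed_price(network, seller, price, valuations):
    """Breadth-first diffusion of a fixed-price offer from the seller's
    neighbours; the first reached node whose valuation meets the price buys.
    Hypothetical simplification: real mechanisms must reward inviters."""
    seen = {seller}
    frontier = deque(network.get(seller, []))
    while frontier:
        node = frontier.popleft()
        if node in seen:
            continue
        seen.add(node)
        if valuations.get(node, 0.0) >= price:
            return node, price                      # sold at the fixed price
        frontier.extend(network.get(node, []))      # node passes the offer on
    return None, 0.0                                # no willing buyer reached
```
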

Regularized Wasserstein Means Based on Variational Transportation

Dec 02, 2018
Liang Mi, Wen Zhang, Yalin Wang

We raise the problem of regularizing Wasserstein means and propose several terms tailored to tackle different problems. Our formulation is based on variational transportation to distribute a sparse discrete measure into the target domain without mass splitting. The resulting sparse representation well captures the desired property of the domain while maintaining a small reconstruction error. We demonstrate the scalability and robustness of our method with examples of domain adaptation and skeleton layout.

* Comments are welcome 

ISIC 2018: A Method for Lesion Segmentation

Jul 21, 2018
Hongdiao Wen, Rongjian Xu, Tie Zhang

Our team participated in the challenge of Task 1: Lesion Boundary Segmentation, using a combined network: one part, named updcnn net, is designed by ourselves, and the other is an improved VGG 16-layer net. Updcnn net uses reduced-size images for training, while the VGG 16-layer net utilizes large images. Image enhancement is used to obtain a richer data set. In the VGG 16-layer net, we use boxes for local attention regularization to fine-tune the loss function, which increases the amount of training data and also makes the model more robust. At test time, the two models are used jointly and achieve good results.

Group-based Sparse Representation for Image Restoration

May 14, 2014
Jian Zhang, Debin Zhao, Wen Gao

Traditional patch-based sparse representation modeling of natural images usually suffers from two problems. First, it has to solve a large-scale optimization problem with high computational complexity for dictionary learning. Second, each patch is considered independently in dictionary learning and sparse coding, which ignores the relationships among patches, resulting in inaccurate sparse coding coefficients. In this paper, instead of using the patch as the basic unit of sparse representation, we exploit the concept of a group, composed of nonlocal patches with similar structures, as the basic unit, and establish a novel sparse representation model of natural images, called group-based sparse representation (GSR). The proposed GSR is able to sparsely represent natural images in the group domain, which enforces the intrinsic local sparsity and nonlocal self-similarity of images simultaneously in a unified framework. Moreover, an effective self-adaptive dictionary learning method with low complexity is designed for each group, rather than learning a dictionary from natural images. To make GSR tractable and robust, a split Bregman based technique is developed to solve the proposed GSR-driven minimization problem for image restoration efficiently. Extensive experiments on image inpainting, image deblurring, and image compressive sensing recovery demonstrate that the proposed GSR model outperforms many current state-of-the-art schemes in both PSNR and visual perception.
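
The group construction and self-adaptive dictionary can be sketched as follows; using the SVD of the patch group as the adaptive dictionary and hard-thresholding the singular values is a simplification assumed here for illustration:

```python
import numpy as np

def group_sparse_restore(patches, ref_idx, k=8, thresh=0.1):
    """Group-based sparse representation sketch: gather the k patches most
    similar to a reference patch (nonlocal self-similarity), take the SVD of
    the group as a self-adaptive dictionary, and sparsify by hard-thresholding
    the singular values."""
    dists = np.linalg.norm(patches - patches[ref_idx], axis=1)
    idx = np.argsort(dists)[:k]                    # the group: similar patches
    group = patches[idx]
    U, s, Vt = np.linalg.svd(group, full_matrices=False)
    s_sparse = np.where(s > thresh, s, 0.0)        # hard-thresholded coefficients
    return U @ np.diag(s_sparse) @ Vt, idx
```

Because the dictionary is fit to each group of similar patches, a few singular vectors suffice, which is the sense in which the group admits a sparser representation than independent patches.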

* 34 pages, 6 tables, 19 figures, to be published in IEEE Transactions on Image Processing; Project, Code and High resolution PDF version can be found: arXiv admin note: text overlap with arXiv:1404.7566 

SWAF: Swarm Algorithm Framework for Numerical Optimization

May 25, 2005
Xiao-Feng Xie, Wen-Jun Zhang

A swarm algorithm framework (SWAF), realized by agent-based modeling, is presented to solve numerical optimization problems. Each agent is a bare-bones cognitive architecture that learns knowledge by appropriately deploying a set of simple rules in fast and frugal heuristics. Two essential categories of rules, generate-and-test and problem-formulation rules, are implemented, and macro rules formed by simple combination and subsymbolic deployment of multiple rules are also studied. Experimental results on benchmark problems are presented, and a performance comparison between SWAF and other existing algorithms indicates that it is efficient.

* Genetic and Evolutionary Computation Conference (GECCO), Part I, 2004: 238-250 (LNCS 3102) 

Meta Relational Learning for Few-Shot Link Prediction in Knowledge Graphs

Sep 04, 2019
Mingyang Chen, Wen Zhang, Wei Zhang, Qiang Chen, Huajun Chen

Link prediction is an important way to complete knowledge graphs (KGs). Embedding-based methods, while effective for link prediction in KGs, perform poorly on relations that have only a few associative triples. In this work, we propose a Meta Relational Learning (MetaR) framework for the common but challenging few-shot link prediction task in KGs, namely predicting new triples about a relation after observing only a few associative triples. We solve few-shot link prediction by focusing on transferring relation-specific meta information to make the model learn the most important knowledge and learn faster, corresponding respectively to relation meta and gradient meta in MetaR. Empirically, our model achieves state-of-the-art results on few-shot link prediction KG benchmarks.
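
The two kinds of meta information can be sketched with a TransE-style score on hypothetical embedding inputs (relation meta as an initial relation vector from the support pairs, gradient meta as one gradient step on the support loss); the elementwise-median initialization and the learning rate are illustrative assumptions, not the paper's learned networks:

```python
import numpy as np

def metar_step(support_h, support_t, query_h, query_t, lr=0.5):
    """Relation meta: an initial relation embedding from the support pairs.
    Gradient meta: one gradient step on the support loss before scoring."""
    r = np.median(support_t - support_h, axis=0)        # relation meta
    # support loss: mean ||h + r - t||^2 ; gradient w.r.t. r
    grad = 2.0 * (support_h + r - support_t).mean(axis=0)
    r_prime = r - lr * grad                             # gradient meta update
    # TransE-style score: closer to 0 means more plausible
    score = -np.linalg.norm(query_h + r_prime - query_t, axis=1)
    return r_prime, score
```
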

* Accepted by EMNLP 2019 

End-to-end Hand Mesh Recovery from a Monocular RGB Image

Mar 09, 2019
Xiong Zhang, Qiang Li, Wenbo Zhang, Wen Zheng

In this paper, we present a HAnd Mesh Recovery (HAMR) framework to tackle the problem of reconstructing the full 3D mesh of a human hand from a single RGB image. In contrast to existing research on 2D or 3D hand pose estimation from RGB and/or depth image data, HAMR can provide a more expressive and useful mesh representation for monocular hand image understanding. In particular, the mesh representation is achieved by parameterizing a generic 3D hand model with shape and relative 3D joint angles. By utilizing this mesh representation, we can easily compute the 3D joint locations via linear interpolation between the vertices of the mesh, while obtaining the 2D joint locations with a projection of the 3D joints. To this end, a differentiable re-projection loss can be defined in terms of the derived representations and the ground-truth labels, thus making our framework end-to-end trainable. Qualitative experiments show that our framework is capable of recovering appealing 3D hand meshes even in the presence of severe occlusions. Quantitatively, our approach also outperforms state-of-the-art methods for both 2D and 3D hand pose estimation from a monocular RGB image on several benchmark datasets.
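
The joint-recovery step described above, 3D joints as linear interpolations of mesh vertices followed by a projection to 2D, can be sketched directly (the vertex weights and the pinhole model below are illustrative assumptions):

```python
import numpy as np

def joints_from_mesh(vertices, weights):
    """3D joint locations as linear interpolations of mesh vertices:
    each row of `weights` holds one joint's interpolation coefficients."""
    return weights @ vertices                 # (J, V) @ (V, 3) -> (J, 3)

def project(points3d, focal=1.0):
    """Pinhole projection of 3D joints to 2D image coordinates."""
    return focal * points3d[:, :2] / points3d[:, 2:3]
```

Both maps are linear (or smooth) in their inputs, which is why a re-projection loss through them stays differentiable end to end.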

* conference 10 pages 

A Gated Peripheral-Foveal Convolutional Neural Network for Unified Image Aesthetic Prediction

Dec 19, 2018
Xiaodan Zhang, Xinbo Gao, Wen Lu, Lihuo He

Learning fine-grained details is a key issue in image aesthetic assessment. Most previous methods extract fine-grained details via a random cropping strategy, which may undermine the integrity of semantic information. Extensive studies show that humans perceive fine-grained details with a mixture of foveal vision and peripheral vision. The fovea has the highest possible visual acuity and is responsible for seeing details. Peripheral vision is used for perceiving the broad spatial scene and selecting the attended regions for the fovea. Inspired by these observations, we propose a Gated Peripheral-Foveal Convolutional Neural Network (GPF-CNN). It is a dedicated double-subnet neural network, i.e., a peripheral subnet and a foveal subnet. The former aims to mimic the functions of peripheral vision to encode the holistic information and provide the attended regions. The latter aims to extract fine-grained features on these key regions. Considering that peripheral vision and foveal vision play different roles in processing different visual stimuli, we further employ a gated information fusion (GIF) network to weight their contributions. The weights are determined through fully connected layers followed by a sigmoid function. We conduct comprehensive experiments on standard benchmark datasets, including AVA, for unified aesthetic prediction tasks: (i) aesthetic quality classification; (ii) aesthetic score regression; and (iii) aesthetic score distribution prediction. The experimental results demonstrate the effectiveness of the proposed method.
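
The gated information fusion step can be sketched as follows; the feature shapes and the convex-combination form of the gate are assumptions for illustration:

```python
import numpy as np

def gated_fusion(f_peripheral, f_foveal, W, b):
    """Gated information fusion: a fully connected layer plus sigmoid
    produces per-dimension weights for the two subnets' features."""
    z = W @ np.concatenate([f_peripheral, f_foveal]) + b
    g = 1.0 / (1.0 + np.exp(-z))                  # sigmoid gate in (0, 1)
    return g * f_peripheral + (1.0 - g) * f_foveal
```

Because the gate lies in (0, 1), each fused feature is an elementwise convex combination of the peripheral and foveal features, so neither subnet can be fully switched off.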

* 11 pages 

Performance Analysis of NDT-based Graph SLAM for Autonomous Vehicle in Diverse Typical Driving Scenarios of Hong Kong

Nov 01, 2018
Weisong Wen, Li-Ta Hsu, Guohao Zhang

Robust, lane-level positioning is essential for autonomous vehicles. As an irreplaceable sensor, LiDAR can provide continuous and high-frequency pose estimation by means of mapping, on condition that enough environment features are available. The mapping error can accumulate over time, so LiDAR is usually integrated with other sensors. In diverse urban scenarios, environment feature availability relies heavily on the traffic (moving and static objects) and the degree of urbanization. Common LiDAR-based SLAM demonstrations tend to be studied in light traffic and less urbanized areas. However, their performance can be severely challenged in deeply urbanized cities, such as Hong Kong, Tokyo, and New York, with dense traffic and tall buildings. This paper analyzes the performance of standalone NDT-based graph SLAM and its reliability estimation in diverse urban scenarios to evaluate the relationship between the performance of LiDAR-based SLAM and scenario conditions. The normal distribution transform (NDT) is employed to calculate the transformation between frames of point clouds. Then, LiDAR odometry is performed based on the calculated continuous transformations. State-of-the-art graph-based optimization is used to integrate the LiDAR odometry measurements. 3D building models are generated, and a definition of the degree of urbanization based on skyplots is proposed. Experiments are conducted in scenarios with different degrees of urbanization and traffic conditions. The results show that the performance of LiDAR-based SLAM using NDT is strongly related to the traffic conditions and the degree of urbanization.

* 24 pages, 19 figures 

Structured Local Optima in Sparse Blind Deconvolution

Jun 01, 2018
Yuqian Zhang, Han-Wen Kuo, John Wright

Blind deconvolution is the ubiquitous problem of recovering two unknown signals from their convolution. Unfortunately, it is ill-posed in general. This paper focuses on the short-and-sparse blind deconvolution problem, where one unknown signal is short and the other is sparsely and randomly supported. This variant captures the structure of the unknown signals in several important applications. We assume the short signal has unit $\ell^2$ norm and cast the blind deconvolution problem as a nonconvex optimization problem over the sphere. We demonstrate that (i) in a certain region of the sphere, every local optimum is close to some shift truncation of the ground truth, and (ii) for a generic short signal of length $k$, when the sparsity of the activation signal satisfies $\theta \lesssim k^{-2/3}$ and the number of measurements satisfies $m \gtrsim \mathrm{poly}(k)$, a simple initialization method together with a descent algorithm that escapes strict saddle points recovers a near shift truncation of the ground truth kernel.
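
The optimization-over-the-sphere ingredient can be illustrated with a single Riemannian gradient step (a generic sketch, not the paper's full descent algorithm with saddle-point escapes):

```python
import numpy as np

def sphere_step(q, grad, step=0.1):
    """One Riemannian gradient step on the unit sphere: project the gradient
    onto the tangent space at q, step, and retract by normalization."""
    tangent_grad = grad - (grad @ q) * q          # remove the radial component
    q_new = q - step * tangent_grad
    return q_new / np.linalg.norm(q_new)          # retract back to the sphere
```

The retraction keeps iterates exactly on the sphere, so the unit-norm constraint on the short signal is maintained by construction rather than by a penalty.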

* 66 pages, 6 figures 

Exclusion of GNSS NLOS Receptions Caused by Dynamic Objects in Heavy Traffic Urban Scenarios Using Real-Time 3D Point Cloud: An Approach without 3D Maps

Apr 29, 2018
Weisong Wen, Guohao Zhang, Li-Ta Hsu

Absolute positioning is an essential factor for the arrival of autonomous driving, and Global Navigation Satellite System (GNSS) receivers provide absolute localization for it. GNSS solutions can provide satisfactory positioning in open or sub-urban areas; however, their performance suffers in super-urbanized areas due to the well-known phenomena of multipath effects and NLOS receptions, which dominate GNSS positioning performance there. The recently proposed 3D map aided (3DMA) GNSS can mitigate most of the multipath effects and NLOS receptions caused by buildings based on 3D city models. However, the same phenomenon caused by moving objects in urban areas is currently not modelled in the 3D geographic information system (GIS). Tall moving objects, such as double-decker buses, can also cause NLOS receptions by blocking GNSS signals with their surfaces. Therefore, we present a novel method to exclude the NLOS receptions caused by double-decker buses in the highly urbanized area of Hong Kong. To estimate their geometric dimensions and orientation relative to the GPS receiver, a Euclidean cluster algorithm and a classification method are used to detect the double-decker buses and calculate their relative locations. To increase the accuracy and reliability of the proposed NLOS exclusion method, an NLOS exclusion criterion is proposed to exclude the blocked satellites considering the elevation, signal-to-noise ratio (SNR), and horizontal dilution of precision (HDOP). Finally, the GNSS position is estimated by the weighted least squares (WLS) method using the satellites remaining after the NLOS exclusion. A static experiment was performed near a double-decker bus stop in Hong Kong, which verified the effectiveness of the proposed method.
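
The last two steps, the exclusion criterion and weighted least squares positioning, can be sketched as follows; the threshold values and the simplified keep/drop rule are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np

def weighted_least_squares(A, b, w):
    """Solve min ||W^(1/2)(A x - b)||^2, i.e. x = (A^T W A)^{-1} A^T W b."""
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

def exclude_nlos(satellites, mask_elevation, snr_min=30.0):
    """Keep a satellite if it clears the dynamic-object elevation mask or
    its SNR is high enough to be trusted (illustrative keep/drop rule)."""
    return [s for s in satellites
            if s['elev'] > mask_elevation or s['snr'] >= snr_min]
```

In practice the weights would come from satellite elevation and SNR, so cleaner signals pull the position estimate harder than marginal ones.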

* 8 pages, accepted by the IEEE/ION PLANS 2018 

Image Compressive Sensing Recovery Using Adaptively Learned Sparsifying Basis via L0 Minimization

Apr 30, 2014
Jian Zhang, Chen Zhao, Debin Zhao, Wen Gao

Compressive sensing (CS) theory demonstrates that, from many fewer acquired measurements than suggested by Nyquist sampling theory, a signal can be reconstructed with high probability when it exhibits sparsity in some domain. Most conventional CS recovery approaches, however, exploit a set of fixed bases (e.g., DCT, wavelet, and gradient domains) for the entirety of a signal, irrespective of the non-stationarity of natural signals, and cannot achieve a high enough degree of sparsity, resulting in poor CS recovery performance. In this paper, we propose a new framework for image compressive sensing recovery using an adaptively learned sparsifying basis via L0 minimization. The intrinsic sparsity of natural images is enforced substantially by sparsely representing overlapped image patches using the adaptively learned sparsifying basis in the form of the L0 norm, greatly reducing blocking artifacts and confining the CS solution space. To make the proposed scheme tractable and robust, a split Bregman iteration based technique is developed to solve the non-convex L0 minimization problem efficiently. Experimental results on a wide range of natural images have shown that our proposed algorithm achieves significant performance improvements over many current state-of-the-art schemes and exhibits good convergence properties.
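
A standard solver for the L0-constrained subproblem is iterative hard thresholding, shown here as a generic stand-in for the paper's split Bregman technique:

```python
import numpy as np

def iht(A, y, s, iters=100):
    """Iterative hard thresholding for min ||y - A x||^2 s.t. ||x||_0 <= s:
    a gradient step followed by keeping the s largest-magnitude entries."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * A.T @ (y - A @ x)          # gradient step
        keep = np.argsort(np.abs(x))[-s:]         # indices of s largest entries
        mask = np.zeros_like(x)
        mask[keep] = 1.0
        x = x * mask                              # projection onto the L0 ball
    return x
```

The hard-thresholding projection is what makes the problem non-convex: unlike L1 soft-thresholding, it enforces an exact sparsity level at every iteration.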

* 31 pages, 4 tables, 12 figures, to be published at Signal Processing, Code available: 

A Stochastic Extra-Step Quasi-Newton Method for Nonsmooth Nonconvex Optimization

Oct 21, 2019
Minghan Yang, Andre Milzarek, Zaiwen Wen, Tong Zhang

In this paper, a novel stochastic extra-step quasi-Newton method is developed to solve a class of nonsmooth nonconvex composite optimization problems. We assume that the gradient of the smooth part of the objective function can only be approximated by stochastic oracles. The proposed method combines general stochastic higher-order steps derived from an underlying proximal-type fixed-point equation with additional stochastic proximal gradient steps to guarantee convergence. Based on suitable bounds on the step sizes, we establish global convergence to stationary points in expectation, and an extension of the approach using variance reduction techniques is discussed. Motivated by large-scale and big data applications, we investigate a stochastic coordinate-type quasi-Newton scheme that generates cheap and tractable stochastic higher-order directions. Finally, the proposed algorithm is tested on large-scale logistic regression and deep learning problems, and it is shown to compare favorably with other state-of-the-art methods.
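
The stochastic proximal gradient safeguard step is easy to sketch for an l1-regularized objective, where the proximal operator is soft-thresholding (the l1 choice is an assumption for illustration; the paper treats more general nonsmooth terms):

```python
import numpy as np

def prox_l1(v, t):
    """Proximal operator of t * ||.||_1, i.e. soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_prox_grad_step(x, stoch_grad, step, lam):
    """One stochastic proximal gradient step for f(x) + lam * ||x||_1,
    the kind of safeguarding step interleaved with quasi-Newton steps."""
    return prox_l1(x - step * stoch_grad, step * lam)
```
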

* 41 pages 

Policy Poisoning in Batch Reinforcement Learning and Control

Oct 13, 2019
Yuzhe Ma, Xuezhou Zhang, Wen Sun, Xiaojin Zhu

We study a security threat to batch reinforcement learning and control where the attacker aims to poison the learned policy. The victim is a reinforcement learner / controller that first estimates the dynamics and the rewards from a batch data set, and then solves for the optimal policy with respect to the estimates. The attacker can modify the data set slightly before learning happens, and wants to force the learner into learning a target policy chosen by the attacker. We present a unified framework for solving batch policy poisoning attacks, and instantiate the attack on two standard victims: the tabular certainty-equivalence learner in reinforcement learning and the linear quadratic regulator in control. We show that both instantiations result in convex optimization problems for which global optimality is guaranteed, and provide analyses of attack feasibility and attack cost. Experiments show the effectiveness of policy poisoning attacks.
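
The flavor of the attack can be shown on a one-state (bandit) toy: minimally perturb the rewards so a chosen target action becomes optimal. This is a drastic simplification of the paper's convex-program formulation over full dynamics and rewards:

```python
import numpy as np

def poison_rewards(rewards, target, margin=0.1):
    """Minimally raise the target action's reward so it beats every rival
    by `margin`; rewards of the other actions are left untouched."""
    r = np.asarray(rewards, dtype=float).copy()
    rivals_best = np.delete(r, target).max()
    r[target] = max(r[target], rivals_best + margin)
    return r
```

A learner that picks the empirically best action from the poisoned batch then adopts the attacker's target action, and the attack cost is simply the size of the perturbation.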

* NeurIPS 2019 
