Models, code, and papers for "Yuhao Chen":

A Voice Interactive Multilingual Student Support System using IBM Watson

Dec 20, 2019
Kennedy Ralston, Yuhao Chen, Haruna Isah, Farhana Zulkernine

Systems powered by artificial intelligence are being developed to be more user-friendly by communicating with users in a progressively human-like conversational way. Chatbots, also known as dialogue systems, interactive conversational agents, or virtual agents, are an example of such systems used in a wide variety of applications, ranging from customer support in the business domain to companionship in the healthcare sector. It is becoming increasingly important to develop chatbots that can best respond to the personalized needs of their users so that they can be as helpful as possible in a genuinely human way. This paper investigates and compares three popular existing chatbot API offerings and then proposes and develops a voice-interactive, multilingual chatbot that can effectively respond to users' mood, tone, and language using IBM Watson Assistant, Tone Analyzer, and Language Translator. The chatbot was evaluated using a use case targeted at responding to users' needs regarding exam stress, based on university students' survey data collected using Google Forms. Measurements of the chatbot's effectiveness at analyzing responses regarding exam stress indicate that it responded appropriately to user queries about how they were feeling about exams 76.5% of the time. The chatbot could also be adapted for use in other application areas such as student info-centers, government kiosks, and mental health support systems.
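
As an illustration of how such a pipeline might be wired together, here is a minimal, hypothetical sketch using the `ibm_watson` Python SDK. The version strings, credentials, service URLs, and assistant ID are placeholders, and the translate-then-analyze flow is a simplification, not the paper's implementation.

```python
# Hypothetical sketch of a translate -> analyze tone -> respond pipeline.
# Requires the ibm_watson SDK; all credentials and IDs are placeholders,
# and set_service_url(...) calls are omitted for brevity.
from ibm_watson import AssistantV2, LanguageTranslatorV3, ToneAnalyzerV3
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

auth = IAMAuthenticator("YOUR_API_KEY")
translator = LanguageTranslatorV3(version="2018-05-01", authenticator=auth)
tone_analyzer = ToneAnalyzerV3(version="2017-09-21", authenticator=auth)
assistant = AssistantV2(version="2021-06-14", authenticator=auth)

def handle_utterance(text, source_lang="es"):
    # 1. Translate the user's utterance into English for the assistant.
    if source_lang != "en":
        result = translator.translate(
            text=text, model_id=f"{source_lang}-en").get_result()
        text = result["translations"][0]["translation"]

    # 2. Detect the emotional tone of the (translated) utterance.
    tones = tone_analyzer.tone(
        tone_input={"text": text},
        content_type="application/json").get_result()

    # 3. Send the text to Watson Assistant for a dialogue response.
    session = assistant.create_session(
        assistant_id="YOUR_ASSISTANT_ID").get_result()
    reply = assistant.message(
        assistant_id="YOUR_ASSISTANT_ID",
        session_id=session["session_id"],
        input={"message_type": "text", "text": text}).get_result()
    return reply, tones
```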

* 6 pages 

Weighted Hausdorff Distance: A Loss Function For Object Localization

Jun 20, 2018
Javier Ribera, David Güera, Yuhao Chen, Edward Delp

Recent advances in Convolutional Neural Networks (CNN) have achieved remarkable results in localizing objects in images. In these networks, the training procedure usually requires providing bounding boxes or the maximum number of expected objects. In this paper, we address the task of estimating object locations without annotated bounding boxes, which are typically hand-drawn and time-consuming to label. We propose a loss function that can be used in any Fully Convolutional Network (FCN) to estimate object locations. This loss function is a modification of the Average Hausdorff Distance between two unordered sets of points. The proposed method does not require one to "guess" the maximum number of objects in the image, and has no notion of bounding boxes, region proposals, or sliding windows. We evaluate our method with three datasets designed to locate people's heads, pupil centers and plant centers. We report an average precision and recall of 94% for the three datasets, and an average location error of 6 pixels in 256x256 images.
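
For intuition, the quantity the loss modifies is compact enough to write out. The sketch below computes the plain (unweighted) average Hausdorff distance between two point sets in PyTorch; it is the basis of the paper's loss, not the full weighted version applied to an FCN's output map.

```python
# Minimal sketch: average Hausdorff distance between two 2D point sets.
# This is the unweighted quantity the paper's loss builds on, not the
# full Weighted Hausdorff Distance over an FCN probability map.
import torch

def average_hausdorff_distance(x, y):
    """x: (N, 2) and y: (M, 2) tensors of point coordinates."""
    d = torch.cdist(x, y)                  # (N, M) pairwise distances
    term_xy = d.min(dim=1).values.mean()   # mean over x of nearest y
    term_yx = d.min(dim=0).values.mean()   # mean over y of nearest x
    return term_xy + term_yx

pred = torch.rand(5, 2) * 256              # e.g., estimated head locations
gt = torch.rand(3, 2) * 256                # ground-truth locations
print(average_hausdorff_distance(pred, gt))
```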

* 19 pages, single-column 

Segmental Convolutional Neural Networks for Detection of Cardiac Abnormality With Noisy Heart Sound Recordings

Dec 06, 2016
Yuhao Zhang, Sandeep Ayyar, Long-Huei Chen, Ethan J. Li

Heart diseases constitute a global health burden, and the problem is exacerbated by the error-prone nature of listening to and interpreting heart sounds. This motivates the development of automated classification to screen for abnormal heart sounds. Existing machine learning-based systems achieve accurate classification of heart sound recordings but rely on expert features that have not been thoroughly evaluated on noisy recordings. Here we propose a segmental convolutional neural network architecture that achieves automatic feature learning from noisy heart sound recordings. Our experiments show that our best model, trained on noisy recording segments acquired with an existing hidden semi-Markov model-based approach, attains a classification accuracy of 87.5% on the 2016 PhysioNet/CinC Challenge dataset, compared to the 84.6% accuracy of the state-of-the-art statistical classifier trained and evaluated on the same dataset. Our results indicate the potential of using neural network-based methods to increase the accuracy of automated classification of heart sound recordings for improved screening of heart diseases.
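
A minimal sketch of the segmental idea follows, assuming fixed-length 1D segments and a two-class output; the layer sizes and the averaging-based aggregation are illustrative choices, and the HSMM-based segmentation from the paper is not reproduced.

```python
# Sketch: a small 1D CNN that classifies fixed-length heart-sound
# segments, with a recording-level prediction formed by averaging
# segment probabilities. Layer sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

class SegmentCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                        # x: (batch, 1, length)
        h = self.features(x).squeeze(-1)         # (batch, 32)
        return self.classifier(h)

model = SegmentCNN()
segments = torch.randn(10, 1, 2048)              # 10 segments, 1 recording
probs = torch.softmax(model(segments), dim=1)
recording_pred = probs.mean(dim=0).argmax()      # aggregate over segments
print(recording_pred)
```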

* This work was finished in May 2016 and remained unpublished until December 2016 due to a request from the data provider 

Estimating Phenotypic Traits From UAV Based RGB Imagery

Jul 02, 2018
Javier Ribera, Fangning He, Yuhao Chen, Ayman F. Habib, Edward J. Delp

In many agricultural applications one wants to characterize physical properties of plants and use the measurements to predict, for example, biomass and environmental influence. This process is known as phenotyping. Traditional collection of phenotypic information is labor-intensive and time-consuming. Use of imagery is becoming popular for phenotyping. In this paper, we present methods to estimate traits of sorghum plants from RGB cameras on board an unmanned aerial vehicle (UAV). The position and orientation of the imagery together with the coordinates of sparse points along the area of interest are derived through a new triangulation method. A rectified orthophoto mosaic is then generated from the imagery. The number of leaves is estimated and a model-based method to analyze the leaf morphology for leaf segmentation is proposed. We present a statistical model to find the location of each individual sorghum plant.
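
As a toy illustration of locating plants in an RGB orthophoto, one could look for local maxima of a vegetation index such as excess green. This is a simple stand-in for the paper's statistical plant-location model, and the threshold and spacing below are made-up parameters.

```python
# Toy sketch: find candidate plant centers in an RGB orthophoto as local
# maxima of the excess-green (ExG) vegetation index. A crude stand-in
# for the paper's statistical plant-location model.
import numpy as np
from skimage.feature import peak_local_max

def plant_centers(rgb, min_distance=20, threshold=0.1):
    """rgb: (H, W, 3) float array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2 * g - r - b                      # excess-green index
    return peak_local_max(exg, min_distance=min_distance,
                          threshold_abs=threshold)

mosaic = np.random.rand(512, 512, 3)         # placeholder orthophoto
print(plant_centers(mosaic)[:5])             # (row, col) candidates
```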

* 8 pages, double-column 

Adversarial Defense Through Network Profiling Based Path Extraction

May 09, 2019
Yuxian Qiu, Jingwen Leng, Cong Guo, Quan Chen, Chao Li, Minyi Guo, Yuhao Zhu

Recently, researchers have started decomposing deep neural network models according to their semantics or functions. Recent work has shown the effectiveness of decomposed functional blocks for defending against adversarial attacks, which add small perturbations to the input image to fool DNN models. This work proposes a profiling-based method to decompose DNN models into different functional blocks, which leads to the effective path as a new approach to exploring DNNs' internal organization. Specifically, per-image effective paths can be aggregated into a class-level effective path, through which we observe that adversarial images activate effective paths different from those of normal images. We propose an effective-path-similarity-based method to detect adversarial images with an interpretable model, which achieves better accuracy and broader applicability than the state-of-the-art technique.
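
A rough sketch of the effective-path idea is given below, with a hypothetical notion of "effective" (the top-k most active neurons per layer) and per-layer Jaccard similarity between paths; the paper's profiling-based extraction is more principled than this simplification.

```python
# Sketch: treat the top-k most active neurons in each layer as an
# image's "effective path", and compare paths with per-layer Jaccard
# similarity. Top-k selection is a hypothetical simplification of the
# paper's contribution-based profiling.
import numpy as np

def effective_path(activations, k=10):
    """activations: list of 1D arrays, one per layer."""
    return [set(np.argsort(a)[-k:]) for a in activations]

def path_similarity(p, q):
    sims = [len(a & b) / len(a | b) for a, b in zip(p, q)]
    return float(np.mean(sims))

class_path = effective_path([np.random.rand(64), np.random.rand(128)])
image_path = effective_path([np.random.rand(64), np.random.rand(128)])
# A low similarity to the class-level path would flag a possible
# adversarial input.
print(path_similarity(class_path, image_path))
```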


ZhuSuan: A Library for Bayesian Deep Learning

Sep 18, 2017
Jiaxin Shi, Jianfei Chen, Jun Zhu, Shengyang Sun, Yucen Luo, Yihong Gu, Yuhao Zhou

In this paper we introduce ZhuSuan, a Python probabilistic programming library for Bayesian deep learning, which conjoins the complementary advantages of Bayesian methods and deep learning. ZhuSuan is built upon TensorFlow. Unlike existing deep learning libraries, which are mainly designed for deterministic neural networks and supervised tasks, ZhuSuan is distinguished by its deep roots in Bayesian inference, thus supporting various kinds of probabilistic models, including both traditional hierarchical Bayesian models and recent deep generative models. We use running examples to illustrate probabilistic programming in ZhuSuan, including Bayesian logistic regression, variational auto-encoders, deep sigmoid belief networks and Bayesian recurrent neural networks.
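
The first running example in the paper is Bayesian logistic regression. As a library-agnostic illustration of that model class (deliberately not ZhuSuan's API, whose exact signatures vary across versions; see the linked repo for real examples), here is a minimal random-walk Metropolis sampler for the same posterior.

```python
# Plain-NumPy illustration of Bayesian logistic regression, the paper's
# first running example. This is NOT ZhuSuan code; it is a minimal
# random-walk Metropolis sampler for the same model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = rng.random(100) < 1 / (1 + np.exp(-X @ true_w))  # Bernoulli labels

def log_joint(w):
    logits = X @ w
    log_prior = -0.5 * np.sum(w ** 2)                # N(0, 1) prior on w
    log_lik = np.sum(y * logits - np.log1p(np.exp(logits)))
    return log_prior + log_lik

w = np.zeros(3)
samples = []
for _ in range(5000):                                # random-walk Metropolis
    prop = w + 0.1 * rng.normal(size=3)
    if np.log(rng.random()) < log_joint(prop) - log_joint(w):
        w = prop
    samples.append(w)
print(np.mean(samples[1000:], axis=0))               # posterior mean of w
```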

* The GitHub page is at https://github.com/thu-ml/zhusuan 

Deep Structured Models For Group Activity Recognition

Jun 12, 2015
Zhiwei Deng, Mengyao Zhai, Lei Chen, Yuhao Liu, Srikanth Muralidharan, Mehrsan Javan Roshtkhari, Greg Mori

This paper presents a deep neural-network-based hierarchical graphical model for individual and group activity recognition in surveillance scenes. Deep networks are used to recognize the actions of individual people in a scene. Next, a neural-network-based hierarchical graphical model refines the predicted labels for each class by considering dependencies between the classes. This refinement step mimics a message-passing step similar to inference in a probabilistic graphical model. We show that this approach can be effective in group activity recognition, with the deep graphical model improving recognition rates over baseline methods.
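
A toy sketch of the refinement step follows, assuming per-person action logits are already available; a single hypothetical "message-passing" round pools scene-level evidence and feeds it back into each person's prediction, which only loosely mirrors the paper's graphical model.

```python
# Toy sketch: refine per-person action logits with one message-passing-
# like round that conditions each person on a pooled scene message.
# Dimensions and the single linear refiner are illustrative only.
import torch
import torch.nn as nn

n_people, n_actions, n_group = 6, 5, 4
person_logits = torch.randn(n_people, n_actions)   # from a per-person CNN

refine = nn.Linear(n_actions * 2, n_actions)       # person + scene message
group_head = nn.Linear(n_actions, n_group)

scene = person_logits.softmax(dim=1).mean(dim=0)   # pooled scene "message"
inp = torch.cat([person_logits, scene.expand(n_people, -1)], dim=1)
refined = refine(inp)                              # refined person labels
group_logits = group_head(refined.mean(dim=0))     # group activity label
print(refined.shape, group_logits.shape)
```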


DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks

Nov 20, 2019
Ao Ren, Tao Zhang, Yuhao Wang, Sheng Lin, Peiyan Dong, Yen-kuang Chen, Yuan Xie, Yanzhi Wang

The rapidly growing parameter volume of deep neural networks (DNNs) hinders artificial intelligence applications on resource-constrained devices, such as mobile and wearable devices. Neural network pruning, as one of the mainstream model compression techniques, is under extensive study to reduce the number of parameters and computations. In contrast to irregular pruning, which incurs high index storage and decoding overhead, structured pruning techniques have been proposed as promising solutions. However, prior studies on structured pruning tackle the problem mainly from the perspective of facilitating hardware implementation, without analyzing the characteristics of sparse neural networks. This neglect leads to an inefficient trade-off between regularity and pruning ratio, so the potential of structured pruning is not fully exploited. In this work, we examine the structural characteristics of irregularly pruned weight matrices, such as the diverse redundancy of different rows, the sensitivity of different rows to pruning, and the positional characteristics of retained weights. Using the gained insights as guidance, we first propose the novel block-max weight masking (BMWM) method, which can effectively retain the salient weights while imposing high regularity on the weight matrix. As a further optimization, we propose density-adaptive regular-block (DARB) pruning, which outperforms prior structured pruning work with a high pruning ratio and decoding efficiency. Our experimental results show that DARB can achieve 13$\times$ to 25$\times$ pruning ratios, a 2.8$\times$ to 4.3$\times$ improvement over state-of-the-art counterparts on multiple neural network models and tasks. Moreover, DARB achieves 14.3$\times$ higher decoding efficiency than block pruning at a higher pruning ratio.
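
A minimal sketch of the block-max masking idea is shown below, assuming the simplest possible variant: each row is split into fixed-size blocks and only the largest-magnitude weight per block is retained. The block size and the keep-one-per-block rule are illustrative simplifications of BMWM/DARB.

```python
# Sketch: block-max weight masking on a 2D weight matrix. Each row is
# split into fixed-size blocks and only the largest-magnitude weight in
# each block is kept, yielding a regular one-nonzero-per-block pattern.
# (Illustrative simplification of the paper's BMWM/DARB methods.)
import numpy as np

def block_max_mask(w, block=4):
    rows, cols = w.shape
    assert cols % block == 0
    blocks = np.abs(w).reshape(rows, cols // block, block)
    keep = blocks.argmax(axis=2)                 # index of max per block
    mask = np.zeros_like(blocks)
    i, j = np.indices(keep.shape)
    mask[i, j, keep] = 1.0                       # keep only the block max
    return mask.reshape(rows, cols)

w = np.random.randn(3, 8)
print(w * block_max_mask(w, block=4))            # pruned weight matrix
```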

* This paper has been accepted by AAAI'2020 

Water Supply Prediction Based on Initialized Attention Residual Network

Dec 17, 2019
Yuhao Long, Jingcheng Wang, Jingyi Wang

Real-time and accurate water supply forecasting is crucial for water plants. However, most existing methods are affected by factors such as weather and holidays, which reduces the reliability of water supply prediction. In this paper, we present a generic artificial neural network, called the Initialized Attention Residual Network (IARN), which combines an attention module with residual modules. Specifically, instead of continuing to use a recurrent neural network (RNN) for this time-series task, we build a convolutional neural network (CNN) to reduce the disturbance from other factors, relieve the limitation of memory size, and obtain more credible results. Our method achieves state-of-the-art performance on several datasets in terms of accuracy, robustness and generalization ability.
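
Below is a minimal sketch of a 1D-CNN forecaster with a simple attention weighting over time steps, under assumed window and channel sizes; the paper's initialization scheme and residual modules are not reproduced here.

```python
# Sketch: a 1D CNN with simple temporal attention for one-step water
# supply forecasting. Window size, channels, and the attention form are
# assumptions, not the paper's IARN architecture.
import torch
import torch.nn as nn

class TinyAttnCNN(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.conv = nn.Conv1d(1, channels, kernel_size=3, padding=1)
        self.score = nn.Linear(channels, 1)      # attention over time steps
        self.out = nn.Linear(channels, 1)

    def forward(self, x):                        # x: (batch, 1, window)
        h = torch.relu(self.conv(x)).transpose(1, 2)   # (batch, T, C)
        a = torch.softmax(self.score(h), dim=1)        # (batch, T, 1)
        ctx = (a * h).sum(dim=1)                       # attended context
        return self.out(ctx)                           # next-step forecast

model = TinyAttnCNN()
history = torch.randn(8, 1, 24)                  # 24 past readings
print(model(history).shape)                      # (8, 1) predictions
```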

* 7 pages, 4 figures. This work has been submitted to IFAC for possible publication 

Tigris: Architecture and Algorithms for 3D Perception in Point Clouds

Nov 21, 2019
Tiancheng Xu, Boyuan Tian, Yuhao Zhu

Machine perception applications are increasingly moving toward manipulating and processing 3D point clouds. This paper focuses on point cloud registration, a key primitive of 3D data processing widely used in high-level tasks such as odometry, simultaneous localization and mapping, and 3D reconstruction. As these applications are routinely deployed in energy-constrained environments, real-time and energy-efficient point cloud registration is critical. We present Tigris, an algorithm-architecture co-designed system specialized for point cloud registration. Through an extensive exploration of the registration pipeline design space, we find that, while different design points make vastly different trade-offs between accuracy and performance, KD-tree search is a common performance bottleneck, and thus is an ideal candidate for architectural specialization. While KD-tree search is inherently sequential, we propose an acceleration-amenable data structure and search algorithm that exposes different forms of parallelism of KD-tree search in the context of point cloud registration. The co-designed accelerator systematically exploits the parallelism while incorporating a set of architectural techniques that further improve the accelerator efficiency. Overall, Tigris achieves a 77.2$\times$ speedup and 7.4$\times$ power reduction in KD-tree search over an RTX 2080 Ti GPU, which translates to a 41.7% registration performance improvement and a 3.0$\times$ power reduction.
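
The KD-tree bottleneck the paper accelerates is the nearest-neighbor correspondence search inside registration. In software this step is only a few lines with `scipy.spatial.cKDTree`, as in the sketch below, where random points stand in for real scans.

```python
# Sketch: the nearest-neighbor correspondence search step of point cloud
# registration, done in software with a KD-tree. This is the operation
# Tigris accelerates in hardware; random points stand in for real scans.
import numpy as np
from scipy.spatial import cKDTree

source = np.random.rand(10000, 3)    # point cloud to be aligned
target = np.random.rand(10000, 3)    # reference point cloud

tree = cKDTree(target)               # build KD-tree over the target
dist, idx = tree.query(source, k=1)  # nearest target point per source point
correspondences = target[idx]        # pairs consumed by, e.g., an ICP step
print(dist.mean())
```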

* Published at MICRO-52 (52nd IEEE/ACM International Symposium on Microarchitecture); Tiancheng Xu and Boyuan Tian are co-primary authors 

Semantic Role Labeling with Associated Memory Network

Aug 05, 2019
Chaoyu Guan, Yuhao Cheng, Hai Zhao

Semantic role labeling (SRL) is the task of recognizing all the predicate-argument pairs of a sentence, and its performance improvement has hit a bottleneck despite a series of recent works. This paper proposes a novel syntax-agnostic SRL model enhanced by an associated memory network (AMN), which uses inter-sentence attention over label-known associated sentences as a kind of memory to further improve dependency-based SRL. In detail, we use sentences and their labels from the training dataset as an associated memory cue to help label the target sentence. Furthermore, we compare several strategies for selecting associated sentences and several label merging methods in the AMN to find and utilize the labels of associated sentences while attending to them. By leveraging this attentive memory from known training data, our full model reaches state-of-the-art performance on the CoNLL-2009 benchmark datasets in the syntax-agnostic setting, showing an effective new line of SRL enhancement beyond exploiting external resources such as well pre-trained language models.
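
A stripped-down sketch of the memory-attention idea: target-sentence token vectors attend over pooled representations of label-known memory sentences, and the attended summary is concatenated back into the token representations. The dimensions and the one-vector-per-sentence pooling are assumptions.

```python
# Sketch: attend from target-sentence tokens over a memory of label-
# known associated sentences and concatenate the attended summary back.
# Pooling each memory sentence to one vector is an assumed simplification.
import torch

d, n_tokens, n_memory = 64, 12, 5
target = torch.randn(n_tokens, d)          # target sentence token vectors
memory = torch.randn(n_memory, d)          # one pooled vector per sentence

attn = torch.softmax(target @ memory.T / d ** 0.5, dim=1)  # (12, 5)
summary = attn @ memory                    # memory summary per token
enhanced = torch.cat([target, summary], dim=1)  # fed to the SRL scorer
print(enhanced.shape)                      # (12, 128)
```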

* Published at NAACL 2019; this is the camera-ready version; code is available at https://github.com/Frozenmad/AMN_SRL 

Causal Discovery from Incomplete Data: A Deep Learning Approach

Jan 15, 2020
Yuhao Wang, Vlado Menkovski, Hao Wang, Xin Du, Mykola Pechenizkiy

As systems become more autonomous with the development of artificial intelligence, it is important to discover causal knowledge from observational sensory inputs. By encoding a series of cause-effect relations between events, causal networks can facilitate the prediction of effects from a given action and the analysis of the underlying data generation mechanism. However, missing data are ubiquitous in practical scenarios, and directly running existing causal discovery algorithms on partially observed data may lead to incorrect inferences. To alleviate this issue, we propose a deep learning framework, dubbed Imputated Causal Learning (ICL), that performs iterative missing-data imputation and causal structure discovery. Through extensive simulations on both synthetic and real data, we show that ICL outperforms state-of-the-art methods under different missing-data mechanisms.
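
To make the alternating structure concrete, here is a toy loop that alternates simple imputation with a correlation-threshold stand-in for the discovery step. The paper's ICL framework uses learned, deep-generative imputation and a proper structure learner, neither of which is reproduced here.

```python
# Toy sketch of the iterative impute-then-discover loop. Least-squares
# re-imputation and a correlation-threshold "skeleton" are crude
# stand-ins for ICL's deep imputation and causal structure learning.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
X[:, 1] += 2 * X[:, 0]                      # simple causal dependence
mask = rng.random(X.shape) < 0.2            # 20% missing at random
X_obs = np.where(mask, np.nan, X)

X_hat = np.where(mask, np.nanmean(X_obs, axis=0), X_obs)  # init: mean fill
for _ in range(5):
    corr = np.corrcoef(X_hat, rowvar=False)
    skeleton = (np.abs(corr) > 0.3) & ~np.eye(4, dtype=bool)
    # Re-impute each variable from its current neighbors by least squares.
    for j in range(4):
        nb = np.where(skeleton[j])[0]
        if nb.size == 0:
            continue
        beta, *_ = np.linalg.lstsq(X_hat[:, nb], X_hat[:, j], rcond=None)
        pred = X_hat[:, nb] @ beta
        X_hat[:, j] = np.where(mask[:, j], pred, X_hat[:, j])
print(skeleton.astype(int))                 # recovered adjacency skeleton
```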


Multimodal Unified Attention Networks for Vision-and-Language Interactions

Aug 19, 2019
Zhou Yu, Yuhao Cui, Jun Yu, Dacheng Tao, Qi Tian

Learning an effective attention mechanism for multimodal data is important in many vision-and-language tasks that require a synergic understanding of both the visual and textual contents. Existing state-of-the-art approaches use co-attention models to associate each visual object (e.g., image region) with each textual object (e.g., query word). Despite the success of these co-attention models, they only model inter-modal interactions while neglecting intra-modal interactions. Here we propose a general `unified attention' model that simultaneously captures the intra- and inter-modal interactions of multimodal features and outputs their corresponding attended representations. By stacking such unified attention blocks in depth, we obtain the deep Multimodal Unified Attention Network (MUAN), which can seamlessly be applied to the visual question answering (VQA) and visual grounding tasks. We evaluate our MUAN models on two VQA datasets and three visual grounding datasets, and the results show that MUAN achieves top-level performance on both tasks without bells and whistles.
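
A compact sketch of the "unified attention" idea: concatenate visual and textual features into one sequence and let a single multi-head self-attention capture intra- and inter-modal interactions at once. The use of stock `nn.MultiheadAttention` and all dimensions are stand-ins for the paper's block.

```python
# Sketch: one "unified attention" step over the concatenation of visual
# and textual features, so a single self-attention captures both intra-
# and inter-modal interactions. nn.MultiheadAttention is a stand-in for
# the paper's unified attention block.
import torch
import torch.nn as nn

d, n_regions, n_words = 64, 36, 14
img = torch.randn(n_regions, 1, d)     # image region features
txt = torch.randn(n_words, 1, d)       # question word features

fused = torch.cat([img, txt], dim=0)   # one multimodal sequence (50, 1, d)
attn = nn.MultiheadAttention(embed_dim=d, num_heads=8)
out, _ = attn(fused, fused, fused)     # joint self-attention
img_att, txt_att = out[:n_regions], out[n_regions:]
print(img_att.shape, txt_att.shape)
```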

* 11 pages, 7 figures 

Deep Modular Co-Attention Networks for Visual Question Answering

Jun 25, 2019
Zhou Yu, Jun Yu, Yuhao Cui, Dacheng Tao, Qi Tian

Visual Question Answering (VQA) requires a fine-grained and simultaneous understanding of both the visual content of images and the textual content of questions. Therefore, designing an effective `co-attention' model to associate key words in questions with key objects in images is central to VQA performance. So far, most successful attempts at co-attention learning have been achieved by using shallow models, and deep co-attention models show little improvement over their shallow counterparts. In this paper, we propose a deep Modular Co-Attention Network (MCAN) that consists of Modular Co-Attention (MCA) layers cascaded in depth. Each MCA layer models the self-attention of questions and images, as well as the guided-attention of images jointly using a modular composition of two basic attention units. We quantitatively and qualitatively evaluate MCAN on the benchmark VQA-v2 dataset and conduct extensive ablation studies to explore the reasons behind MCAN's effectiveness. Experimental results demonstrate that MCAN significantly outperforms the previous state-of-the-art. Our best single model delivers 70.63$\%$ overall accuracy on the test-dev set. Code is available at https://github.com/MILVLG/mcan-vqa.
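
Below is a rough sketch of one Modular Co-Attention layer composed from the two basic units the abstract mentions: self-attention within each modality, then question-guided attention over image features. Stock `nn.MultiheadAttention` and the dimensions are approximations; the authors' actual implementation is in the linked repository.

```python
# Sketch: one MCA-style layer = self-attention on the question,
# self-attention on the image, then question-guided attention over the
# image features. Built from stock nn.MultiheadAttention as an
# approximation; the real code is at github.com/MILVLG/mcan-vqa.
import torch
import torch.nn as nn

d = 64
q = torch.randn(14, 1, d)              # question word features
v = torch.randn(36, 1, d)              # image region features

sa_q = nn.MultiheadAttention(d, num_heads=8)
sa_v = nn.MultiheadAttention(d, num_heads=8)
ga = nn.MultiheadAttention(d, num_heads=8)

q2, _ = sa_q(q, q, q)                  # self-attention of the question
v2, _ = sa_v(v, v, v)                  # self-attention of the image
v3, _ = ga(v2, q2, q2)                 # image attends to question (guided)
print(q2.shape, v3.shape)
```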

* Accepted at CVPR 2019 

Deep learning based cloud detection for remote sensing images by the fusion of multi-scale convolutional features

Oct 13, 2018
Zhiwei Li, Huanfeng Shen, Qing Cheng, Yuhao Liu, Shucheng You, Zongyi He

Cloud detection is an important preprocessing step for the precise application of optical satellite imagery. In this paper, we propose a deep convolutional neural network based cloud detection method named multi-scale convolutional feature fusion (MSCFF) for remote sensing images. In the network architecture of MSCFF, the encoder and corresponding decoder modules, which provide both local and global context by densifying feature maps with trainable filter banks, are utilized to extract multi-scale and high-level spatial features. The feature maps of multiple scales are then up-sampled and concatenated, and a novel MSCFF module is designed to fuse the features of different scales for the output. The output feature maps of the network are regarded as probability maps, and fed to a binary classifier for the final pixel-wise cloud and cloud shadow segmentation. The MSCFF method was validated on hundreds of globally distributed optical satellite images, with spatial resolutions ranging from 0.5 to 50 m, including Landsat-5/7/8, Gaofen-1/2/4, Sentinel-2, Ziyuan-3, CBERS-04, Huanjing-1, and collected high-resolution images exported from Google Earth. The experimental results indicate that MSCFF has obvious advantages over the traditional rule-based cloud detection methods and the state-of-the-art deep learning models in terms of accuracy, especially in bright surface covered areas. The effectiveness of MSCFF means that it has great promise for the practical application of cloud detection for multiple types of satellite imagery. Our established global high-resolution cloud detection validation dataset has been made available online.
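
The core fusion step, upsampling feature maps from several scales to a common resolution and concatenating them before prediction, can be sketched in a few lines; the channel counts and the 1x1 prediction conv below are illustrative, not the MSCFF module itself.

```python
# Sketch: fuse multi-scale convolutional features by upsampling them to
# a common resolution and concatenating, then reduce with a 1x1 conv to
# a per-pixel cloud probability map. Shapes are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

f1 = torch.randn(1, 32, 64, 64)        # fine-scale feature map
f2 = torch.randn(1, 64, 32, 32)        # mid-scale feature map
f3 = torch.randn(1, 128, 16, 16)       # coarse-scale feature map

def up(f):                              # upsample to the finest resolution
    return F.interpolate(f, size=(64, 64), mode="bilinear",
                         align_corners=False)

fused = torch.cat([f1, up(f2), up(f3)], dim=1)   # (1, 224, 64, 64)
head = nn.Conv2d(224, 1, kernel_size=1)          # 1x1 fusion/prediction
cloud_prob = torch.sigmoid(head(fused))          # per-pixel probability
print(cloud_prob.shape)
```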

* Under review 

An Action Recognition network for specific target based on rMC and RPN

Jun 19, 2019
Mingjie Li, Youqian Feng, Zhonghai Yin, Cheng Zhou, Fanghao Dong, Yuan Lin, Yuhao Dong

Traditional action recognition methods are not specific to the operator, so their results are easily disturbed when other actions are performed in the video. This paper proposes a network based on a mixed-convolution ResNet (rMC) and a region proposal network (RPN). The rMC is tested on the UCF-101 dataset and compared with the R3D method; its accuracy reaches 71.07%. Meanwhile, the action recognition network is tested on our gesture and body-posture datasets for a specific target. The simulation achieves good performance, with a running speed of 200 FPS. Finally, our model is improved by introducing a regression block and performs better, which shows the model's great potential.
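
A rough sketch of a mixed-convolution ("rMC"-style) residual block follows, assuming the common pattern of a spatiotemporal 3D convolution followed by a spatial-only one; kernel sizes and channels are illustrative guesses, not the paper's exact design.

```python
# Sketch: a mixed-convolution residual block that applies a 3D (spatio-
# temporal) conv followed by a spatial-only conv, expressed as a 3D conv
# with a 1xkxk kernel. An illustrative guess at an rMC-style block.
import torch
import torch.nn as nn

class MixedConvBlock(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.conv3d = nn.Conv3d(channels, channels, kernel_size=3,
                                padding=1)
        self.conv2d = nn.Conv3d(channels, channels,       # spatial-only
                                kernel_size=(1, 3, 3), padding=(0, 1, 1))

    def forward(self, x):                 # x: (batch, C, T, H, W)
        h = torch.relu(self.conv3d(x))
        h = self.conv2d(h)
        return torch.relu(x + h)          # residual connection

clip = torch.randn(2, 32, 8, 56, 56)      # batch of 8-frame clips
print(MixedConvBlock()(clip).shape)
```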

* 10 pages, 14 figures 
