Models, code, and papers for "Thao Do":

A Fine-to-Coarse Convolutional Neural Network for 3D Human Action Recognition

Aug 18, 2018
Thao Minh Le, Nakamasa Inoue, Koichi Shinoda

This paper presents a new framework for human action recognition from a 3D skeleton sequence. Previous studies do not fully utilize the temporal relationships between video segments in a human action. Some studies have successfully used very deep Convolutional Neural Network (CNN) models but often suffer from the data insufficiency problem. In this study, we first segment a skeleton sequence into distinct temporal segments in order to exploit the correlations between them. The temporal and spatial features of a skeleton sequence are then extracted simultaneously by utilizing a fine-to-coarse (F2C) CNN architecture optimized for human skeleton sequences. We evaluate our proposed method on the NTU RGB+D and SBU Kinect Interaction datasets. It achieves accuracies of 79.6% and 84.6% on NTU RGB+D with the cross-subject and cross-view protocols, respectively, which are almost identical to the state-of-the-art performance. In addition, our method significantly improves the accuracy of actions involving two-person interactions.
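
As a rough illustration of the segment-then-convolve idea above (not the authors' released code), the PyTorch sketch below splits a skeleton tensor into temporal segments, applies a small "fine" CNN to each segment, and convolves over the concatenated segment features as the "coarse" stage; the segment count, channel sizes, and pooling are illustrative assumptions.

    # Illustrative fine-to-coarse (F2C) sketch for skeleton sequences, not the paper's exact model.
    # Assumed input shape: (batch, frames, joints, 3); all layer sizes are made up for the example.
    import torch
    import torch.nn as nn

    class F2CSketch(nn.Module):
        def __init__(self, num_segments=4, num_classes=60):
            super().__init__()
            self.num_segments = num_segments
            # "Fine" stage: a small CNN applied to each temporal segment independently,
            # treating (frames_per_segment, joints) as a 2D grid with 3 coordinate channels.
            self.fine = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d((8, 8)),
            )
            # "Coarse" stage: convolve over the concatenated segment features to capture
            # correlations between segments.
            self.coarse = nn.Sequential(
                nn.Conv2d(32 * num_segments, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, num_classes)

        def forward(self, skel):                              # skel: (B, T, J, 3)
            x = skel.permute(0, 3, 1, 2)                      # (B, 3, T, J)
            segments = torch.chunk(x, self.num_segments, dim=2)
            feats = [self.fine(s) for s in segments]          # each: (B, 32, 8, 8)
            fused = torch.cat(feats, dim=1)                   # (B, 32 * num_segments, 8, 8)
            return self.classifier(self.coarse(fused).flatten(1))

    model = F2CSketch()
    logits = model(torch.randn(2, 64, 25, 3))                 # e.g. 64 frames, 25 NTU joints
    print(logits.shape)                                       # torch.Size([2, 60])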

* Camera-ready manuscript for BMVC2018 

Hierarchical Conditional Relation Networks for Video Question Answering

Mar 17, 2020
Thao Minh Le, Vuong Le, Svetha Venkatesh, Truyen Tran

Video question answering (VideoQA) is challenging as it requires modeling capacity to distill dynamic visual artifacts and distant relations and to associate them with linguistic concepts. We introduce a general-purpose reusable neural unit called Conditional Relation Network (CRN) that serves as a building block to construct more sophisticated structures for representation and reasoning over video. CRN takes as input an array of tensorial objects and a conditioning feature, and computes an array of encoded output objects. Model building becomes a simple exercise of replication, rearrangement and stacking of these reusable units for diverse modalities and contextual information. This design thus supports high-order relational and multi-step reasoning. The resulting architecture for VideoQA is a CRN hierarchy whose branches represent sub-videos or clips, all sharing the same question as the contextual condition. Our evaluations on well-known datasets achieved new SoTA results, demonstrating the impact of building a general-purpose reasoning unit on complex domains such as VideoQA.
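
The input/output contract of a CRN-style unit can be sketched roughly as below, assuming the relation function is a small MLP over averaged subsets of the input objects conditioned on the question; the released code at the GitHub link above differs in detail, so this is only indicative.

    # Rough sketch of a Conditional Relation Network-style unit; subset handling and the MLP
    # are illustrative assumptions (see the authors' repository for the real implementation).
    import itertools
    import torch
    import torch.nn as nn

    class CRNUnitSketch(nn.Module):
        """Takes an array of object tensors plus a conditioning feature and returns
        an array of encoded outputs, one per considered subset size."""
        def __init__(self, dim, max_subset_size=3):
            super().__init__()
            self.max_subset_size = max_subset_size
            self.g = nn.Sequential(nn.Linear(2 * dim, dim), nn.ELU(), nn.Linear(dim, dim))

        def forward(self, objects, condition):     # objects: list of (B, dim); condition: (B, dim)
            outputs = []
            n = len(objects)
            for k in range(2, min(self.max_subset_size, n) + 1):
                relations = []
                for subset in itertools.combinations(range(n), k):
                    pooled = torch.stack([objects[i] for i in subset], dim=0).mean(dim=0)
                    relations.append(self.g(torch.cat([pooled, condition], dim=-1)))
                outputs.append(torch.stack(relations, dim=0).mean(dim=0))
            return outputs                          # list of (B, dim) encoded objects

    unit = CRNUnitSketch(dim=128)
    clips = [torch.randn(2, 128) for _ in range(4)]   # e.g. 4 clip features
    question = torch.randn(2, 128)                    # question embedding as the condition
    print([o.shape for o in unit(clips, question)])   # two encoded outputs of shape (2, 128)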

* CVPR 2020, Oral 
* Check out our code on GitHub at https://github.com/thaolmk54/hcrn-videoqa 

Multimodal Deep Models for Predicting Affective Responses Evoked by Movies

Sep 17, 2019
Ha Thi Phuong Thao, Dorien Herremans, Gemma Roig

The goal of this study is to develop and analyze multimodal models for predicting experienced affective responses of viewers watching movie clips. We develop hybrid multimodal prediction models based on both the video and audio of the clips. For the video content, we hypothesize that both image content and motion are crucial features for evoked emotion prediction. To capture such information, we extract features from RGB frames and optical flow using pre-trained neural networks. For the audio model, we compute an enhanced set of low-level descriptors including intensity, loudness, cepstrum, linear predictor coefficients, pitch and voice quality. Both visual and audio features are then concatenated to create audio-visual features, which are used to predict the evoked emotion. To classify the movie clips into the corresponding affective response categories, we propose two approaches based on deep neural network models. The first is based on fully connected layers without memory on the time component; the second incorporates the sequential dependency with a long short-term memory (LSTM) recurrent neural network. We perform a thorough analysis of the importance of each feature set. Our experiments reveal that, in our set-up, predicting emotions at each time step independently gives slightly better accuracy than the LSTM. Interestingly, we also observe that optical flow is more informative than RGB in videos, and overall, models using audio features are more accurate than those based on video features when making the final prediction of evoked emotions.
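
The two classifier variants contrasted above can be sketched as follows, assuming pre-extracted RGB, optical-flow and audio descriptors concatenated per time step; the feature dimensions and layer widths are placeholders, not the paper's actual configuration.

    # Minimal sketch of the two classifier variants described above (sizes are assumptions).
    import torch
    import torch.nn as nn

    FEAT_DIM = 512 + 512 + 128   # e.g. RGB + optical-flow + audio descriptors, concatenated

    class FramewiseMLP(nn.Module):
        """Variant 1: predicts the affective class at each time step independently."""
        def __init__(self, num_classes=3):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(FEAT_DIM, 256), nn.ReLU(), nn.Linear(256, num_classes))

        def forward(self, x):                 # x: (B, T, FEAT_DIM)
            return self.net(x)                # (B, T, num_classes)

    class SequenceLSTM(nn.Module):
        """Variant 2: models the temporal dependency with an LSTM before classifying."""
        def __init__(self, num_classes=3):
            super().__init__()
            self.lstm = nn.LSTM(FEAT_DIM, 256, batch_first=True)
            self.head = nn.Linear(256, num_classes)

        def forward(self, x):                 # x: (B, T, FEAT_DIM)
            h, _ = self.lstm(x)
            return self.head(h)               # (B, T, num_classes)

    feats = torch.randn(2, 30, FEAT_DIM)      # 30 time steps of concatenated audio-visual features
    print(FramewiseMLP()(feats).shape, SequenceLSTM()(feats).shape)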

* Proceedings of the 2nd International Workshop on Computer Vision for Physiological Measurement as part of ICCV. Seoul, South Korea. 2019 
* 10 pages, 7 figures 

Learning to Reason with Relational Video Representation for Question Answering

Jul 10, 2019
Thao Minh Le, Vuong Le, Svetha Venkatesh, Truyen Tran

How does a machine learn to reason about the content of a video in answering a question? A Video QA system must simultaneously understand language, represent visual content over space-time, and iteratively transform these representations in response to the lingual content in the query, finally arriving at a sensible answer. While recent advances in textual and visual question answering have come up with sophisticated visual representations and neural reasoning mechanisms, major challenges in Video QA remain in the dynamic grounding of concepts, relations and actions to support the reasoning process. We present a new end-to-end layered architecture for Video QA, which is composed of a question-guided video representation layer and a generic reasoning layer that produces the answer. The video is represented using a hierarchical model that encodes visual information about objects, actions and relations in space-time given the textual cues from the question. The encoded representation is then passed to a reasoning module, which in this paper is implemented as a MAC net. The system is evaluated on the SVQA (synthetic) and TGIF-QA (real) datasets, demonstrating state-of-the-art results, with a large margin in the case of multi-step reasoning.
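
Schematically, the layered split described above (a question-guided video representation feeding a generic reasoning layer) might look like the sketch below; the attention-pooling encoder and the stand-in reasoning head are placeholders for the paper's hierarchical video model and MAC network.

    # Schematic of the two-layer split: a question-conditioned video encoder feeding a reasoning
    # head. Module internals here are placeholders, not the paper's actual networks.
    import torch
    import torch.nn as nn

    class QuestionGuidedVideoEncoder(nn.Module):
        """Pools clip features with question-conditioned attention (illustrative)."""
        def __init__(self, dim=256):
            super().__init__()
            self.score = nn.Linear(2 * dim, 1)

        def forward(self, clips, question):               # clips: (B, N, D), question: (B, D)
            q = question.unsqueeze(1).expand_as(clips)
            attn = torch.softmax(self.score(torch.cat([clips, q], dim=-1)), dim=1)
            return (attn * clips).sum(dim=1)               # (B, D) question-conditioned video code

    class ReasoningHead(nn.Module):
        """Stand-in for the reasoning module (a MAC network in the paper)."""
        def __init__(self, dim=256, num_answers=1000):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ELU(), nn.Linear(dim, num_answers))

        def forward(self, video_code, question):
            return self.mlp(torch.cat([video_code, question], dim=-1))

    clips, question = torch.randn(2, 8, 256), torch.randn(2, 256)
    video_code = QuestionGuidedVideoEncoder()(clips, question)
    print(ReasoningHead()(video_code, question).shape)     # torch.Size([2, 1000])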


Deep Learning Based Multi-modal Addressee Recognition in Visual Scenes with Utterances

Sep 12, 2018
Thao Minh Le, Nobuyuki Shimizu, Takashi Miyazaki, Koichi Shinoda

With the widespread use of intelligent systems, such as smart speakers, addressee recognition has become a concern in human-computer interaction, as more and more people expect such systems to understand complicated social scenes, including those outdoors, in cafeterias, and in hospitals. Because previous studies typically focused only on pre-specified tasks with limited conversational situations, such as controlling smart homes, we created a mock dataset called Addressee Recognition in Visual Scenes with Utterances (ARVSU) that contains a vast body of image variations in visual scenes, with an annotated utterance and a corresponding addressee for each scenario. We also propose a multi-modal deep-learning-based model that takes different human cues, specifically eye gaze and transcripts of an utterance corpus, into account to predict the conversational addressee from a specific speaker's view in various real-life conversational scenarios. To the best of our knowledge, we are the first to introduce an end-to-end deep learning model that combines vision and utterance transcripts for addressee recognition. Our study suggests that future addressee recognition systems can reach the ability to understand human intention in many previously unexplored social situations, and our multi-modal dataset is a first step in promoting research in this field.
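
A toy sketch of the two-stream fusion described above, combining a pre-extracted gaze/visual feature with an encoded utterance transcript, is given below; the encoders, vocabulary size, and number of candidate addressees are assumptions rather than the paper's actual model.

    # Toy two-stream fusion for addressee recognition: gaze/visual feature + utterance transcript.
    # Encoders and dimensions are illustrative assumptions, not the paper's architecture.
    import torch
    import torch.nn as nn

    class AddresseeSketch(nn.Module):
        def __init__(self, vocab_size=5000, num_addressees=4):
            super().__init__()
            self.gaze_encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # pre-extracted gaze feature
            self.text_embed = nn.Embedding(vocab_size, 64)
            self.text_encoder = nn.GRU(64, 64, batch_first=True)
            self.classifier = nn.Linear(64 + 64, num_addressees)

        def forward(self, gaze_feat, token_ids):        # gaze_feat: (B, 128), token_ids: (B, L)
            g = self.gaze_encoder(gaze_feat)
            _, h = self.text_encoder(self.text_embed(token_ids))
            return self.classifier(torch.cat([g, h[-1]], dim=-1))   # logits over candidate addressees

    model = AddresseeSketch()
    print(model(torch.randn(2, 128), torch.randint(0, 5000, (2, 12))).shape)   # torch.Size([2, 4])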

* Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence Main track. Pages 1546-1553 

Obesity Prediction with EHR Data: A deep learning approach with interpretable elements

Jan 30, 2020
Mehak Gupta, Thao-Ly T. Phan, Timothy Bunnell, Rahmatollah Beheshti

Childhood obesity is a major public health challenge. Obesity in early childhood and adolescence can lead to obesity and other health problems in adulthood. Early prediction and identification of children at high risk of developing childhood obesity may help in engaging earlier and more effective interventions to prevent and manage this and other related health conditions. Existing predictive tools designed for childhood obesity primarily rely on traditional regression-type methods without exploiting longitudinal patterns in children's data (ignoring data temporality). In this paper, we present a machine learning model specifically designed for predicting future obesity patterns from generally available items in children's medical history. To do this, we used a large unaugmented EHR (Electronic Health Record) dataset from a major pediatric health system in the US. We adopt a general LSTM (long short-term memory) network architecture and train our model over both dynamic (sequential) and static (demographic) EHR data. We additionally include a set of embedding and attention layers that compute the feature ranking of each timestamp and the attention scores of each hidden layer corresponding to each input timestamp. These feature rankings and attention scores add interpretability at both the feature and timestamp levels.
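
A minimal sketch of the described setup, an LSTM over sequential visit features combined with static demographics and a per-timestamp attention layer, is shown below; the input dimensions and the exact attention form are assumptions.

    # Hedged sketch of an LSTM over sequential EHR visits with static demographics and
    # per-timestamp attention; feature dimensions and the attention form are assumptions.
    import torch
    import torch.nn as nn

    class EHRAttentionLSTM(nn.Module):
        def __init__(self, dyn_dim=100, static_dim=10, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(dyn_dim, hidden, batch_first=True)
            self.attn = nn.Linear(hidden, 1)                  # one score per timestamp
            self.head = nn.Linear(hidden + static_dim, 1)     # e.g. future-obesity logit

        def forward(self, visits, demographics):              # visits: (B, T, dyn_dim)
            h, _ = self.lstm(visits)                          # (B, T, hidden)
            scores = torch.softmax(self.attn(h), dim=1)       # (B, T, 1) attention per visit
            context = (scores * h).sum(dim=1)                 # attention-weighted summary
            logit = self.head(torch.cat([context, demographics], dim=-1))
            return logit, scores.squeeze(-1)                  # scores expose which visits mattered

    model = EHRAttentionLSTM()
    logit, attn = model(torch.randn(4, 12, 100), torch.randn(4, 10))
    print(logit.shape, attn.shape)                            # torch.Size([4, 1]) torch.Size([4, 12])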

* 15 pages, 3 Tables, 7 figures 

Obesity Prediction with EHR Data: A deep learning approach with interpretability

Dec 28, 2019
Mehak Gupta, Thao-Ly T. Phan, George Datto, Timothy Bunnell, Rahmatollah Beheshti

Childhood obesity is a major public health challenge. Obesity in early childhood and adolescence can lead to obesity and other health problems in adulthood. Early prediction and identification of children at high risk of developing childhood obesity may help in engaging earlier and more effective interventions to prevent and manage this and other related health conditions. Existing predictive tools designed for childhood obesity primarily rely on traditional regression-type methods without exploiting longitudinal patterns in children's data (ignoring data temporality). In this paper, we present a machine learning model specifically designed for predicting future obesity patterns from generally available items in children's medical history. To do this, we used a large unaugmented EHR (Electronic Health Record) dataset from a major pediatric health system in the US. We adopt a general long short-term memory network architecture and train our model over both dynamic (sequential) and static (demographic) EHR data. We additionally include a set of embedding and attention layers that compute the feature ranking of each timestamp and the attention scores of each hidden layer corresponding to each input timestamp. These feature rankings and attention scores add interpretability at both the feature and timestamp levels.

* 15 pages, 3 Tables, 7 figures 

An Interpretable Prediction Model for Obesity Prediction using EHR Data

Dec 18, 2019
Mehak Gupta, Thao-Ly T. Phan, George Datto, Timothy Bunnell, Rahmatollah Beheshti

Childhood obesity is a major public health challenge. Obesity in early childhood and adolescence can lead to obesity and other health risks in adulthood. Early prediction and identification of high-risk populations can help to prevent its development, since with early identification proper interventions can be applied. In this paper, we build prediction models to predict future BMI from baseline medical history data. We used unaugmented Nemours EHR (Electronic Health Record) data as represented in the PEDSnet (a pediatric Learning Health System) common data model. We trained a variety of machine learning models to perform binary classification of obese versus non-obese for children in early childhood and during adolescence. We explored whether deep learning techniques that can model the temporal nature of EHR data improve the performance of predicting obesity compared to other machine learning techniques that ignore temporality. We also added an attention layer on top of the RNN layer in our model to compute the attention scores of each hidden layer corresponding to each input timestep. The attention score for each timestep was computed as the average of the scores given to all the features associated with that timestep. These attention scores add interpretability at both the timestep level and the level of the features associated with each timestep.

* 17 pages, 3 Tables, 7 figures 

Planning with State Abstractions for Non-Markovian Task Specifications

May 28, 2019
Yoonseon Oh, Roma Patel, Thao Nguyen, Baichuan Huang, Ellie Pavlick, Stefanie Tellex

We often specify tasks for a robot using temporal language that can also span different levels of abstraction. The example command "go to the kitchen before going to the second floor" contains spatial abstraction, given that "floor" consists of individual rooms that can also be referred to in isolation ("kitchen", for example). There is also a temporal ordering of events, defined by the word "before". Previous works have used Linear Temporal Logic (LTL) to interpret temporal language (such as "before"), and Abstract Markov Decision Processes (AMDPs) to interpret hierarchical abstractions (such as "kitchen" and "second floor"), but separately. To handle both types of commands at once, we introduce the Abstract Product Markov Decision Process (AP-MDP), a novel approach capable of representing non-Markovian reward functions at different levels of abstraction. The AP-MDP framework translates LTL into its corresponding automata, creates a product Markov Decision Process (MDP) of the LTL specification and the environment MDP, and decomposes the problem into subproblems to enable efficient planning with abstractions. AP-MDP performs faster than a non-hierarchical method of solving LTL problems in over 95% of tasks, and this number only increases as the size of the environment domain increases. We also present a neural sequence-to-sequence model trained to translate language commands into LTL expressions, and a new corpus of non-Markovian language commands spanning different levels of abstraction. We test our framework with the collected language commands on a drone, demonstrating that our approach enables a robot to efficiently solve temporal commands at different levels of abstraction.
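
The product construction at the heart of AP-MDP can be illustrated with a toy example: pair each environment state with the state of an automaton that tracks the LTL formula, and step both together. The hand-coded automaton below encodes the "kitchen before second floor" example from the abstract and is purely illustrative.

    # Toy illustration of the product construction at the core of AP-MDP: states of the
    # product MDP pair an environment state with an automaton state. The tiny hand-coded
    # automaton below stands in for the LTL formula "kitchen before second floor".
    def env_step(state, action):
        # Deterministic toy environment: the action name determines the next room.
        return {"go_kitchen": "kitchen", "go_floor2": "second_floor"}[action]

    def automaton_step(q, env_state):
        # Automaton states: 0 = nothing seen, 1 = kitchen visited, 2 = accepting, -1 = violated.
        if q == 0 and env_state == "second_floor":
            return -1                      # went upstairs before the kitchen: violation
        if q == 0 and env_state == "kitchen":
            return 1
        if q == 1 and env_state == "second_floor":
            return 2                       # kitchen first, then second floor: accepted
        return q

    def product_step(state, action):
        # One transition of the product MDP: advance the environment, then the automaton.
        s, q = state
        s_next = env_step(s, action)
        return (s_next, automaton_step(q, s_next))

    print(product_step(("start", 0), "go_kitchen"))    # ('kitchen', 1)
    print(product_step(("kitchen", 1), "go_floor2"))   # ('second_floor', 2) -> accepting
    print(product_step(("start", 0), "go_floor2"))     # ('second_floor', -1) -> violated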


Dissecting Catastrophic Forgetting in Continual Learning by Deep Visualization

Jan 07, 2020
Giang Nguyen, Shuan Chen, Thao Do, Tae Joon Jun, Ho-Jin Choi, Daeyoung Kim

Interpreting the behavior of Deep Neural Networks (usually considered black boxes) is critical, especially as they are being widely adopted across diverse aspects of human life. Drawing on advances in Explainable Artificial Intelligence, this paper proposes a novel technique called Auto DeepVis to dissect catastrophic forgetting in continual learning. A new method to deal with catastrophic forgetting, named critical freezing, is also introduced based on the findings of Auto DeepVis. Experiments on a captioning model meticulously show how catastrophic forgetting happens, particularly which components are forgetting or changing. We then assess the effectiveness of our technique: critical freezing achieves the best performance on both previous and subsequent tasks over the baselines, confirming the value of the investigation. Our techniques could not only complement existing solutions toward completely eradicating catastrophic forgetting in life-long learning but are also explainable.
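
As described, critical freezing amounts to freezing the layers identified as important to the old task before training on a new one; the sketch below uses a placeholder model and a made-up set of critical layer indices, not the paper's captioning model or its selection procedure.

    # Minimal "critical freezing" sketch: freeze the layers flagged as critical before
    # fine-tuning on a new task. The model and the flagged indices are placeholders.
    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
    )

    critical_layers = {0, 2}              # indices flagged (e.g. by a DeepVis-style analysis)

    for idx, module in enumerate(model):
        if idx in critical_layers:
            for p in module.parameters():
                p.requires_grad = False   # frozen: preserves features the old task relies on

    trainable = [name for name, p in model.named_parameters() if p.requires_grad]
    print(trainable)                      # only parameters of the non-critical layers remain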

* 8 pages, 6 figures, 3 tables 

Multi-Channel Neural Network for Assessing Neonatal Pain from Videos

Aug 25, 2019
Md Sirajus Salekin, Ghada Zamzmi, Dmitry Goldgof, Rangachar Kasturi, Thao Ho, Yu Sun

Neonates cannot articulate pain or communicate it non-verbally, for example by pointing. The current clinical standard for assessing neonatal pain is intermittent and highly subjective. This discontinuity and subjectivity can lead to inconsistent assessment and, therefore, inadequate treatment. In this paper, we propose a multi-channel deep learning framework for assessing neonatal pain from videos. The proposed framework integrates information from two pain indicators, or channels, namely facial expression and body movement, using a convolutional neural network (CNN). It also integrates temporal information using a long short-term memory (LSTM) recurrent neural network. The experimental results demonstrate the efficiency and superiority of the proposed temporal, multi-channel framework compared to existing similar methods.
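
A rough sketch of the multi-channel design described above, with per-frame CNN features from a face channel and a body channel fused and passed to an LSTM, is shown below; the backbones and dimensions are illustrative placeholders rather than the paper's exact networks.

    # Sketch of the multi-channel idea: per-frame CNN features from a face channel and a body
    # channel are fused and passed through an LSTM. Backbones and sizes are placeholders.
    import torch
    import torch.nn as nn

    class PainSketch(nn.Module):
        def __init__(self, feat_dim=256, hidden=128, num_classes=2):
            super().__init__()
            self.face_cnn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                          nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                          nn.Linear(16 * 4 * 4, feat_dim))
            self.body_cnn = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                          nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                                          nn.Linear(16 * 4 * 4, feat_dim))
            self.lstm = nn.LSTM(2 * feat_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, num_classes)

        def forward(self, face_frames, body_frames):       # each: (B, T, 3, H, W)
            b, t = face_frames.shape[:2]
            f = self.face_cnn(face_frames.flatten(0, 1)).view(b, t, -1)
            g = self.body_cnn(body_frames.flatten(0, 1)).view(b, t, -1)
            h, _ = self.lstm(torch.cat([f, g], dim=-1))     # fuse the two channels per frame
            return self.head(h[:, -1])                      # pain / no-pain logits from the last step

    model = PainSketch()
    x = torch.randn(2, 8, 3, 64, 64)
    print(model(x, x).shape)                                # torch.Size([2, 2])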

* Accepted to IEEE SMC 2019 
