Models, code, and papers for "Audrey G. Chung":

Fully-Automatic Semantic Segmentation for Food Intake Tracking in Long-Term Care Homes

Oct 24, 2019
Kaylen J Pfisterer, Robert Amelard, Audrey G Chung, Braeden Syrnyk, Alexander MacLean, Alexander Wong

Malnutrition impacts quality of life and places an annually recurring burden on the health care system. Half of older adults in long-term care (LTC) are at risk for malnutrition. Monitoring and measuring nutritional intake is paramount, yet it relies on time-consuming and subjective visual assessment, limiting the reliability of current methods. There is an opportunity for automatic image-based estimation: some progress has been made outside LTC (e.g., calories consumed, food classification); however, these methods have not been implemented in LTC, potentially because automatic segmentation methods could not be evaluated independently within the intake estimation pipeline. Here, we propose and evaluate a novel fully-automatic semantic segmentation method for pixel-level classification of food on a plate using a deep convolutional neural network (DCNN). The macroarchitecture of the DCNN is a multi-scale encoder-decoder food network (EDFN) comprising a residual encoder microarchitecture, a pyramid scene parsing decoder microarchitecture, and a specialized per-pixel food/no-food classification layer. The network was trained and validated on the pre-labelled UNIMIB 2016 food dataset (1027 tray images, 73 categories) and tested on our novel LTC plate dataset (390 plate images, 9 categories). Our fully-automatic segmentation method attained intersection over union similar to that of semi-automatic graph cuts (91.2% vs. 93.7%). Advantages of the proposed system include evaluation on a novel dataset, decoupled error analysis, and no user-initiated annotations, with similar segmentation accuracy and enhanced reliability in terms of the types of segmentation errors made. This may address several shortcomings currently limiting the utility of automated food intake tracking in time-constrained LTC and hospital settings.
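To make the macroarchitecture concrete, here is a minimal PyTorch sketch of an encoder-decoder segmentation network in the spirit of the EDFN described above: a residual encoder, a pyramid-pooling (PSP-style) decoder stage, and a per-pixel food/no-food head. The ResNet-34 backbone, channel widths, pooling bin sizes, and input resolution are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch of an EDFN-style encoder-decoder segmentation network:
# residual encoder, pyramid-pooling decoder stage, per-pixel food/no-food head.
# Backbone choice, channel widths, and pooling bins are assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet34

class PyramidPooling(nn.Module):
    """Pool the feature map at several scales, then concatenate (PSP-style)."""
    def __init__(self, in_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(in_ch, in_ch // len(bins), 1, bias=False),
                          nn.ReLU(inplace=True))
            for b in bins
        ])

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [F.interpolate(stage(x), size=(h, w), mode="bilinear",
                                align_corners=False) for stage in self.stages]
        return torch.cat([x] + pooled, dim=1)

class EDFNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = resnet34(weights=None)                       # residual encoder
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.psp = PyramidPooling(512)
        self.head = nn.Conv2d(512 * 2, 2, 1)                    # food / no-food logits

    def forward(self, x):
        feats = self.psp(self.encoder(x))
        logits = self.head(feats)
        # Upsample back to input resolution for per-pixel prediction.
        return F.interpolate(logits, size=x.shape[2:], mode="bilinear",
                             align_corners=False)

# Example: a 3-channel plate image -> per-pixel food/no-food logits.
y = EDFNSketch()(torch.randn(1, 3, 224, 224))
print(y.shape)  # torch.Size([1, 2, 224, 224])
```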


Nature vs. Nurture: The Role of Environmental Resources in Evolutionary Deep Intelligence

Feb 09, 2018
Audrey G. Chung, Paul Fieguth, Alexander Wong

Evolutionary deep intelligence synthesizes highly efficient deep neural network architectures over successive generations. Inspired by the nature-versus-nurture debate, we examine the role of external factors in the network synthesis process by varying the availability of simulated environmental resources. Experimental results were obtained for networks synthesized via asexual evolutionary synthesis (1-parent) and sexual evolutionary synthesis (2-parent, 3-parent, and 5-parent) using a 10% subset of the MNIST dataset. Results show that lower environmental-factor models resulted in a more gradual loss in performance accuracy and decrease in storage size. This potentially allows significantly reduced storage size with minimal to no drop in performance accuracy, and the best networks were synthesized using the lowest environmental-factor models.
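As an illustration of the role an environmental factor can play (not the paper's exact synthesis model), the sketch below scales the probability that a parent synapse survives into the offspring network by a simulated resource level, so scarcer environments yield sparser, smaller-storage offspring; the strength-weighted survival probability is an assumption made for this example.

```python
# Illustrative sketch (not the paper's exact model) of environment-constrained
# evolutionary synthesis: the probability that a synapse survives into the
# offspring network is scaled by the available "resources".
import numpy as np

rng = np.random.default_rng(0)

def synthesize_offspring(parent_weights, env_factor):
    """Sample an offspring weight matrix from a parent.

    env_factor in (0, 1]: lower values simulate scarcer environmental
    resources and therefore fewer synthesized synapses.
    """
    # Assumption: stronger parent synapses are more likely to be inherited.
    strength = np.abs(parent_weights) / (np.abs(parent_weights).max() + 1e-12)
    survival_prob = env_factor * strength
    mask = rng.random(parent_weights.shape) < survival_prob
    return parent_weights * mask

parent = rng.normal(size=(256, 256))
for env in (1.0, 0.7, 0.4):
    child = synthesize_offspring(parent, env)
    density = (child != 0).mean()
    print(f"env_factor={env:.1f} -> surviving synapses: {density:.1%}")
```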


EdgeSpeechNets: Highly Efficient Deep Neural Networks for Speech Recognition on the Edge

Oct 18, 2018
Zhong Qiu Lin, Audrey G. Chung, Alexander Wong

Despite showing state-of-the-art performance, deep learning-based speech recognition remains challenging to deploy in on-device edge scenarios such as mobile phones and other consumer devices. Recently, there have been greater efforts in the design of small, low-footprint deep neural networks (DNNs) that are more appropriate for edge devices, with much of the focus on design principles for hand-crafting efficient network architectures. In this study, we explore a human-machine collaborative design strategy for building low-footprint DNN architectures for speech recognition through a marriage of human-driven principled network design prototyping and machine-driven design exploration. The efficacy of this design strategy is demonstrated through the design of a family of highly efficient DNNs (nicknamed EdgeSpeechNets) for limited-vocabulary speech recognition. Experimental results using the Google Speech Commands dataset for limited-vocabulary speech recognition showed that EdgeSpeechNets achieve higher accuracies than state-of-the-art DNNs (with the best EdgeSpeechNet achieving ~97% accuracy), while attaining significantly smaller network sizes (as much as 7.8x smaller) and lower computational cost (as much as 36x fewer multiply-add operations, 10x lower prediction latency, and 16x smaller memory footprint on a Motorola Moto E phone), making them well-suited for on-device edge voice interface applications.
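For context on the low-footprint design target, here is a deliberately tiny keyword-spotting CNN over MFCC-style inputs. It is not one of the published EdgeSpeechNet architectures (those were produced by machine-driven design exploration); the input shape, layer widths, and the 12-keyword output are assumptions for illustration.

```python
# A deliberately small keyword-spotting CNN over MFCC-style inputs, meant only
# to illustrate the low-footprint design target for on-device inference.
import torch
import torch.nn as nn

class TinyKWS(nn.Module):
    def __init__(self, n_keywords=12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_keywords)

    def forward(self, x):                       # x: (batch, 1, n_mfcc, n_frames)
        return self.classifier(self.features(x).flatten(1))

model = TinyKWS()
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params}")                # a few tens of thousands
logits = model(torch.randn(2, 1, 40, 98))       # e.g. 40 MFCCs x ~1 s of frames
print(logits.shape)                             # torch.Size([2, 12])
```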

* 4 pages 

A new take on measuring relative nutritional density: The feasibility of using a deep neural network to assess commercially-prepared pureed food concentrations

Nov 03, 2017
Kaylen J. Pfisterer, Robert Amelard, Audrey G. Chung, Alexander Wong

Dysphagia affects 590 million people worldwide and increases the risk for malnutrition. Pureed food may reduce choking; however, preparation differences impact nutrient density, making quality assurance necessary. This is the first study to investigate the feasibility of computational pureed food nutritional density analysis using an imaging system. Motivated by a theoretical optical dilution model, a novel deep neural network (DNN) was evaluated using 390 samples from thirteen types of commercially prepared purees at five dilutions. The DNN predicted the relative concentration of the puree sample (20%, 40%, 60%, 80%, or 100% of initial concentration). Data were captured using same-side reflectance multispectral imaging at different polarizations and three exposures. Experimental results yielded an average top-1 prediction accuracy of 92.2+/-0.41%, with sensitivity and specificity of 83.0+/-15.0% and 95.0+/-4.8%, respectively. This DNN imaging system for nutrient density analysis of pureed food shows promise as a novel tool for nutrient quality assurance.
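A minimal sketch of the prediction task, assuming per-sample reflectance measurements have been reduced to a fixed-length feature vector: a small network maps the measurement channels to one of the five relative concentrations. The channel count (24) and layer widths are hypothetical; the paper's DNN and preprocessing may differ.

```python
# Sketch of a five-class concentration classifier over per-sample reflectance
# features. Channel count and layer widths are assumptions for illustration.
import torch
import torch.nn as nn

N_CHANNELS = 24   # assumed: spectral bands x polarizations x exposures
N_CLASSES = 5     # 20%, 40%, 60%, 80%, 100% of initial concentration

model = nn.Sequential(
    nn.Linear(N_CHANNELS, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, N_CLASSES),
)

x = torch.randn(8, N_CHANNELS)        # 8 puree samples, per-channel reflectance
probs = model(x).softmax(dim=1)
print(probs.argmax(dim=1))            # predicted concentration class per sample
```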


Discovery Radiomics via Evolutionary Deep Radiomic Sequencer Discovery for Pathologically-Proven Lung Cancer Detection

Oct 20, 2017
Mohammad Javad Shafiee, Audrey G. Chung, Farzad Khalvati, Masoom A. Haider, Alexander Wong

Although lung cancer is the second most commonly diagnosed form of cancer in men and women, a sufficiently early diagnosis can be pivotal for patient survival. Imaging-based, or radiomics-driven, detection methods have been developed to aid diagnosticians, but they largely rely on hand-crafted features that may not fully encapsulate the differences between cancerous and healthy tissue. Recently, the concept of discovery radiomics was introduced, where custom abstract features are discovered from readily available imaging data. We propose a novel evolutionary deep radiomic sequencer discovery approach based on evolutionary deep intelligence. Motivated by patient privacy concerns and the idea of operational artificial intelligence, the approach organically evolves increasingly efficient deep radiomic sequencers that produce significantly more compact yet similarly descriptive radiomic sequences over multiple generations. As a result, this framework improves operational efficiency and enables diagnosis to be run locally at the radiologist's computer while maintaining detection accuracy. We evaluated the evolved deep radiomic sequencer (EDRS) discovered via the proposed framework against state-of-the-art radiomics-driven and discovery radiomics methods using clinical lung CT data with pathologically-proven diagnostic data from the LIDC-IDRI dataset. The evolved deep radiomic sequencer shows improved sensitivity (93.42%), specificity (82.39%), and diagnostic accuracy (88.78%) relative to previous radiomics approaches.
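To illustrate the compactness trend only (not the paper's probabilistic synthesis procedure), the sketch below takes a small convolutional "sequencer" and zeroes out its weakest synapses generation by generation, reporting how the number of surviving synapses shrinks; the per-generation survival rate and layer shapes are assumptions.

```python
# Illustration of progressively more compact sequencers over generations.
# The actual evolutionary deep intelligence synthesis is probabilistic and
# more involved; this only shows the compactness trend.
import torch
import torch.nn as nn

sequencer = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 32),                          # 32-D radiomic sequence
)

keep_fraction = 0.8                             # assumed per-generation survival rate
for generation in range(1, 6):
    target_keep = keep_fraction ** generation   # compounding compactness
    for module in sequencer.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight.data
            k = max(1, int((1 - target_keep) * w.numel()))
            threshold = w.abs().flatten().kthvalue(k).values
            w.mul_((w.abs() > threshold).float())   # zero out weakest synapses
    nonzero = sum((m.weight != 0).sum().item()
                  for m in sequencer.modules()
                  if isinstance(m, (nn.Conv2d, nn.Linear)))
    print(f"generation {generation}: {nonzero} non-zero synapses")
```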

* 26 pages 

ProstateGAN: Mitigating Data Bias via Prostate Diffusion Imaging Synthesis with Generative Adversarial Networks

Nov 21, 2018
Xiaodan Hu, Audrey G. Chung, Paul Fieguth, Farzad Khalvati, Masoom A. Haider, Alexander Wong

Generative Adversarial Networks (GANs) have shown considerable promise for mitigating the challenge of data scarcity when building machine learning-driven analysis algorithms. Specifically, a number of studies have shown that GAN-based image synthesis for data augmentation can aid in improving classification accuracy in a number of medical image analysis tasks, such as brain and liver image analysis. However, the efficacy of leveraging GANs for tackling prostate cancer analysis has not been previously explored. Motivated by this, in this study we introduce ProstateGAN, a GAN-based model for synthesizing realistic prostate diffusion imaging data. More specifically, in order to generate new diffusion imaging data corresponding to a particular cancer grade (Gleason score), we propose a conditional deep convolutional GAN architecture that takes Gleason scores into consideration during the training process. Experimental results show that high-quality synthetic prostate diffusion imaging data can be generated using the proposed ProstateGAN for specified Gleason scores.
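As a sketch of the conditioning mechanism described above (a conditional deep convolutional GAN that takes Gleason scores into account), here is a generator that concatenates a learned grade embedding with the noise vector before the transposed-convolution stack. The output size, channel counts, number of grade classes, and embedding scheme are assumptions, not ProstateGAN's published configuration.

```python
# Hedged sketch of a conditional DCGAN-style generator conditioned on an
# encoded Gleason grade; sizes and the embedding scheme are assumptions.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, z_dim=100, n_grades=5, embed_dim=16, img_ch=1):
        super().__init__()
        self.embed = nn.Embedding(n_grades, embed_dim)   # Gleason-grade embedding
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim + embed_dim, 256, 4, 1, 0, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(True),          # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
            nn.BatchNorm2d(128), nn.ReLU(True),          # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
            nn.BatchNorm2d(64), nn.ReLU(True),           # 16x16
            nn.ConvTranspose2d(64, img_ch, 4, 2, 1, bias=False),
            nn.Tanh(),                                   # 32x32 synthetic patch
        )

    def forward(self, z, grade):
        cond = torch.cat([z, self.embed(grade)], dim=1)
        return self.net(cond.unsqueeze(-1).unsqueeze(-1))

gen = ConditionalGenerator()
z = torch.randn(4, 100)
grades = torch.randint(0, 5, (4,))       # encoded Gleason grade classes
fake = gen(z, grades)
print(fake.shape)                        # torch.Size([4, 1, 32, 32])
```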

* Machine Learning for Health (ML4H) Workshop at NeurIPS 2018; arXiv:1811.07216 

Discovery Radiomics for Pathologically-Proven Computed Tomography Lung Cancer Prediction

Mar 28, 2017
Devinder Kumar, Mohammad Javad Shafiee, Audrey G. Chung, Farzad Khalvati, Masoom A. Haider, Alexander Wong

Lung cancer is the leading cause of cancer-related deaths. As such, there is an urgent need for a streamlined process that allows radiologists to provide diagnoses with greater efficiency and accuracy. A powerful tool for this is radiomics: a high-dimensional imaging feature set. In this study, we take the idea of radiomics one step further by introducing the concept of discovery radiomics for lung cancer prediction using CT imaging data, in which custom radiomic sequencers are discovered directly from the available imaging data; we realize these sequencers as deep convolutional sequencers using a deep convolutional neural network learning architecture. To illustrate the prognostic power and effectiveness of the radiomic sequences produced by the discovered sequencer, we perform cancer prediction between malignant and benign lesions from 97 patients using the pathologically-proven diagnostic data from the LIDC-IDRI dataset. Using the clinically provided pathologically-proven data as ground truth, the proposed framework achieved an average accuracy of 77.52% via 10-fold cross-validation, with a sensitivity of 79.06% and specificity of 76.11%, surpassing the state-of-the-art method.
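A minimal sketch of the two pieces this pipeline needs: a convolutional radiomic sequencer that maps a CT lesion patch to an abstract feature sequence, and a small classifier that predicts malignant vs. benign from that sequence. Patch size, sequence length, and layer widths are illustrative assumptions rather than the discovered architecture.

```python
# Stand-in for a discovered radiomic sequencer plus a benign/malignant head.
# Patch size, sequence length, and widths are assumptions for illustration.
import torch
import torch.nn as nn

sequencer = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),                     # 64-D radiomic sequence per lesion
)
classifier = nn.Sequential(nn.ReLU(), nn.Linear(64, 2))   # benign vs. malignant

patches = torch.randn(16, 1, 64, 64)       # batch of CT lesion patches
logits = classifier(sequencer(patches))
print(logits.shape)                        # torch.Size([16, 2])
```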

* 8 pages 

Discovery Radiomics via StochasticNet Sequencers for Cancer Detection

Nov 11, 2015
Mohammad Javad Shafiee, Audrey G. Chung, Devinder Kumar, Farzad Khalvati, Masoom Haider, Alexander Wong

Radiomics has proven to be a powerful prognostic tool for cancer detection, and has previously been applied in lung, breast, prostate, and head-and-neck cancer studies with great success. However, these radiomics-driven methods rely on pre-defined, hand-crafted radiomic feature sets that can limit their ability to characterize unique cancer traits. In this study, we introduce a novel discovery radiomics framework where we directly discover custom radiomic features from the wealth of available medical imaging data. In particular, we leverage novel StochasticNet radiomic sequencers for extracting custom radiomic features tailored to characterizing unique cancer tissue phenotypes. Using StochasticNet radiomic sequencers discovered from a wealth of lung CT data, we perform binary classification on 42,340 lung lesions obtained from the CT scans of 93 patients in the LIDC-IDRI dataset. Preliminary results show significant improvement over previous state-of-the-art methods, indicating the potential of the proposed discovery radiomics framework for improving cancer screening and diagnosis.
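The defining property of a StochasticNet layer is stochastically realized connectivity. The sketch below shows one way to model that: a convolutional layer whose sparse connectivity mask is sampled once at construction and then held fixed. The sparsity level and layer shape are assumptions for illustration; the discovered sequencers' actual connectivity statistics are not reproduced here.

```python
# Sketch of stochastically realized convolutional connectivity: a fixed,
# randomly sampled sparse mask decides which synapses exist.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, keep_prob=0.5):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        # Sample the sparse connectivity once at construction and keep it fixed.
        self.register_buffer(
            "mask", (torch.rand_like(self.conv.weight) < keep_prob).float())

    def forward(self, x):
        # Only the stochastically realized synapses contribute to the response.
        return F.conv2d(x, self.conv.weight * self.mask, self.conv.bias,
                        padding=self.conv.padding)

layer = StochasticConv2d(1, 8, keep_prob=0.5)
print(f"realized synapses: {layer.mask.mean().item():.0%}")   # roughly 50%
out = layer(torch.randn(1, 1, 32, 32))
print(out.shape)                                              # torch.Size([1, 8, 32, 32])
```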

* 3 pages 

Discovery Radiomics for Multi-Parametric MRI Prostate Cancer Detection

Oct 20, 2015
Audrey G. Chung, Mohammad Javad Shafiee, Devinder Kumar, Farzad Khalvati, Masoom A. Haider, Alexander Wong

Prostate cancer is the most commonly diagnosed form of cancer in Canadian men, and is the third leading cause of cancer death. Despite these statistics, prognosis is relatively good with a sufficiently early diagnosis, making fast and reliable prostate cancer detection crucial. As imaging-based prostate cancer screening, such as magnetic resonance imaging (MRI), requires an experienced medical professional to extensively review the data and perform a diagnosis, radiomics-driven methods help streamline the process and have the potential to significantly improve diagnostic accuracy and efficiency, thus improving patient survival rates. These radiomics-driven methods currently rely on hand-crafted sets of quantitative imaging-based features, which are selected manually and can limit their ability to fully characterize unique prostate cancer tumour phenotype. In this study, we propose a novel discovery radiomics framework for generating custom radiomic sequences tailored for prostate cancer detection. Discovery radiomics aims to uncover abstract imaging-based features that capture highly unique tumour traits and characteristics beyond what can be captured using predefined feature models. In this paper, we discover new custom radiomic sequencers for generating new prostate radiomic sequences using multi-parametric MRI data. We evaluated the performance of the discovered radiomic sequencer against a state-of-the-art hand-crafted radiomic sequencer for computer-aided prostate cancer detection with a feedforward neural network using real clinical prostate multi-parametric MRI data. Results for the discovered radiomic sequencer demonstrate good performance in prostate cancer detection and clinical decision support relative to the hand-crafted radiomic sequencer. The use of discovery radiomics shows potential for more efficient and reliable automatic prostate cancer detection.
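Sketch of the pipeline under assumed dimensions: a radiomic sequencer over co-registered multi-parametric MRI channels (T2-weighted, ADC, and DWI are assumed here) feeding a feedforward neural network that scores candidate regions for cancer. The channel set, region size, and layer widths are hypothetical and not the discovered sequencer.

```python
# Sketch: multi-parametric MRI radiomic sequencer + feedforward detector.
# Channel set, region size, and widths are assumptions for illustration.
import torch
import torch.nn as nn

N_MODALITIES = 3       # assumed: T2-weighted, ADC, DWI channels

sequencer = nn.Sequential(
    nn.Conv2d(N_MODALITIES, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),      # 32-D radiomic sequence
)
detector = nn.Sequential(                       # feedforward classifier
    nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 2),
)

regions = torch.randn(10, N_MODALITIES, 32, 32)   # candidate prostate regions
scores = detector(sequencer(regions)).softmax(dim=1)
print(scores[:, 1])                               # cancer probability per region
```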

* 8 pages 
