Models, code, and papers for "Kamran Ali":

Facial Expression Recognition Using Disentangled Adversarial Learning

Sep 28, 2019
Kamran Ali, Charles E. Hughes

The representation used for Facial Expression Recognition (FER) usually contains expression information along with other variations such as identity and illumination. In this paper, we propose a novel Disentangled Expression learning-Generative Adversarial Network (DE-GAN) to explicitly disentangle the facial expression representation from identity information. In this learning-by-reconstruction method, the facial expression representation is learned by reconstructing an expression image with an encoder-decoder based generator. The expression representation is disentangled from the identity component by explicitly providing the identity code to the decoder part of DE-GAN. The processes of expression image reconstruction and disentangled expression representation learning are improved by performing expression and identity classification in the discriminator of DE-GAN. The disentangled facial expression representation is then used for facial expression recognition with simple classifiers such as SVM or MLP. Experiments are performed on publicly available and widely used facial expression databases (CK+, MMI, Oulu-CASIA). The experimental results show that the proposed technique produces results comparable with state-of-the-art methods.
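
As an illustration of the architecture described above, the sketch below shows one way the pieces could fit together: an encoder producing an expression code, a decoder that reconstructs the image conditioned on an explicit identity code, and a discriminator with adversarial, expression and identity heads. All layer shapes and class counts are assumptions for the sketch, not the authors' released model.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, expr_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, expr_dim),
        )

    def forward(self, x):
        return self.net(x)          # disentangled expression representation

class Decoder(nn.Module):
    def __init__(self, expr_dim=64, num_ids=100):
        super().__init__()
        self.fc = nn.Linear(expr_dim + num_ids, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, expr_code, id_onehot):
        # identity is injected explicitly so the expression code need not carry it
        h = self.fc(torch.cat([expr_code, id_onehot], dim=1)).view(-1, 64, 8, 8)
        return self.net(h)

class Discriminator(nn.Module):
    def __init__(self, num_expr=7, num_ids=100):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.real_fake = nn.Linear(64, 1)          # adversarial head
        self.expr_head = nn.Linear(64, num_expr)   # expression classification
        self.id_head = nn.Linear(64, num_ids)      # identity classification

    def forward(self, img):
        h = self.features(img)
        return self.real_fake(h), self.expr_head(h), self.id_head(h)
```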


Facial Expression Recognition Using Human to Animated-Character Expression Translation

Oct 12, 2019
Kamran Ali, Ilkin Isler, Charles Hughes

Facial expression recognition is a challenging task due to two major problems: the presence of inter-subject variations in facial expression recognition datasets and impure expressions posed by human subjects. In this paper, we present a novel Human-to-Animation conditional Generative Adversarial Network (HA-GAN) to overcome these two problems by using a many (human faces) to one (animated face) mapping. Specifically, for any given input human expression image, our HA-GAN transfers the expression information from the input image to a fixed animated identity. Stylized animated characters from the Facial Expression Research Group Database (FERG-DB) are used for the generation of the fixed identity. By learning this many-to-one identity mapping function with our proposed HA-GAN, the effect of inter-subject variations can be reduced in Facial Expression Recognition (FER). We also argue that the expressions in the generated animated images are pure expressions, and since FER is performed on these generated images, recognition performance improves. Our initial experimental results on state-of-the-art datasets show that facial expression recognition carried out on the animated images generated by our HA-GAN framework outperforms the baseline deep neural network and produces results comparable to, or even better than, state-of-the-art methods for facial expression recognition.

* 8 Pages 
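
Again as a hedged sketch rather than the paper's code, the snippet below mimics the inference pipeline: a trained generator maps any human expression image onto one fixed animated identity, and an ordinary classifier then predicts the expression from the generated image. The tiny modules are untrained stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(                 # human face -> fixed animated face
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
)
fer_classifier = nn.Sequential(            # expression recognition on the generated image
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 7),                      # e.g. 7 basic expression classes
)

human_faces = torch.randn(4, 3, 64, 64)    # a batch of (preprocessed) face crops
with torch.no_grad():
    animated = generator(human_faces)      # many identities -> one animated identity
    logits = fer_classifier(animated)
print(logits.argmax(dim=1))                # predicted expression labels
```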

Efficient Yet Deep Convolutional Neural Networks for Semantic Segmentation

Jul 28, 2018
Sharif Amit Kamran, Ali Shihab Sabbir

Semantic segmentation using deep convolutional neural networks poses a complex challenge for any GPU-intensive task: because millions of parameters must be computed, memory consumption is huge. Moreover, extracting finer features and conducting supervised training tend to increase the complexity. Since the introduction of the Fully Convolutional Network, which uses finer strides and deconvolutional layers for upsampling, it has been a go-to architecture for image segmentation tasks. In this paper, we propose two segmentation architectures that not only need one-third of the parameters to compute but also give better accuracy than similar architectures. The model weights were transferred from popular networks such as VGG19 and VGG16, which were trained on the ImageNet classification dataset. We then transform all the fully connected layers into convolutional layers and use dilated convolution to decrease the parameter count. Lastly, we add finer strides and attach four skip architectures that are element-wise summed with the deconvolutional layers in steps. We train and test on sparse and fine datasets such as PASCAL VOC 2012, PASCAL-Context and NYUDv2 and show how much better our model performs on these tasks. Our model also has faster inference time and consumes less memory for training and testing on NVIDIA Pascal GPUs, making it a more efficient and less memory-consuming architecture for pixel-wise segmentation.

* 8 pages 
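
A hedged sketch of the key ingredients named above, assuming a VGG16 backbone: fully connected layers recast as (dilated) convolutions, and a skip connection element-wise summed with upsampled (deconvolution) score maps. The layer sizes, the single skip shown here (the paper attaches four), and the class count are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DilatedFCNHead(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        # the VGG "fc6"/"fc7" layers re-expressed as convolutions, with a
        # dilated 3x3 kernel in place of the huge dense layers
        self.fc_as_conv = nn.Sequential(
            nn.Conv2d(512, 1024, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(1024, 1024, 1), nn.ReLU(),
        )
        self.score = nn.Conv2d(1024, num_classes, 1)
        self.score_skip = nn.Conv2d(512, num_classes, 1)   # skip from the pool4 stage
        self.up2 = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up16 = nn.ConvTranspose2d(num_classes, num_classes, 32, stride=16, padding=8)

    def forward(self, pool5_feat, pool4_feat):
        x = self.up2(self.score(self.fc_as_conv(pool5_feat)))  # upsample deep scores
        x = x + self.score_skip(pool4_feat)                    # element-wise skip sum
        return self.up16(x)                                    # back to input resolution

head = DilatedFCNHead()
pool5 = torch.randn(1, 512, 16, 16)   # VGG16 pool5 features for a 512x512 input
pool4 = torch.randn(1, 512, 32, 32)   # pool4 features used as the skip connection
print(head(pool5, pool4).shape)       # torch.Size([1, 21, 512, 512])
```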

Total Recall: Understanding Traffic Signs using Deep Hierarchical Convolutional Neural Networks

Oct 26, 2018
Sourajit Saha, Sharif Amit Kamran, Ali Shihab Sabbir

Recognizing traffic signs using intelligent systems can drastically reduce the number of accidents happening worldwide. With the arrival of self-driving cars, automatic recognition of traffic and hand-held signs on major streets has become a staple challenge. Various machine learning techniques such as Random Forest and SVM, as well as deep learning models, have been proposed for classifying traffic signs. Although they reach state-of-the-art performance on a particular dataset, they fall short of tackling multiple traffic sign recognition benchmarks. In this paper, we propose a novel, one-for-all architecture that aces multiple benchmarks with a better overall score than state-of-the-art architectures. Our model is made of residual convolutional blocks with hierarchical dilated skip connections joined in steps. With this we score 99.33% accuracy on the German traffic sign recognition benchmark and 99.17% accuracy on the Belgian traffic sign classification benchmark. Moreover, we propose a newly devised dilated residual learning representation technique that is very low in both memory and computational complexity.
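
A minimal sketch, under assumptions, of a residual block with a dilated skip path in the spirit of the "residual convolutional blocks with hierarchical dilated skip connections" mentioned above; the exact block layout, channel counts and dilation rates in the paper may differ.

```python
import torch
import torch.nn as nn

class DilatedResidualBlock(nn.Module):
    def __init__(self, channels=64, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.BatchNorm2d(channels),
        )
        # a dilated convolution on the skip path enlarges the receptive field
        # without adding many parameters
        self.skip = nn.Conv2d(channels, channels, 3, padding=dilation, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.body(x) + self.skip(x))

block = DilatedResidualBlock()
x = torch.randn(1, 64, 32, 32)
print(block(x).shape)   # torch.Size([1, 64, 32, 32])
```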


Optic-Net: A Novel Convolutional Neural Network for Diagnosis of Retinal Diseases from Optical Tomography Images

Oct 13, 2019
Sharif Amit Kamran, Sourajit Saha, Ali Shihab Sabbir, Alireza Tavakkoli

Diagnosing different retinal diseases from Spectral Domain Optical Coherence Tomography (SD-OCT) images is a challenging task. Different automated approaches such as image processing, machine learning and deep learning algorithms have been used for early detection and diagnosis of retinal diseases. Unfortunately, these are prone to error and computational inefficiency, which requires further intervention from human experts. In this paper, we propose a novel convolutional neural network architecture to successfully distinguish between different degenerations of retinal layers and their underlying causes. The proposed architecture outperforms other classification models while addressing the issue of gradient explosion. Our approach reaches near-perfect accuracy of 99.8% and 100% on two separately available retinal SD-OCT datasets, respectively. Additionally, our architecture predicts retinal diseases in real time while outperforming human diagnosticians.

* 8 pages. Accepted to 18th IEEE International Conference on Machine Learning and Applications (ICMLA 2019) 
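
The abstract notes that the architecture itself addresses gradient explosion. Purely as a generic illustration (not the paper's mechanism), the snippet below shows the common training-time safeguard of gradient-norm clipping for an OCT image classifier; the model, data and class count are placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 4),                      # e.g. 4 retinal disease classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 128, 128)       # stand-in SD-OCT B-scans
labels = torch.randint(0, 4, (8,))

optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # cap the gradient norm
optimizer.step()
```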

Brain MRI Segmentation using Rule-Based Hybrid Approach

Feb 12, 2019
Mustansar Fiaz, Kamran Ali, Abdul Rehman, M. Junaid Gul, Soon Ki Jung

Medical image segmentation, a substantial component of image processing, plays a significant role in analyzing gross anatomy, locating an infirmity and planning surgical procedures. Segmentation of brain Magnetic Resonance Imaging (MRI) is of considerable importance for accurate diagnosis. However, precise and accurate segmentation of brain MRI is a challenging task. Here, we present an efficient framework for segmentation of brain MR images. For this purpose, the Gabor transform method is used to compute features of brain MRI. These features are then classified using four different classifiers, i.e., Incremental Supervised Neural Network (ISNN), K-Nearest Neighbor (KNN), Probabilistic Neural Network (PNN), and Support Vector Machine (SVM). The performance of these classifiers is investigated over different brain MR images, and the variation in their performance is observed for different brain tissues. We therefore propose a rule-based hybrid approach to segment brain MRI. Experimental results show that the performance of the classifiers varies over each tissue and that the proposed rule-based hybrid approach exhibits better segmentation of brain MRI tissues.

* 8 figures 
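
A rough sketch, not the paper's implementation, of the pipeline described above: Gabor features are computed per pixel, several classifiers are trained on labeled pixels, and a simple agreement rule stands in for the rule-based fusion. ISNN and PNN are omitted; the image and labels are synthetic placeholders.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
image = rng.random((64, 64))                      # stand-in for a brain MR slice
labels = (image > 0.5).astype(int).ravel()        # stand-in tissue labels per pixel

# Gabor responses at a few frequencies/orientations as per-pixel features
feats = []
for freq in (0.1, 0.2, 0.3):
    for theta in (0.0, np.pi / 4, np.pi / 2):
        real, imag = gabor(image, frequency=freq, theta=theta)
        feats.append(np.hypot(real, imag).ravel())
X = np.stack(feats, axis=1)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
svm = SVC().fit(X, labels)

# Toy fusion rule: keep predictions the classifiers agree on, fall back to SVM
pred_knn, pred_svm = knn.predict(X), svm.predict(X)
segmentation = np.where(pred_knn == pred_svm, pred_knn, pred_svm).reshape(image.shape)
```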

Diagnosis of Celiac Disease and Environmental Enteropathy on Biopsy Images Using Color Balancing on Convolutional Neural Networks

Apr 24, 2019
Kamran Kowsari, Rasoul Sali, Marium N. Khan, William Adorno, S. Asad Ali, Sean R. Moore, Beatrice C. Amadi, Paul Kelly, Sana Syed, Donald E. Brown

Celiac Disease (CD) and Environmental Enteropathy (EE) are common causes of malnutrition and adversely impact normal childhood development. CD is an autoimmune disorder that is prevalent worldwide and is caused by an increased sensitivity to gluten. Gluten exposure disrupts the small intestinal epithelial barrier, resulting in nutrient malabsorption and childhood under-nutrition. EE also results in barrier dysfunction but is thought to be caused by an increased vulnerability to infections. EE has been implicated as the predominant cause of under-nutrition, oral vaccine failure, and impaired cognitive development in low- and middle-income countries. Both conditions require a tissue biopsy for diagnosis, and a major challenge in interpreting clinical biopsy images to differentiate between these gastrointestinal diseases is the striking histopathologic overlap between them. In the current study, we propose a convolutional neural network (CNN) to classify duodenal biopsy images from subjects with CD, EE, and healthy controls. We evaluated the performance of our proposed model using a large cohort containing 1000 biopsy images. Our evaluations show that the proposed model achieves an area under the ROC curve of 0.99, 1.00, and 0.97 for CD, EE, and healthy controls, respectively. These results demonstrate the discriminative power of the proposed model in duodenal biopsy classification.
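
The title refers to color balancing of biopsy images before CNN training. As a hedged illustration, the snippet below applies simple gray-world color balancing with NumPy; the balancing procedure actually used in the paper may differ.

```python
import numpy as np

def gray_world_balance(img):
    """Scale each RGB channel so its mean matches the overall image mean."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    scale = channel_means.mean() / channel_means
    return np.clip(img * scale, 0, 255).astype(np.uint8)

# Example on a synthetic image with a strong color cast
rng = np.random.default_rng(0)
biopsy = (rng.random((224, 224, 3)) * [255, 180, 120]).astype(np.uint8)
balanced = gray_world_balance(biopsy)
print(biopsy.reshape(-1, 3).mean(axis=0), balanced.reshape(-1, 3).mean(axis=0))
```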


Comparison three methods of clustering: k-means, spectral clustering and hierarchical clustering

Nov 13, 2014
Kamran Kowsari

This paper compares three kinds of clustering, derives their cost and loss functions, and calculates them. The error rate of a clustering method, and how to calculate that error percentage, is always an important factor in evaluating clustering methods, so this paper introduces one way to calculate the error rate of clustering methods. Clustering algorithms can be divided into several categories, including partitioning algorithms, hierarchical algorithms and density-based algorithms. Generally speaking, clustering algorithms should be compared on criteria such as scalability, the ability to work with different attribute types, the ability to discover clusters of arbitrary shape, minimal required knowledge to set the input parameters, robustness to noise and outliers, insensitivity to the order of the input data, and the ability to handle high dimensionality. K-means is one of the simplest approaches to clustering, and clustering is an unsupervised problem.

* This paper has been withdrawn by the author in order to improve it and add more results 
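
A small, hedged example of the comparison described above, using scikit-learn implementations of k-means, spectral clustering and (agglomerative) hierarchical clustering, with an error rate computed against known labels via the best label permutation; the dataset and settings are illustrative, not those of the paper.

```python
import numpy as np
from itertools import permutations
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, SpectralClustering, AgglomerativeClustering

X, y_true = make_blobs(n_samples=300, centers=3, random_state=0)

def error_rate(y_true, y_pred, k=3):
    """Smallest misclassification rate over all relabelings of the clusters."""
    return min(np.mean(np.array([p[c] for c in y_pred]) != y_true)
               for p in permutations(range(k)))

models = {
    "k-means": KMeans(n_clusters=3, n_init=10, random_state=0),
    "spectral": SpectralClustering(n_clusters=3, random_state=0),
    "hierarchical": AgglomerativeClustering(n_clusters=3),
}
for name, model in models.items():
    y_pred = model.fit_predict(X)
    print(f"{name}: error rate = {error_rate(y_true, y_pred):.3f}")
```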

A Brief Introduction to Temporality and Causality

Jul 14, 2010
Kamran Karimi

Causality is a non-obvious concept that is often considered to be related to temporality. In this paper we present a number of past and present approaches to the definition of temporality and causality from philosophical, physical, and computational points of view. We note that time is an important ingredient in many relationships and phenomena. The topic is then divided into the two main areas of temporal discovery, which is concerned with finding relations that are stretched over time, and causal discovery, where a claim is made as to the causal influence of certain events on others. We present a number of computational tools used for attempting to automatically discover temporal and causal relations in data.


Concurrent Flow-Based Localization and Mapping in Time-Invariant Flow Fields

Oct 15, 2019
Zhuoyuan Song, Kamran Mohseni

We present the concept of concurrent flow-based localization and mapping (FLAM) for autonomous field robots navigating within background flows. Different from the classical simultaneous localization and mapping (SLAM) problem, where the robot interacts with discrete features, FLAM utilizes the continuous flow fields as navigation references for mobile robots and provides flow field mapping capability with in-situ flow velocity observations. This approach is of importance to underwater vehicles in mid-depth oceans or aerial vehicles in GPS-denied atmospheric circulations. This article introduces the formulation of FLAM as a full SLAM solution motivated by the feature-based GraphSLAM framework. The performance of FLAM was demonstrated through simulation within artificial flow fields that represent typical geophysical circulation phenomena: a steady single-gyre flow field and a double-gyre flow field with unsteady turbulent perturbations. The results indicate that FLAM provides significant improvements in the robots' localization accuracy and a consistent approximation of the background flow field. It is also shown that FLAM leads to smooth robot trajectory estimates.
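
The gyre flow fields mentioned above are commonly modeled with the analytic double-gyre velocity field; the sketch below evaluates that standard formula with typical parameter values (assumed here, not necessarily those of the paper). Setting eps = 0 makes the field time-invariant.

```python
import numpy as np

def double_gyre_velocity(x, y, t, A=0.1, eps=0.25, omega=2 * np.pi / 10):
    """Velocity (u, v) of the classic double-gyre on the domain [0,2]x[0,1]."""
    a = eps * np.sin(omega * t)
    b = 1.0 - 2.0 * eps * np.sin(omega * t)
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

# Sample the field on a coarse grid at t = 0
xs, ys = np.meshgrid(np.linspace(0, 2, 5), np.linspace(0, 1, 3))
u, v = double_gyre_velocity(xs, ys, t=0.0)
print(u.round(3))
print(v.round(3))
```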


Sentiment Classification of Customer Reviews about Automobiles in Roman Urdu

Dec 30, 2018
Moin Khan, Kamran Malik

Text mining is a broad field that includes sentiment mining as an important constituent, in which we try to deduce people's attitudes towards a specific item, merchandise, politics, sports, social media comments, review sites, etc. Among the many issues in sentiment mining, analysis and classification, one major issue is that reviews and comments can be in different languages such as English, Arabic and Urdu, and handling each language according to its rules is a difficult task. A lot of research work has been done on sentiment analysis and classification for English, but limited sentiment analysis work has been carried out on other regional languages such as Arabic, Urdu and Hindi. In this paper, the Waikato Environment for Knowledge Analysis (WEKA) is used as a platform to execute different classification models for text classification of Roman Urdu text. The reviews dataset was scraped from different automobile sites. These extracted Roman Urdu reviews, containing 1000 positive and 1000 negative reviews, were then saved in the WEKA attribute-relation file format (ARFF) as labeled examples. Training is done on 80% of the data and the rest is used for testing with different models, and the results are analyzed in each case. The results show that Multinomial Naive Bayes outperformed Bagging, Deep Neural Network, Decision Tree, Random Forest, AdaBoost, k-NN and SVM classifiers in terms of accuracy, precision, recall and F-measure.

* Advances in Intelligent Systems and Computing, vol 887 (2018) 630-640 
* This is a pre-print of a contribution published in Advances in Intelligent Systems and Computing (editors: Kohei Arai, Supriya Kapoor and Rahul Bhatia) published by Springer, Cham. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-03405-4_44 
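
The experiments above were run in WEKA. As a hedged, roughly equivalent Python sketch, the snippet below trains a Multinomial Naive Bayes text classifier on an 80/20 split with scikit-learn; the two repeated example reviews are invented placeholders, not items from the scraped dataset.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

reviews = ["gari bohat achi hai", "engine bilkul kharab hai"] * 50  # placeholder Roman Urdu
labels = ["positive", "negative"] * 50

X = CountVectorizer().fit_transform(reviews)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0, stratify=labels)

clf = MultinomialNB().fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```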

Long-Term Inertial Navigation Aided by Dynamics of Flow Field Features

Oct 13, 2017
Zhuoyuan Song, Kamran Mohseni

A current-aided inertial navigation framework is proposed for small autonomous underwater vehicles in long-duration operations (> 1 hour), where neither frequent surfacing nor consistent bottom-tracking is available. We instantiate this concept through mid-depth, underwater navigation. This strategy mitigates the dead-reckoning uncertainty of a traditional inertial navigation system by comparing the estimate of local, ambient flow velocity with preloaded ocean current maps. The proposed navigation system is implemented through a marginalized particle filter in which the vehicle's states are sequentially tracked along with sensor bias and local turbulence that is not resolved by general flow prediction. The performance of the proposed approach is first analyzed through Monte Carlo simulations in two artificial background flow fields, resembling real-world ocean circulation patterns, superposed with smaller-scale, turbulent components with a Kolmogorov energy spectrum. The current-aided navigation scheme significantly improves the dead-reckoning performance of the vehicle even when unresolved, small-scale flow perturbations are present. For a 6-hour navigation with an automotive-grade inertial navigation system, the current-aided navigation scheme results in positioning estimates with under 3% uncertainty per distance traveled (UDT) in a turbulent, double-gyre flow field, and under 7.3% UDT in a turbulent, meandering jet flow field. Further evaluation with field test data and actual ocean simulation analysis demonstrates consistent performance for a 6-hour mission, a positioning result with under 25% UDT for a 24-hour navigation when direct heading measurements are provided, and a terminal positioning estimate with 16% UDT at the cost of increased uncertainty at an early stage of the navigation.

* Accepted for publication in IEEE Journal of Oceanic Engineering 
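
A toy, hedged sketch of the central idea: each particle's position hypothesis is weighted by how well the locally measured flow velocity matches a preloaded current map at that position. This 1D example is not the paper's marginalized particle filter; the map, noise levels and thresholds are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def current_map(x):
    """Preloaded ocean-current speed as a function of position (toy model)."""
    return 0.5 * np.sin(0.1 * x)

n_particles, true_x, dead_reckoned_vel, dt = 500, 20.0, 1.0, 1.0
particles = rng.normal(true_x, 5.0, n_particles)      # initial position hypotheses
weights = np.ones(n_particles) / n_particles

for _ in range(10):
    true_x += dead_reckoned_vel * dt
    particles += dead_reckoned_vel * dt + rng.normal(0, 0.3, n_particles)  # propagate
    measured_flow = current_map(true_x) + rng.normal(0, 0.02)              # flow sensor
    # weight particles by agreement between the measured flow and the map
    weights *= np.exp(-0.5 * ((measured_flow - current_map(particles)) / 0.05) ** 2)
    weights /= weights.sum()
    # multinomial resampling when the effective sample size collapses
    if 1.0 / np.sum(weights**2) < n_particles / 2:
        idx = rng.choice(n_particles, n_particles, p=weights)
        particles, weights = particles[idx], np.full(n_particles, 1.0 / n_particles)

print(f"true x = {true_x:.2f}, estimate = {np.dot(weights, particles):.2f}")
```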

Development of a Low-Cost Experimental Quadcopter Testbed Using an Arduino Controller and Software

Aug 20, 2015
Ankyda Ji, Kamran Turkoglu

This paper explains the integration process of an autonomous quadcopter platform and the design of a novel Arduino-based software architecture that enables the execution of advanced control laws on frameworks built from low-cost, off-the-shelf products. Here, quadcopter dynamics are explored through the classical nonlinear equations of motion. Next, the quadcopter is designed, built and assembled using off-the-shelf, low-cost products to carry a camera payload, which is mainly utilized for surveillance missions. System identification of the quadcopter dynamics is accomplished through the use of sweep data and $CIFER^{\circledR}$ to obtain the dynamic model. The unstable, nonlinear quadcopter dynamics are stabilized using a generic control algorithm through the novel Arduino-based software architecture. Experimental results validate the integration and the novel software package running on an Arduino board to control autonomous quadcopter flights.

* (under review in SAGE Transactions of the Institute of Measurement and Control) 
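
The abstract mentions stabilizing the quadcopter with a generic control algorithm running on the Arduino. As a hedged illustration in Python (the flight code itself would be Arduino C/C++), here is a minimal discrete PID loop of the kind commonly used for attitude stabilization; the gains, plant stand-in and setpoint are placeholders, not values from the paper.

```python
kp, ki, kd, dt = 4.0, 0.5, 1.2, 0.01       # illustrative PID gains and time step
setpoint, angle, rate = 0.0, 0.3, 0.0      # desired vs. initial roll angle (rad)
integral, prev_error = 0.0, setpoint - angle

for step in range(2000):                   # 20 s of simulated flight
    error = setpoint - angle
    integral += error * dt
    derivative = (error - prev_error) / dt
    torque = kp * error + ki * integral + kd * derivative   # PID control output
    prev_error = error
    # crude double-integrator stand-in for the roll dynamics
    rate += torque * dt
    angle += rate * dt

print(f"final roll angle: {angle:.4f} rad")
```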

AKM$^2$D: An Adaptive Framework for Online Sensing and Anomaly Quantification

Oct 04, 2019
Hao Yan, Kamran Paynabar, Jianjun Shi

In point-based sensing systems such as coordinate measuring machines (CMM) and laser ultrasonics, where complete sensing is impractical due to high sensing time and cost, adaptive sensing through systematic exploration is vital for online inspection and anomaly quantification. Most existing sequential sampling methodologies focus on reducing the overall fitting error for the entire sampling space. However, in many anomaly quantification applications, the main goal is to accurately estimate sparse anomalous regions at the pixel level. In this paper, we develop a novel framework named Adaptive Kernelized Maximum-Minimum Distance (AKM$^2$D) to speed up the inspection and anomaly detection process through an intelligent sequential sampling scheme integrated with fast estimation and detection. The proposed method balances the sampling efforts between space-filling sampling (exploration) and focused sampling near the anomalous region (exploitation). The proposed methodology is validated by conducting simulations and a case study of anomaly detection in composite sheets using a guided wave test.

* Under review in IISE Transaction 
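
A simplified sketch of the exploration half of such a sequential sampling scheme: each new measurement location is the candidate farthest from all points sampled so far (maximum-minimum distance). AKM$^2$D additionally kernelizes this criterion and biases sampling toward estimated anomalous regions, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
candidates = rng.random((2000, 2))          # dense set of possible sensing points
sampled = [candidates[0]]                   # start from an arbitrary first point

for _ in range(20):
    dists = np.linalg.norm(candidates[:, None, :] - np.array(sampled)[None, :, :], axis=2)
    next_idx = np.argmax(dists.min(axis=1))  # farthest point from the sampled set
    sampled.append(candidates[next_idx])

print(np.array(sampled).round(2))
```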

Large Multistream Data Analytics for Monitoring and Diagnostics in Manufacturing Systems

Dec 26, 2018
Samaneh Ebrahimi, Chitta Ranjan, Kamran Paynabar

The high dimensionality and volume of large-scale multistream data have inhibited significant research progress in developing an integrated monitoring and diagnostics (M&D) approach. This data, also categorized as big data, is becoming common in manufacturing plants. In this paper, we propose an integrated M&D approach for large-scale streaming data. We developed a novel monitoring method named Adaptive Principal Component monitoring (APC), which adaptively chooses the PCs that are most likely to vary due to the change, for early detection. Importantly, we integrate a novel diagnostic approach, Principal Component Signal Recovery (PCSR), to enable a streamlined SPC. This diagnostics approach draws inspiration from Compressed Sensing and uses the Adaptive Lasso for identifying the sparse change in the process. We theoretically motivate our approaches and evaluate the performance of our integrated M&D method through simulations and case studies.
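
A toy, hedged illustration of the two ingredients named above: monitoring principal-component scores of a new data window for a change, then using a Lasso-type sparse regression to recover which original streams shifted. It is not the APC/PCSR algorithm itself (which adaptively selects PCs and uses the Adaptive Lasso); the sizes, the shift and the regularization are arbitrary.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_streams, n_ref, n_new = 50, 500, 100
reference = rng.normal(size=(n_ref, n_streams))       # in-control multistream data
shifted = rng.normal(size=(n_new, n_streams))
shifted[:, [3, 17]] += 5.0                             # sparse mean shift in 2 streams

pca = PCA(n_components=5).fit(reference)
scores_new = pca.transform(shifted)

# Monitoring: a T^2-style statistic on the mean of the new window's PC scores;
# a large value signals that the process has changed
score_var = pca.transform(reference).var(axis=0)
t2 = n_new * np.sum(scores_new.mean(axis=0) ** 2 / score_var)
print("monitoring statistic:", round(float(t2), 1))

# Diagnosis: recover the sparse shift in the original streams with the Lasso
# (identity design matrix, so the coefficients are the estimated stream shifts)
delta_hat = Lasso(alpha=0.01, fit_intercept=False).fit(
    np.eye(n_streams), shifted.mean(axis=0)).coef_
print("streams flagged as shifted:", np.nonzero(np.abs(delta_hat) > 1e-6)[0])
```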


Structured Point Cloud Data Analysis via Regularized Tensor Regression for Process Modeling and Optimization

Jul 30, 2018
Hao Yan, Kamran Paynabar, Massimo Pacella

Advanced 3D metrology technologies such as Coordinate Measuring Machine (CMM) and laser 3D scanners have facilitated the collection of massive point cloud data, beneficial for process monitoring, control and optimization. However, due to their high dimensionality and structure complexity, modeling and analysis of point clouds are still a challenge. In this paper, we utilize multilinear algebra techniques and propose a set of tensor regression approaches to model the variational patterns of point clouds and to link them to process variables. The performance of the proposed methods is evaluated through simulations and a real case study of turning process optimization.

* Technometrics, submitted 

Sequence Graph Transform (SGT): A Feature Extraction Function for Sequence Data Mining (Extended Version)

Apr 30, 2017
Chitta Ranjan, Samaneh Ebrahimi, Kamran Paynabar

The ubiquitous presence of sequence data across fields such as the web, healthcare, bioinformatics, and text mining has made sequence mining a vital research area. However, sequence mining is particularly challenging because of the difficulty of finding (dis)similarity or distance between sequences: a distance measure is not obvious due to their unstructured nature, as arbitrary strings of arbitrary length. Feature representations such as n-grams are often used, but they either compromise on extracting both short- and long-term sequence patterns or have a high computational cost. We propose a new function, Sequence Graph Transform (SGT), that extracts short- and long-term sequence features and embeds them in a finite-dimensional feature space. Importantly, SGT has low computational cost and can extract any amount of short- to long-term patterns without any increase in computation, which is also proved theoretically in this paper. As a result, SGT yields superior results with significantly higher accuracy and lower computation compared to existing methods. We show this via several experiments and SGT's real-world applications to clustering, classification, search and visualization.
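
A simplified sketch of the core idea: for every ordered pair of symbols, aggregate exponentially decaying weights over all occurrences of the pair in the sequence, giving a fixed-size feature matrix regardless of sequence length. The exact normalization in the SGT paper differs; this is an illustration of the principle, not a reference implementation.

```python
import numpy as np

def sgt_like_features(sequence, alphabet, kappa=1.0):
    """Return a |alphabet| x |alphabet| matrix of decayed pair associations."""
    index = {s: i for i, s in enumerate(alphabet)}
    w = np.zeros((len(alphabet), len(alphabet)))
    counts = np.zeros(len(alphabet))
    for i, u in enumerate(sequence):
        counts[index[u]] += 1
        for j in range(i + 1, len(sequence)):
            v = sequence[j]
            w[index[u], index[v]] += np.exp(-kappa * (j - i))  # short gaps weigh more
    # normalize by how often each "preceding" symbol occurs
    return w / np.maximum(counts, 1)[:, None]

features = sgt_like_features("BBACACAABA", alphabet="ABC")
print(features.round(3))
```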


Point-Cloud-Based Aerial Fragmentation Analysis for Application in the Minerals Industry

Jul 26, 2017
Thomas Bamford, Kamran Esmaeili, Angela P. Schoellig

This work investigates the application of Unmanned Aerial Vehicle (UAV) technology for measurement of rock fragmentation without placing scale objects in the scene to determine image scale. Commonly practiced image-based rock fragmentation analysis requires a technician to walk to a rock pile, place a scale object of known size in the area of interest, and capture individual 2D images. Our previous work used UAV technology for the first time to acquire real-time rock fragmentation data and showed comparable quality of results; however, it still required the (potentially dangerous) placement of scale objects, and continued to assume that the rock pile surface is planar and that the scale objects lie on the surface plane. This work improves our UAV-based approach to enable rock fragmentation measurement without placement of scale objects and without the assumption of planarity. This is achieved by first generating a point cloud of the rock pile from 2D images, taking into account intrinsic and extrinsic camera parameters, and then taking 2D images for fragmentation analysis. This work represents an important step towards automating post-blast rock fragmentation analysis. In experiments, a rock pile with known size distribution was photographed by the UAV with and without scale objects. For fragmentation analysis without scale objects, a point cloud of the rock pile was generated and used to compute image scale. Comparison of the rock size distributions shows that this point-cloud-based method produces measurements with better or comparable accuracy (within 10% of the ground truth) to the manual method with scale objects.

* 8 pages, 9 figures 

A real-time analysis of rock fragmentation using UAV technology

Jul 14, 2016
Thomas Bamford, Kamran Esmaeili, Angela P. Schoellig

Accurate measurement of blast-induced rock fragmentation is of great importance for many mining operations. The post-blast rock size distribution can significantly influence the efficiency of all the downstream mining and comminution processes. Image analysis is one of the most common methods used to measure rock fragment size distribution in mines, despite criticism of its limited accuracy in measuring fine particles and other perceived deficiencies. The current practice of collecting rock fragmentation data for image analysis is highly manual and provides data with low temporal and spatial resolution. Using UAVs to collect images of rock fragments can not only improve the quality of the image data but also automate the data collection process. Ultimately, real-time acquisition of high temporal- and spatial-resolution data based on UAV technology will provide a broad range of opportunities for improving blast design without interrupting the production process and for reducing the cost of the human operator. This paper presents the results of a series of laboratory-scale rock fragment measurements using a quadrotor UAV equipped with a camera. The goal of this work is to highlight the benefits of aerial fragmentation analysis in terms of both prediction accuracy and time effort. A pile of rock fragments with different fragment sizes was placed in a lab equipped with a motion capture camera system for precise UAV localization and control. Such an environment presents optimal conditions for UAV flight and thus is well-suited for conducting proof-of-concept experiments before testing them in large-scale field experiments. The pile was photographed by a camera attached to the UAV, and the particle size distribution curves were generated in almost real time. The pile was also manually photographed, and the results of the manual method were compared to the UAV method.

* 12 pages, 12 figures, 6th International Conference on Computer Applications in the Minerals Industries 
