Models, code, and papers for "Waleed A":

Assessment of Multiple-Biomarker Classifiers: fundamental principles and a proposed strategy

Oct 30, 2019
Waleed A. Yousef

The multiple-biomarker classifier problem and its assessment are reviewed against the background of some fundamental principles from the field of statistical pattern recognition, machine learning, or what is now called "data science". A narrow reading of that literature has led many authors to neglect the contribution of the finite training sample to the total uncertainty of performance assessment. Yet the latter is a fundamental indicator of the stability of a classifier; thus its neglect may be contributing to the problematic status of many studies. A three-level strategy is proposed for moving forward in this field. The lowest level is that of construction, where candidate features are selected and the choice of classifier architecture is made. At that point, the effective dimensionality of the classifier is estimated and used to size the next level of analysis, a pilot study on previously unseen cases. The total (training and testing) uncertainty resulting from the pilot study is, in turn, used to size the highest level of analysis, a pivotal study with a target level of uncertainty. Some resources available in the literature for implementing this approach are reviewed. Although the concepts explained in the present article may be fundamental and straightforward for many researchers in the machine learning community, they are subtle for many practitioners, for whom we provided general advice on best practice in \cite{Shi2010MAQCII} and elaborate on here.


Prudence When Assuming Normality: an advice for machine learning practitioners

Aug 14, 2019
Waleed A. Yousef

In a binary classification problem, the feature vector (predictor) is the input to a scoring function that produces a decision value (score), which is compared to a chosen threshold to provide a final class prediction (output). Although the normality assumption on the scoring function is important in many applications, it is sometimes severely violated even under the simple multinormal assumption on the feature vector. This article proves this result mathematically with a counterexample, advising practitioners to avoid blind assumptions of normality. On the other hand, the article provides a set of experiments that illustrate some of the expected, well-behaved results for the Area Under the ROC Curve (AUC) under the multinormal assumption on the feature vector. The message of the article is therefore not to avoid the normality assumption on either the input feature vector or the output scoring function, but that prudence is needed when adopting either of them.
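
For reference, a standard closed form lies behind the well-behaved AUC results mentioned above: when the scores of the two classes are themselves normally distributed (the binormal case), the AUC is given below. This is a textbook identity, not a formula quoted from the paper itself.

    % Binormal AUC: class-1 scores ~ N(\mu_1, \sigma_1^2), class-0 scores ~ N(\mu_0, \sigma_0^2).
    \[
      \mathrm{AUC} \;=\; \Pr(S_1 > S_0)
      \;=\; \Phi\!\left( \frac{\mu_1 - \mu_0}{\sqrt{\sigma_1^2 + \sigma_0^2}} \right),
    \]
    % where \Phi is the standard normal CDF. The article's caution is that the score
    % normality behind this identity can fail even when the feature vector is multinormal.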


Estimating the Standard Error of Cross-Validation-Based Estimators of Classification Rules Performance

Aug 01, 2019
Waleed A. Yousef

First, we analyze the variance of the Cross-Validation (CV)-based estimators used for estimating the performance of classification rules. Second, we propose a novel estimator of this variance using the Influence Function (IF) approach, which was previously used very successfully to estimate the variance of the bootstrap-based estimators. The motivation for this research is that, to the best of our knowledge, the literature lacks a rigorous method for estimating the variance of the CV-based estimators; what is available is a set of ad-hoc procedures that have no mathematical foundation, since they ignore the covariance structure among dependent random variables. The conducted experiments show that the proposed IF method has small RMS error with some bias. Surprisingly, however, the ad-hoc methods still work better than the IF-based method; this is due to the lack of sufficient smoothness compared to the bootstrap estimator. This opens three research directions: (1) a more comprehensive simulation study to clarify when the IF method wins or loses; (2) more mathematical analysis to figure out why the ad-hoc methods work well; and (3) more mathematical treatment to figure out the connection between the appropriate amount of "smoothness" and decreasing the bias of the IF method.
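
As a concrete reference point, the snippet below sketches the kind of ad-hoc standard-error computation criticized above: it treats the K fold errors as independent and ignores their covariance. It illustrates the baseline only, not the proposed IF estimator; the dataset and classifier are arbitrary placeholders.

    # Sketch of the ad-hoc K-fold CV standard error (NOT the proposed IF estimator):
    # it treats fold errors as i.i.d., ignoring the covariance among folds.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    fold_errors = []
    for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
        clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
        fold_errors.append(np.mean(clf.predict(X[te]) != y[te]))
    fold_errors = np.asarray(fold_errors)
    cv_error = fold_errors.mean()                     # CV estimate of the error rate
    adhoc_se = fold_errors.std(ddof=1) / np.sqrt(10)  # ad-hoc SE: assumes independent folds
    print(cv_error, adhoc_se)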


A Leisurely Look at Versions and Variants of the Cross Validation Estimator

Jul 31, 2019
Waleed A. Yousef

Many versions of cross-validation (CV) exist in the literature, and each version has different variants. All are used interchangeably by many practitioners, yet without explanation of the connections or differences among them. This article has three contributions. First, it mathematically formalizes these different versions and variants that estimate the error rate and the Area Under the ROC Curve (AUC) of a classification rule, showing the connections and differences among them. Second, we prove some of their properties and prove that many variants are either redundant or "not smooth". Hence, we suggest abandoning all redundant versions and variants and keeping only the leave-one-out, the $K$-fold, and the repeated $K$-fold. We show that the latter is the only one of the three versions that is "smooth" and hence mathematically resembles an estimator of the mean performance of the classification rule. However, empirically, because of the known phenomenon of "weak correlation", which we explain mathematically and experimentally, it estimates both the conditional and the mean performance with almost the same accuracy. Third, we conclude the article by suggesting two research points that may answer the remaining question of whether a finalist can be chosen among the three estimators: (1) a comparative study, much more comprehensive than those available in the literature (which conclude no overall winner), covering a wide range of distributions, datasets, and classifiers, including complex ones obtained via the recent deep learning approach; and (2) a rigorous method for estimating the variance of the only "smooth" version, repeated $K$-fold CV, in place of the ad-hoc methods available in the literature that ignore the covariance structure among the folds of CV; we sketch the path toward deriving such a method.
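
For readers who want to map the three retained versions onto common tooling, the sketch below runs leave-one-out, $K$-fold, and repeated $K$-fold side by side; the dataset and classifier are placeholders, not the paper's experimental setup.

    # The three CV versions retained above, side by side (illustrative setup only).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import LeaveOneOut, KFold, RepeatedKFold, cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=100, n_features=4, random_state=1)
    clf = GaussianNB()
    for name, cv in [("leave-one-out", LeaveOneOut()),
                     ("K-fold (K=10)", KFold(10, shuffle=True, random_state=1)),
                     ("repeated K-fold (10x10)", RepeatedKFold(n_splits=10, n_repeats=10, random_state=1))]:
        acc = cross_val_score(clf, X, y, cv=cv)      # accuracy per fold/repeat
        print(name, "error rate:", 1 - acc.mean())   # repeated K-fold is the "smooth" version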


AUC: Nonparametric Estimators and Their Smoothness

Jul 30, 2019
Waleed A. Yousef

Nonparametric estimation of a statistic in general, and of the error rate of a classification rule in particular, from just one available dataset through resampling is mathematically well founded in the literature, using several versions of the bootstrap and the influence function. This article first provides a concise review of this literature to establish the theoretical framework that we use to construct, in a single coherent framework, nonparametric estimators of the AUC (a two-sample statistic) rather than the error rate (a one-sample statistic). In addition, the smoothness of some of these estimators is investigated and explained. Our experiments show that the behavior of the designed AUC estimators confirms the findings of the literature for error rate estimators in many aspects, including the weak correlation between the bootstrap-based estimators and the true conditional AUC, and the comparable accuracy of the different versions of the bootstrap estimators in terms of RMS error, with a slight superiority of the .632+ bootstrap estimator.
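
As background for the estimators discussed above, the following sketch computes the basic two-sample nonparametric AUC (the Mann-Whitney statistic) from class scores; the bootstrap and influence-function machinery of the article is not reproduced here, and the sample scores are synthetic.

    # Two-sample nonparametric AUC (Mann-Whitney statistic) from raw scores;
    # ties receive half credit. The article's resampling estimators build on this.
    import numpy as np

    def auc_mann_whitney(scores_pos, scores_neg):
        s1 = np.asarray(scores_pos)[:, None]   # scores of positive (class-1) cases
        s0 = np.asarray(scores_neg)[None, :]   # scores of negative (class-0) cases
        return np.mean((s1 > s0) + 0.5 * (s1 == s0))

    rng = np.random.default_rng(0)
    print(auc_mann_whitney(rng.normal(1.0, 1.0, 50), rng.normal(0.0, 1.0, 50)))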


A Review of Statistical Learning Machines from ATR to DNA Microarrays: design, assessment, and advice for practitioners

Jun 25, 2019
Waleed A. Yousef

Statistical Learning is the process of estimating an unknown probabilistic input-output relationship of a system using a limited number of observations, and a statistical learning machine (SLM) is the machine that has learned such a process. While their roots grow deep in Probability Theory, SLMs are ubiquitous in the modern world. Automatic Target Recognition (ATR) in military applications, Computer Aided Diagnosis (CAD) in medical imaging, DNA microarrays in Genomics, Optical Character Recognition (OCR), Speech Recognition (SR), spam email filtering, and stock market prediction are a few examples of SLM applications: diverse fields, but one theory. The field of Statistical Learning can be decomposed into two basic subfields, Design and Assessment. Three main groups of specialists, namely statisticians, engineers, and computer scientists (ordered ascendingly by programming capability and descendingly by mathematical rigor), work in this field, and each takes its own bite of the elephant. The exaggeratedly rigorous analysis of statisticians sometimes deprives them of considering new ML techniques and methods that do not yet have a "complete" mathematical theory. On the other hand, the immoderate ad-hoc simulations of computer scientists sometimes drive them toward unjustified and immature results. A prudent approach is needed, one flexible enough to utilize simulations and trial and error without sacrificing rigor. If this prudent attitude is necessary for this field, it is necessary in other fields of engineering as well.

* This manuscript was composed in 2006 as part of the author's Ph.D. dissertation 

Nested Cavity Classifier: performance and remedy

Aug 08, 2019
Waleed A. Mustafa, Waleed A. Yousef

The Nested Cavity Classifier (NCC) is a classification rule that partitions the feature space, in parallel coordinates, into convex hulls to build decision regions. It is claimed in some of the literature that this geometric classifier is superior to many others, particularly in higher dimensions. First, we give an example of how NCC can be inefficient and then motivate a remedy by combining NCC with the Linear Discriminant Analysis (LDA) classifier. We coin the term Nested Cavity Discriminant Analysis (NCDA) for the resulting classifier. Second, a simulation study is conducted to compare both NCC and NCDA to two other basic classifiers, Linear and Quadratic Discriminant Analysis (LDA and QDA). NCC alone proves inferior to the others, while NCDA always outperforms NCC and competes with LDA and QDA.

* This manuscript was composed in 2009 as part of research pursued at that time 

VolMap: A Real-time Model for Semantic Segmentation of a LiDAR surrounding view

Jun 12, 2019
Hager Radi, Waleed Ali

This paper introduces VolMap, a real-time approach for the semantic segmentation of a 3D LiDAR surrounding-view system in autonomous vehicles. We designed an optimized deep convolutional neural network that can accurately segment the point cloud produced by a 360\degree{} LiDAR setup, where the input consists of a volumetric bird's-eye view with LiDAR height layers used as input channels. We further investigated the use of a multi-LiDAR setup and its effect on the performance of the semantic segmentation task. Our evaluations are carried out on a large-scale 3D object detection benchmark containing a LiDAR cocoon setup, along with the KITTI dataset, where the per-point segmentation labels are derived from 3D bounding boxes. We show that VolMap achieves an excellent balance between high accuracy and real-time execution on a CPU.
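
To make the input representation concrete, the snippet below sketches one way to rasterize a point cloud into a bird's-eye-view grid whose channels are LiDAR height layers, as described above; the grid extents, resolution, and layer count are illustrative placeholders, not VolMap's actual configuration.

    # Illustrative bird's-eye-view rasterization: occupancy per height layer
    # becomes an input channel (all sizes/extents are placeholders, not VolMap's).
    import numpy as np

    def points_to_bev(points, x_range=(-40, 40), y_range=(-40, 40),
                      z_range=(-2, 4), grid=(400, 400), n_height_layers=6):
        bev = np.zeros((n_height_layers, *grid), dtype=np.float32)
        xs = ((points[:, 0] - x_range[0]) / (x_range[1] - x_range[0]) * grid[0]).astype(int)
        ys = ((points[:, 1] - y_range[0]) / (y_range[1] - y_range[0]) * grid[1]).astype(int)
        zs = ((points[:, 2] - z_range[0]) / (z_range[1] - z_range[0]) * n_height_layers).astype(int)
        keep = (xs >= 0) & (xs < grid[0]) & (ys >= 0) & (ys < grid[1]) & (zs >= 0) & (zs < n_height_layers)
        bev[zs[keep], xs[keep], ys[keep]] = 1.0   # binary occupancy per height layer
        return bev                                # shape: (channels, H, W) for a 2D CNN

    cloud = np.random.uniform([-40, -40, -2], [40, 40, 4], size=(10000, 3))
    print(points_to_bev(cloud).shape)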

* Accepted at Thirty-sixth International Conference on Machine Learning (ICML 2019) Workshop on AI for Autonomous Driving 

Matlab vs. OpenCV: A Comparative Study of Different Machine Learning Algorithms

May 03, 2019
Ahmed A. Elsayed, Waleed A. Yousef

Scientific Computing relies on executing computer algorithms coded in some programming language. Given particular available hardware, algorithm speed is a crucial factor. There are many scientific computing environments used to code such algorithms. Matlab is one of the most successful and widespread scientific computing environments, rich in toolboxes, libraries, and data visualization tools. OpenCV is a C++-based library written primarily for Computer Vision and related areas. This paper presents a comparative study, using 20 different real datasets, of the speed of Matlab and OpenCV for several Machine Learning algorithms. Although Matlab is more convenient for development and data presentation, OpenCV is much faster in execution, with the speed ratio reaching more than 80 in some cases. The best of both worlds can be achieved by exploring with Matlab or a similar environment to select the most successful algorithm, then implementing the selected algorithm in OpenCV or a similar environment to gain the speed factor.

* Work done in 2012 but submitted to arXiv in 2019 

Review: Metaheuristic Search-Based Fuzzy Clustering Algorithms

Jan 21, 2018
Waleed Alomoush, Ayat Alrosan

Fuzzy clustering is a well-known unsupervised learning method used to group similar data elements into clusters according to some similarity measure. However, clustering algorithms suffer from some drawbacks; chief among these weaknesses are the selection of the initial cluster centres and the fact that the appropriate number of clusters is normally unknown. These weaknesses are considered the most challenging tasks in clustering algorithms. This paper presents a comprehensive review of metaheuristic search methods for solving the problems of fuzzy clustering algorithms.
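
For context, the standard fuzzy C-means objective that such metaheuristic searches typically optimize, and whose sensitivity to the initial centres motivates them, is given below; this is the textbook formulation, not a result of the review itself.

    % Standard fuzzy C-means objective and update rules (textbook formulation).
    \[
      J_m(U, C) \;=\; \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^{\,m}\, \lVert x_k - c_i \rVert^2,
      \qquad m > 1,
    \]
    \[
      u_{ik} \;=\; \Bigg[ \sum_{j=1}^{c} \left( \frac{\lVert x_k - c_i \rVert}{\lVert x_k - c_j \rVert} \right)^{\!\frac{2}{m-1}} \Bigg]^{-1},
      \qquad
      c_i \;=\; \frac{\sum_{k=1}^{n} u_{ik}^{\,m}\, x_k}{\sum_{k=1}^{n} u_{ik}^{\,m}}.
    \]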


Automatic Segmentation of Retinal Vasculature

Jul 19, 2017
Renoh Johnson Chalakkal, Waleed Abdulla

Segmentation of retinal vessels from retinal fundus images is the key step in automatic retinal image analysis. In this paper, we propose a new unsupervised automatic method to segment the retinal vessels from retinal fundus images. Contrast enhancement and illumination correction are carried out through a series of image processing steps followed by adaptive histogram equalization and anisotropic diffusion filtering. The image is then converted to grayscale using weighted scaling. The vessel edges are enhanced by boosting the detail curvelet coefficients. Optic disk pixels are removed before applying fuzzy C-means classification to avoid misclassification. Morphological operations and connected component analysis are applied to obtain the segmented retinal vessels. The performance of the proposed method is evaluated on the DRIVE database so that it can be compared with other state-of-the-art supervised and unsupervised methods. The overall segmentation accuracy of the proposed method is 95.18%, which outperforms the other algorithms.
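
The snippet below sketches only two of the preprocessing stages named above (adaptive histogram equalization and a morphological cleanup) using OpenCV; the parameters are illustrative, the input is a synthetic stand-in image, and the diffusion, curvelet, and fuzzy C-means stages are not reproduced.

    # Illustrative preprocessing only: CLAHE and a morphological opening,
    # two of the stages listed above. Parameters are placeholders, not the paper's.
    import cv2
    import numpy as np

    fundus_gray = np.random.randint(0, 256, (584, 565), dtype=np.uint8)  # stand-in for a DRIVE image
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(fundus_gray)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    cleaned = cv2.morphologyEx(enhanced, cv2.MORPH_OPEN, kernel)
    print(cleaned.shape, cleaned.dtype)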

* Published at the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 886-890, 2017 

Analysis of a chaotic spiking neural model: The NDS neuron

Aug 16, 2014
Mohammad Alhawarat, Waleed Nazih, Mohammad Eldesouki

Further analysis and experimentation are carried out in this paper for a chaotic dynamic model, viz. the Nonlinear Dynamic State neuron (NDS). The analysis and experiments are performed to further understand the underlying dynamics of the model and to enhance it. Chaos provides many interesting properties that can be exploited to achieve computational tasks, such as sensitivity to initial conditions, space filling, control, and synchronization. Chaos might play an important role in information processing tasks in the human brain, as suggested by biologists. If artificial neural networks (ANNs) are equipped with chaos, it will enrich the dynamic behaviours of such networks. The NDS model has some limitations, which can be overcome in different ways. In this paper, different approaches are followed to push the boundaries of the NDS model in order to enhance it. One is to study the effects of scaling the parameters of the chaotic equations of the NDS model and the resulting dynamics. Another is to study the method used to discretize the original R\"{o}ssler system on which the NDS model is based. These approaches have revealed some facts about the NDS attractor and suggest why such a model can be stabilized to a large number of unstable periodic orbits (UPOs), which might correspond to memories in phase space.
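
For reference, the continuous R\"{o}ssler system whose discretization underlies the NDS neuron is given below; the specific discretization and parameter scaling studied in the paper are not reproduced here.

    % The continuous R\"{o}ssler system on which the NDS neuron is based.
    \[
      \dot{x} = -y - z, \qquad
      \dot{y} = x + a\,y, \qquad
      \dot{z} = b + z\,(x - c),
    \]
    % with the classical chaotic parameter choice a = 0.2, b = 0.2, c = 5.7.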

* Computer Science & Information Technology, 2013, Vol. 4 No. 3 

Studying a Chaotic Spiking Neural Model

Oct 26, 2013
Mohammad Alhawarat, Waleed Nazih, Mohammad Eldesouki

The dynamics of a chaotic spiking neuron model are studied mathematically and experimentally. The Nonlinear Dynamic State neuron (NDS) is analysed to further understand the model and improve it. Chaos has many interesting properties, such as sensitivity to initial conditions, space filling, control, and synchronization. As suggested by biologists, these properties may be exploited and play a vital role in carrying out computational tasks in the human brain. The NDS model has some limitations; in this paper the model is investigated to overcome some of these limitations in order to enhance it. Therefore, the model's parameters are tuned and the resulting dynamics are studied. The discretization method of the model is also considered. Moreover, a mathematical analysis is carried out to reveal the underlying dynamics of the model after tuning its parameters. The results of these methods reveal some facts regarding the NDS attractor and suggest the stabilization of a large number of unstable periodic orbits (UPOs), which might correspond to memories in phase space.

* International Journal of Artificial Intelligence & Applications (IJAIA), Vol. 4, No. 5, September 2013 
* Journal, 13 pages 

Prediction of Missing Semantic Relations in Lexical-Semantic Network using Random Forest Classifier

Nov 12, 2019
Kévin Cousot, Mehdi Mirzapour, Waleed Ragheb

This study focuses on the prediction of six missing semantic relations (such as is_a and has_part) between two given nodes in RezoJDM, a French lexical-semantic network. The output of this prediction is a set of pairs in which the first entries are semantic relations and the second entries are the probabilities that such relations exist. Given the statement of the problem, we choose a random forest (RF) classifier to tackle it. We take for granted the existing semantic relations, gathered and validated by crowdsourcing, as the training/test dataset. We describe how all of the mentioned ideas can be carried out after using the node2vec approach in the feature extraction phase, and we show how this approach can lead to acceptable results.
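
The sketch below illustrates the general shape of the pipeline described above, pairing node embeddings (e.g., from node2vec) with a random forest that outputs relation-existence probabilities; the embeddings and labels here are random placeholders, not RezoJDM data.

    # Illustrative pipeline shape: node-pair embeddings -> random forest ->
    # probability that a semantic relation exists. Data are random placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    emb = rng.normal(size=(1000, 64))                 # stand-in for node2vec embeddings
    pairs = rng.integers(0, 1000, size=(5000, 2))     # (source node, target node) pairs
    X = np.hstack([emb[pairs[:, 0]], emb[pairs[:, 1]]])
    y = rng.integers(0, 2, size=5000)                 # 1 if the relation holds, else 0

    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    proba = rf.predict_proba(X[:5])[:, 1]             # probability of relation existence
    print(proba)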

* CJC PRAXILING 2019, Nov 2019, Montpellier, France 

State Space System Modelling of a Quad Copter UAV

Sep 13, 2019
Zaid Tahir, Waleed Tahir, Saad Ali Liaqat

In this paper, a linear mathematical model for a quad copter unmanned aerial vehicle (UAV) is derived. The three-degrees-of-freedom (3DOF) and six-degrees-of-freedom (6DOF) quad copter state-space models are developed starting from basic Newtonian equations. These state-space models are very important for controlling the quad copter system, which is inherently dynamically unstable.
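
For readers unfamiliar with the target form, the linear state-space representation that such a derivation arrives at is the standard one below; the specific A, B, C, D matrices for the 3DOF and 6DOF models are given in the paper.

    % Standard linear state-space form targeted by the derivation.
    \[
      \dot{\mathbf{x}}(t) = A\,\mathbf{x}(t) + B\,\mathbf{u}(t), \qquad
      \mathbf{y}(t) = C\,\mathbf{x}(t) + D\,\mathbf{u}(t),
    \]
    % where x is the state vector (e.g., attitude angles and rates), u the control
    % inputs, and y the measured outputs.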

* This is the full version of our paper "State Space System Modelling of a Quad Copter UAV" 

State Space System Modeling of a Quad Copter UAV

Aug 20, 2019
Zaid Tahir, Waleed Tahir, Saad Ali Liaqat

In this paper, a linear mathematical model for a quad copter unmanned aerial vehicle (UAV) is derived. The three-degrees-of-freedom (3DOF) and six-degrees-of-freedom (6DOF) quad copter state-space models are developed starting from basic Newtonian equations. These state-space models are very important for controlling the quad copter system, which is inherently dynamically unstable.

* Indian Journal of Science and Technology, 9(25), 10-17485 (2016) 
* This is the full version of the paper 

Improving Distant Supervision with Maxpooled Attention and Sentence-Level Supervision

Oct 30, 2018
Iz Beltagy, Kyle Lo, Waleed Ammar

We propose an effective multitask learning setup for reducing distant supervision noise by leveraging sentence-level supervision. We show how sentence-level supervision can be used to improve the encoding of individual sentences, and to learn which input sentences are more likely to express the relationship between a pair of entities. We also introduce a novel neural architecture for collecting signals from multiple input sentences, which combines the benefits of attention and maxpooling. The proposed method increases AUC by 10% (from 0.261 to 0.284), and outperforms recently published results on the FB-NYT dataset.
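
As a rough illustration only, and not the paper's actual architecture, the sketch below combines an attention-weighted sum with an elementwise max over a bag of sentence encodings, the two pooling signals mentioned above; all module names and dimensions are hypothetical.

    # Hypothetical pooling sketch: attention-weighted sum plus elementwise max
    # over a bag of sentence vectors for one entity pair (NOT the paper's model).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AttnMaxPool(nn.Module):
        def __init__(self, hidden_dim):
            super().__init__()
            self.scorer = nn.Linear(hidden_dim, 1)   # sentence relevance score

        def forward(self, sent_vecs):                # (num_sentences, hidden_dim)
            weights = F.softmax(self.scorer(sent_vecs).squeeze(-1), dim=0)
            attn = (weights.unsqueeze(-1) * sent_vecs).sum(dim=0)  # attention pooling
            maxp, _ = sent_vecs.max(dim=0)                         # max-pooling
            return torch.cat([attn, maxp], dim=-1)                 # combine both signals

    bag = torch.randn(7, 128)                        # 7 sentences for one entity pair
    print(AttnMaxPool(128)(bag).shape)               # torch.Size([256])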


Conditional Random Field Autoencoders for Unsupervised Structured Prediction

Nov 10, 2014
Waleed Ammar, Chris Dyer, Noah A. Smith

We introduce a framework for unsupervised learning of structured predictors with overlapping, global features. Each input's latent representation is predicted conditional on the observable data using a feature-rich conditional random field. Then a reconstruction of the input is (re)generated, conditional on the latent structure, using models for which maximum likelihood estimation has a closed-form. Our autoencoder formulation enables efficient learning without making unrealistic independence assumptions or restricting the kinds of features that can be used. We illustrate insightful connections to traditional autoencoders, posterior regularization and multi-view learning. We show competitive results with instantiations of the model for two canonical NLP tasks: part-of-speech induction and bitext word alignment, and show that training our model can be substantially more efficient than comparable feature-rich baselines.
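
A schematic of the objective implied by the description above is given below: a feature-rich CRF predicts the latent structure y from the input x, and a reconstruction is generated conditional on y. This is a paraphrase of the abstract; the exact parameterization and training details are in the paper.

    % Schematic CRF-autoencoder likelihood implied by the description above:
    % a CRF encoder p_\lambda(y | x) and a reconstruction model p_\theta(\hat{x} | y).
    \[
      \log p(\hat{x} \mid x) \;=\; \log \sum_{y} p_{\lambda}(y \mid x)\, p_{\theta}(\hat{x} \mid y),
    \]
    % maximized with \hat{x} set to the observed input; the reconstruction model is
    % chosen so that its maximum-likelihood estimate has a closed form.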

