Models, code, and papers for "Omar S":
In this paper, the Artificial Bee Colony (ABC) algorithm, which is inspired by the foraging behavior of honey bee swarms, is presented. ABC is a stochastic, population-based evolutionary algorithm for problem solving. The ABC algorithm, one of the most recent swarm intelligence techniques, is proposed to optimize a least squares support vector machine (LSSVM) for predicting daily stock prices. The proposed model is based on the study of historical stock data, technical indicators, and the optimization of the LSSVM with the ABC algorithm. ABC selects the best combination of free parameters for the LSSVM to avoid over-fitting and local minima and to improve prediction accuracy. An LSSVM optimized by particle swarm optimization (PSO), a plain LSSVM, and an ANN are used for comparison with the proposed model, which was tested on twenty datasets representing different sectors of the S&P 500 stock market. The results presented in this paper show that the proposed model has fast convergence speed and also achieves better accuracy than the compared techniques in most cases.
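The core ABC loop can be sketched as follows — a minimal stdlib sketch in which a toy quadratic stands in for the LSSVM validation error, the parameter names (`gamma`, `sigma2`), bounds, and colony settings are illustrative assumptions, and the onlooker phase is folded into the employed-bee phase for brevity:

```python
import random

# Toy stand-in for the LSSVM cross-validation error over the two free
# parameters (regularization gamma, kernel width sigma2); in the paper this
# would be the validation error of an LSSVM trained with those values.
def objective(x):
    gamma, sigma2 = x
    return (gamma - 3.0) ** 2 + (sigma2 - 0.5) ** 2

def abc_minimize(obj, bounds, n_food=10, n_iter=200, limit=10, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    rand_source = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    foods = [rand_source() for _ in range(n_food)]
    fits = [obj(f) for f in foods]
    trials = [0] * n_food
    best_i = min(range(n_food), key=fits.__getitem__)
    best_x, best_f = foods[best_i][:], fits[best_i]
    for _ in range(n_iter):
        # Employed/onlooker bees (merged here for brevity): perturb one
        # dimension toward a random partner, keep the move only if it improves.
        for i in range(n_food):
            k = rng.randrange(n_food)
            j = rng.randrange(dim)
            cand = foods[i][:]
            cand[j] += rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
            cand[j] = min(max(cand[j], bounds[j][0]), bounds[j][1])
            f = obj(cand)
            if f < fits[i]:
                foods[i], fits[i], trials[i] = cand, f, 0
                if f < best_f:
                    best_x, best_f = cand[:], f
            else:
                trials[i] += 1
        # Scout bees: abandon sources that stopped improving.
        for i in range(n_food):
            if trials[i] > limit:
                foods[i], trials[i] = rand_source(), 0
                fits[i] = obj(foods[i])
    return best_x, best_f

best_x, best_f = abc_minimize(objective, bounds=[(0.01, 10.0), (0.01, 10.0)])
```

Swapping `objective` for a real LSSVM cross-validation routine yields the hyper-parameter search described above.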
An important aspect of an improved cardiac functional analysis is the accurate segmentation of the left ventricle (LV). A novel approach for fully-automated segmentation of the LV endocardium and epicardium contours is presented. It is mainly based on the natural physical characteristics of the LV shape structure. Both sides of the LV boundaries exhibit natural elliptical curvatures with details on various scales, i.e. exhibiting fractal-like characteristics. The fractional Brownian motion (fBm), which is a non-stationary stochastic process, integrates well with the stochastic nature of ultrasound echoes. It has the advantage of representing a wide range of non-stationary signals and can quantify statistical local self-similarity throughout the time-sequence ultrasound images. The locally characterized boundaries of the fBm-segmented LV were further iteratively refined using global information by means of second-order moments. The method is benchmarked using synthetic 3D+time echocardiographic sequences for normal and different ischemic cardiomyopathy cases, and the results are compared with state-of-the-art LV segmentation methods. Furthermore, the framework was validated against real data from canine cases with expert-defined segmentations and demonstrated improved accuracy. The fBm-based segmentation algorithm is fully automatic and has the potential to be used clinically together with 3D echocardiography for improved cardiovascular disease diagnosis.
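The fBm self-similarity the segmentation relies on is quantified by the Hurst exponent H, via the scaling law E[(B(t+τ) − B(t))²] ∝ τ^(2H). A minimal stdlib sketch, using ordinary Brownian motion (the H = 0.5 special case of fBm) as a synthetic test signal; the lag set and sample size are illustrative assumptions:

```python
import math
import random

def hurst_exponent(signal, lags):
    """Estimate the Hurst exponent H from the fBm scaling law
    E[(B(t+lag) - B(t))^2] ~ lag^(2H), via a log-log least-squares fit."""
    xs, ys = [], []
    for lag in lags:
        diffs = [signal[i + lag] - signal[i] for i in range(len(signal) - lag)]
        var = sum(d * d for d in diffs) / len(diffs)
        xs.append(math.log(lag))
        ys.append(math.log(var))
    n = len(lags)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return slope / 2.0  # slope of the log-log line is 2H

# Ordinary Brownian motion (cumulative sum of Gaussian steps) has H = 0.5.
rng = random.Random(1)
walk, s = [], 0.0
for _ in range(20000):
    s += rng.gauss(0.0, 1.0)
    walk.append(s)
H = hurst_exponent(walk, [1, 2, 4, 8, 16])
```

Applied locally over image intensity profiles, the same scaling fit yields the fBm feature maps driving the boundary characterization.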
Effective ultrasound tissue characterization is usually hindered by complex tissue structures. The interlacing of speckle patterns complicates the correct estimation of backscatter distribution parameters. Nakagami parametric imaging based on localized shape-parameter mapping can model different backscattering conditions. However, the performance of the constructed Nakagami image depends on the sensitivity of the estimation method to the backscattered statistics and on the scale of analysis. Using a fixed focal region of interest in estimating the Nakagami parametric image would increase the estimation variance. In this work, localized Nakagami parameters are estimated adaptively by means of maximum likelihood estimation on a multiscale basis. The varying-size kernel integrates the goodness-of-fit of the backscattering distribution parameters at multiple scales for more stable parameter estimation. Results show improved quantitative visualization of changes in tissue specular reflections, suggesting a potential approach for improving tumor localization in low-contrast ultrasound images.
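The paper estimates the localized Nakagami shape parameter by maximum likelihood; as a closed-form illustration of what that parameter measures, the classical moment-based estimator m = E[R²]² / Var(R²) (the inverse normalized variance of intensity) can be sketched in a few lines. The Rayleigh test envelope (for which m = 1) and the sample size are illustrative assumptions, not the paper's method:

```python
import math
import random

def nakagami_shape(envelope):
    """Moment-based estimate of the Nakagami shape parameter m:
    m = E[R^2]^2 / Var(R^2), the inverse normalized variance of intensity."""
    r2 = [r * r for r in envelope]
    mean = sum(r2) / len(r2)
    var = sum((v - mean) ** 2 for v in r2) / len(r2)
    return mean * mean / var

# Rayleigh envelope (m = 1): magnitude of a complex Gaussian scatter sum.
rng = random.Random(0)
rayleigh = [math.hypot(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(20000)]
m_hat = nakagami_shape(rayleigh)
```

In a parametric image, this estimate would be computed inside a sliding kernel, with the multiscale scheme above adapting the kernel size per location.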
Meningioma brain tumour discrimination is challenging, as many histological patterns are mixed between the different subtypes. In clinical practice, dominant patterns are investigated for signs of specific meningioma pathology; however, simple observation can result in inter- and intra-observer variation due to the complexity of the histopathological patterns. Also, employing a computerised feature extraction approach applied at a single resolution scale might not suffice in accurately delineating the mixture of histopathological patterns. In this work we propose a novel multiresolution feature extraction approach for characterising the textural properties of the different pathological patterns (i.e. mainly cell nuclei shape, orientation and spatial arrangement within the cytoplasm). The pattern textural properties are characterised at various scales and orientations for an improved separability between the different extracted features. The Gabor filter energy output of each magnitude response was combined with four other fixed-resolution texture signatures (two model-based and two statistical-based), with and without cell nuclei segmentation. The highest classification accuracy of 95% was reported when combining the Gabor filter energy and the meningioma subimage fractal signature as a feature vector, without performing any prior cell nuclei segmentation. This indicates that characterising the cell-nuclei self-similarity properties via Gabor filters can assist in achieving an improved meningioma subtype classification, which can help overcome variations in reported diagnosis.
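A Gabor magnitude-energy feature of the kind combined above can be sketched with a complex 2-D Gabor kernel; the kernel size, σ, frequency, and the synthetic striped texture below are illustrative assumptions, not the paper's settings:

```python
import math

def gabor_energy(image, freq, theta, sigma, ksize=9):
    """Energy (sum of squared response magnitudes) of a complex 2-D Gabor
    filter at one frequency/orientation; fixed odd kernel size for brevity."""
    half = ksize // 2
    # Complex Gabor kernel: Gaussian envelope times oriented complex carrier.
    kre, kim = [], []
    for dy in range(-half, half + 1):
        rre, rim = [], []
        for dx in range(-half, half + 1):
            u = dx * math.cos(theta) + dy * math.sin(theta)
            g = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
            rre.append(g * math.cos(2 * math.pi * freq * u))
            rim.append(g * math.sin(2 * math.pi * freq * u))
        kre.append(rre)
        kim.append(rim)
    h, w = len(image), len(image[0])
    energy = 0.0
    for y in range(half, h - half):       # valid region only
        for x in range(half, w - half):
            re = im = 0.0
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    v = image[y + dy][x + dx]
                    re += v * kre[dy + half][dx + half]
                    im += v * kim[dy + half][dx + half]
            energy += re * re + im * im
    return energy

# Vertical stripes of period 4: strongest response at theta = 0
# (frequency along x), near-zero response at theta = pi / 2.
img = [[math.cos(2 * math.pi * x / 4) for x in range(24)] for _ in range(24)]
e0 = gabor_energy(img, 0.25, 0.0, 2.0)
e90 = gabor_energy(img, 0.25, math.pi / 2, 2.0)
```

Collecting such energies over a bank of scales and orientations gives the multiresolution texture signature described above.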
This paper compares four feature extraction approaches for texture segmentation: Gabor filters (GF), Gaussian Markov random fields (GMRF), the run-length matrix (RLM) and the grey-level co-occurrence matrix (GLCM). It was shown that the GF performed best in terms of segmentation quality, while the GLCM localised the texture boundaries better than the other methods.
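Of the four methods, the GLCM is the simplest to sketch: count co-occurring grey-level pairs at one displacement, normalize, then derive a texture feature such as Haralick contrast. The two-level striped test image is an illustrative assumption:

```python
def glcm(image, dx, dy, levels):
    """Normalized grey-level co-occurrence matrix for displacement (dx, dy)."""
    h, w = len(image), len(image[0])
    mat = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                mat[image[y][x]][image[ny][nx]] += 1
    total = sum(map(sum, mat)) or 1
    return [[c / total for c in row] for row in mat]

def glcm_contrast(p):
    """Haralick contrast: sum over (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))

# Vertical stripes: grey level changes at every horizontal step, never
# vertically, so contrast is high for (1, 0) and zero for (0, 1).
img = [[x % 2 for x in range(8)] for _ in range(8)]
c_h = glcm_contrast(glcm(img, 1, 0, 2))
c_v = glcm_contrast(glcm(img, 0, 1, 2))
```

The displacement-dependence shown here is why GLCM features localize texture boundaries well: the matrix changes sharply as the window crosses a boundary.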
With the heterogeneous nature of tissue texture, using a single-resolution approach for optimum classification might not suffice. In contrast, a multiresolution wavelet packet analysis can decompose the input signal into a set of frequency subbands, giving the opportunity to characterise the texture at the appropriate frequency channel. An adaptive best-bases algorithm for optimal bases selection for meningioma histopathological images is proposed, applying the fractal dimension (FD) as the bases selection criterion in a tree-structured manner. Thereby, only the most significant subband, the one that best identifies texture discontinuities, is chosen for further decomposition, and its fractal signature represents the extracted feature vector for classification. The best-basis selection using the FD outperformed the energy-based selection approaches, achieving an overall classification accuracy of 91.25%, as compared to 83.44% and 73.75% for the co-occurrence matrix and energy texture signatures, respectively.
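The fractal dimension used as the selection criterion is typically estimated by box counting: cover the set with boxes of shrinking size s and fit the slope of log N(s) against log (1/s). A minimal stdlib sketch over a 2-D point set follows; the paper applies the FD to wavelet-packet subbands, so the point sets and box sizes here are only illustrative:

```python
import math

def box_count_dimension(points, sizes):
    """Box-counting fractal dimension of a 2-D point set via a log-log
    least-squares fit of the box count N(s) against box size s."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(int(x // s), int(y // s)) for x, y in points}
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(sizes)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den  # slope of the fit is the fractal dimension

# Sanity checks: a straight line has dimension ~1, a filled square ~2.
line = [(i, i) for i in range(256)]
square = [(x, y) for x in range(64) for y in range(64)]
d_line = box_count_dimension(line, [2, 4, 8, 16])
d_square = box_count_dimension(square, [2, 4, 8, 16])
```

In the best-bases tree, the subband whose FD signals the strongest texture discontinuities is the one selected for further decomposition.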
Tissue texture is known to exhibit a heterogeneous or non-stationary nature; therefore, using a single-resolution approach for optimum classification might not suffice. A clinical decision support system that exploits the subband textural fractal characteristics for best-bases selection in meningioma brain histopathological image classification is proposed. Each subband is analysed using its fractal dimension instead of energy, which has the advantage of being less sensitive to image intensity and abrupt changes in tissue texture. The most significant subband, the one that best identifies texture discontinuities, is chosen for further decomposition, and its fractal characteristics represent the optimal feature vector for classification. The performance was tested using support vector machine (SVM), Bayesian and k-nearest neighbour (kNN) classifiers, and a leave-one-patient-out method was employed for validation. Our method outperformed the classical energy-based selection approaches, achieving for the SVM, Bayesian and kNN classifiers an overall classification accuracy of 94.12%, 92.50% and 79.70%, as compared to 86.31%, 83.19% and 51.63% for the co-occurrence matrix, and 76.01%, 73.50% and 50.69% for the energy texture signatures, respectively. These results indicate potential usefulness as a decision support system that could complement radiologists' diagnostic capability to discriminate higher-order statistical textural information, which would otherwise be difficult to perceive via ordinary human vision.
Providing an improved technique that can assist pathologists in correctly classifying meningioma tumours with high accuracy is our main objective. The proposed technique, which is based on an optimum texture measure combination, inspects the separability of the RGB colour channels and selects the channel that best segments the cell nuclei of the histopathological images. The morphological gradient was applied to extract the region of interest for each subtype and to eliminate possible noise (e.g. cracks) which might occur during biopsy preparation. Meningioma texture features are extracted by four different texture measures (two model-based and two statistical-based), corresponding features are fused together in different combinations after excluding highly correlated features, and a Bayesian classifier is used for meningioma subtype discrimination. The combined Gaussian Markov random field and run-length matrix texture measures outperformed all other combinations in quantitatively characterising the meningioma tissue, achieving an overall classification accuracy of 92.50%, improving on the 83.75% best accuracy achieved when the texture measures are used individually.
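The morphological gradient step can be sketched as grey-level dilation minus erosion over a k × k structuring element, which fires only where intensity changes inside the window; the flat bright-square test image is an illustrative assumption:

```python
def morphological_gradient(image, k=3):
    """Morphological gradient: grey-level dilation minus erosion with a
    k x k structuring element; highlights region boundaries."""
    h, w, half = len(image), len(image[0]), k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - half), min(h, y + half + 1))
                    for nx in range(max(0, x - half), min(w, x + half + 1))]
            out[y][x] = max(vals) - min(vals)  # dilation - erosion
    return out

# A flat bright square on a dark background: the gradient is zero inside
# the square and in the background, and fires only along the edge.
img = [[1 if 3 <= x <= 6 and 3 <= y <= 6 else 0 for x in range(10)]
       for y in range(10)]
grad = morphological_gradient(img)
```

Thresholding this gradient on the selected colour channel yields the region-of-interest mask described above.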
Diabetes Mellitus is a major health problem all over the world. Many classification algorithms have been applied for its diagnosis and treatment. In this paper, a hybrid algorithm of Modified Particle Swarm Optimization and Least Squares Support Vector Machine (LS-SVM) is proposed for the classification of type II DM patients. The LS-SVM algorithm is used for classification by finding the optimal hyperplane that separates the classes. Since LS-SVM is very sensitive to changes in its parameter values, the Modified PSO algorithm is used as an optimization technique for the LS-SVM parameters. This guarantees the robustness of the hybrid algorithm by searching for the optimal values of the LS-SVM parameters. The proposed algorithm is implemented and evaluated using the Pima Indians Diabetes dataset from the UCI repository of machine learning databases. It is also compared with different classifier algorithms that were applied to the same dataset. The experimental results showed the superiority of the proposed algorithm, which achieved an average classification accuracy of 97.833%.
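The standard PSO velocity/position update at the heart of such a hybrid can be sketched as follows; a toy quadratic stands in for the LS-SVM cross-validation error, and the parameter names, bounds, and inertia/acceleration constants are illustrative assumptions (the paper's modifications to PSO are not reproduced here):

```python
import random

# Toy stand-in for the LS-SVM cross-validation error over its two
# hyper-parameters (illustrative names: regularization C, kernel width sigma).
def error(x):
    C, sigma = x
    return (C - 10.0) ** 2 + (sigma - 1.0) ** 2

def pso_minimize(obj, bounds, n=15, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]          # personal best positions
    pbest_f = [obj(p) for p in pos]
    g = min(range(n), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Canonical update: inertia + cognitive + social terms.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = obj(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, best_err = pso_minimize(error, bounds=[(0.1, 100.0), (0.01, 10.0)])
```

Replacing `error` with the LS-SVM validation error gives the parameter search this hybrid relies on.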
Network intrusion detection systems (NIDSs) identify malicious activities by monitoring the behavior of networks. Due to the currently high volume of network traffic, in addition to the increased number of attacks and their dynamic properties, NIDSs face the challenge of improving their classification performance. Bio-inspired optimization algorithms (BIOs) are used to automatically extract the discrimination rules of normal and abnormal behavior, improving the classification accuracy and detection ability of the NIDS. A quantum vaccined immune clonal algorithm with the estimation of distribution algorithm (QVICA-with EDA) is proposed in this paper to build a new NIDS. The proposed algorithm is used as the classification algorithm of the new NIDS, where it is trained and tested using the KDD dataset. The new NIDS is also compared with another detection system based on particle swarm optimization (PSO). Results show that the proposed algorithm achieves high intrusion classification accuracy, with a highest obtained accuracy of 94.8%.
Stock market prediction is the act of trying to determine the future value of a company's stock or other financial instrument traded on a financial exchange.
Currently, the most common motion representation for action recognition is optical flow. Optical flow is based on particle tracking, which adheres to a Lagrangian perspective on dynamics. In contrast to the Lagrangian perspective, the Eulerian model of dynamics does not track, but describes local changes. For video, an Eulerian phase-based motion representation, using complex steerable filters, has recently been successfully employed for motion magnification and video frame interpolation. Inspired by these previous works, here we propose learning Eulerian motion representations in a deep architecture for action recognition. We learn filters in the complex domain in an end-to-end manner. We design these complex filters to resemble complex Gabor filters, typically employed for phase-information extraction. We propose a phase-information extraction module, based on these complex filters, that can be used in any network architecture for extracting Eulerian representations. We experimentally analyze the added value of Eulerian motion representations, as extracted by our proposed phase extraction module, and compare with existing motion representations based on optical flow, on the UCF101 dataset.
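The phase-based Eulerian idea can be illustrated in 1-D: the phase of a complex Gabor response shifts linearly with sub-sample motion, so the phase difference between two frames recovers displacement without any tracking. The frequency, window width, and test signals below are illustrative assumptions:

```python
import cmath
import math

def gabor_response(signal, center, freq, sigma):
    """Complex 1-D Gabor response at one position: the signal weighted by a
    Gaussian window and a complex carrier exp(-i * 2*pi*freq * (t - center))."""
    acc = 0j
    for t, v in enumerate(signal):
        d = t - center
        w = math.exp(-(d * d) / (2 * sigma * sigma))
        acc += v * w * cmath.exp(-2j * math.pi * freq * d)
    return acc

freq, sigma, n = 0.1, 8.0, 64
frame0 = [math.cos(2 * math.pi * freq * t) for t in range(n)]
shift = 0.5  # sub-sample motion between the two frames
frame1 = [math.cos(2 * math.pi * freq * (t - shift)) for t in range(n)]

p0 = cmath.phase(gabor_response(frame0, n // 2, freq, sigma))
p1 = cmath.phase(gabor_response(frame1, n // 2, freq, sigma))
# Eulerian motion: phase difference divided by the angular frequency
# recovers the local displacement, with no particle tracking involved.
est_shift = (p0 - p1) / (2 * math.pi * freq)
```

The learned complex filters in the proposed module play the role of this fixed Gabor carrier, with the phase extracted per spatial location and filter channel.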
We present an algorithm to estimate depth in dynamic video scenes. We propose to learn and infer depth in videos from appearance, motion, occlusion boundaries, and the geometric context of the scene. Using our method, depth can be estimated from unconstrained videos with no requirement of camera pose estimation, and with significant background/foreground motions. We start by decomposing a video into spatio-temporal regions. For each spatio-temporal region, we learn the relationship of depth to visual appearance, motion, and geometric classes. Then we infer the depth information of new scenes using a piecewise planar parametrization estimated within a Markov random field (MRF) framework, by combining the learned appearance-to-depth mappings and occlusion-boundary-guided smoothness constraints. Subsequently, we perform temporal smoothing to obtain temporally consistent depth maps. To evaluate our depth estimation algorithm, we provide a novel dataset with ground truth depth for outdoor video scenes. We present a thorough evaluation of our algorithm on our new dataset and the publicly available Make3d static image dataset.
Assessing tumor tissue heterogeneity via ultrasound has recently been suggested for predicting early response to treatment. The ultrasound backscattering characteristics can assist in better understanding the tumor texture by highlighting the local concentration and spatial arrangement of tissue scatterers. However, it is challenging to quantify the various tissue heterogeneities, ranging from fine to coarse, of the echo envelope peaks in tumor texture. Local parametric fractal features extracted via maximum likelihood estimation from five well-known statistical model families are evaluated for the purpose of ultrasound tissue characterization. The fractal dimension (a self-similarity measure) was used to characterize the spatial distribution of scatterers, while lacunarity (a sparsity measure) was applied to determine scatterer number density. Performance was assessed based on 608 cross-sectional clinical ultrasound RF images of liver tumors (230 and 378 demonstrating respondent and non-respondent cases, respectively). Cross-validation via leave-one-tumor-out and different k-fold methodologies using a Bayesian classifier was employed for validation. The fractal properties of the backscattered echoes based on the Nakagami model (Nkg) and its extended four-parameter Nakagami-generalized inverse Gaussian (NIG) distribution achieved the best results, with nearly similar performance, for characterizing liver tumor tissue. Accuracy, sensitivity and specificity for Nkg/NIG were 85.6%/86.3%, 94.0%/96.0%, and 73.0%/71.0%, respectively. Other statistical models, such as the Rician, Rayleigh, and K-distribution, were found not to be as effective in characterizing the subtle changes in tissue texture as an indication of response to treatment. Employing the most relevant and practical statistical model could inform the design of an early and effective clinical therapy.
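The lacunarity (sparsity) measure can be sketched with the gliding-box algorithm, Λ(r) = E[M²]/E[M]², where M is the mass inside an r × r box slid over the image; the two synthetic scatterer maps below are illustrative assumptions:

```python
def lacunarity(grid, r):
    """Gliding-box lacunarity Lambda(r) = E[M^2] / E[M]^2 for r x r boxes
    slid over a binary grid; higher values mean gappier, more clustered
    arrangements at that scale."""
    h, w = len(grid), len(grid[0])
    masses = []
    for y in range(h - r + 1):
        for x in range(w - r + 1):
            masses.append(sum(grid[y + dy][x + dx]
                              for dy in range(r) for dx in range(r)))
    m1 = sum(masses) / len(masses)
    m2 = sum(m * m for m in masses) / len(masses)
    return m2 / (m1 * m1)

# Two scatterer maps with different spatial arrangements: an evenly spread
# checkerboard versus a single dense cluster.
uniform = [[1 if (x + y) % 2 == 0 else 0 for x in range(16)] for y in range(16)]
clustered = [[1 if x < 8 and y < 8 else 0 for x in range(16)] for y in range(16)]
lac_u = lacunarity(uniform, 4)
lac_c = lacunarity(clustered, 4)
```

A uniform scatterer field gives Λ ≈ 1 (every box holds the same mass), while clustering inflates Λ, which is the sparsity contrast the feature exploits.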
Breast cancer is the most common cancer and the leading cause of cancer death among women worldwide. Detection of breast cancer while it is still small and confined to the breast provides the best chance of effective treatment. Computer Aided Detection (CAD) systems that detect cancer from mammograms will help in reducing the human errors that lead to missing breast carcinoma. The literature is rich in scientific papers on CAD design methods, yet with no complete system architecture to deploy those methods. On the other hand, commercial CADs are developed and deployed only on vendors' mammography machines, with no public access. This paper presents a complete CAD; it is complete since it combines, on the one hand, the rigor of algorithm design and assessment (method) and, on the other hand, the implementation and deployment of a system architecture for public accessibility (system). (1) We develop a novel algorithm for image enhancement so that mammograms acquired from any digital mammography machine look qualitatively of the same clarity to radiologists' inspection, and are quantitatively standardized for the detection algorithms. (2) We develop novel algorithms for mass and microcalcification detection with accuracy superior to both literature results and the majority of approved commercial systems. (3) We design, implement, and deploy a system architecture that is computationally effective to allow for deploying these algorithms to the cloud for public access.
This report describes eighteen projects that explored how commercial cloud computing services can be utilized for scientific computation at national laboratories. These demonstrations ranged from deploying proprietary software in a cloud environment to leveraging established cloud-based analytics workflows for processing scientific datasets. By and large, the projects were successful and collectively they suggest that cloud computing can be a valuable computational resource for scientific computation at national laboratories.
Getting deep convolutional neural networks to perform well requires a large amount of training data. When the available labelled data is small, it is often beneficial to use transfer learning to leverage a related larger dataset (source) in order to improve the performance on the small dataset (target). Among the transfer learning approaches, domain adaptation methods assume that distributions between the two domains are shifted and attempt to realign them. In this paper, we consider the domain adaptation problem from the perspective of dimensionality reduction and propose a generic framework based on graph embedding. Instead of solving the generalised eigenvalue problem, we formulate the graph-preserving criterion as a loss in the neural network and learn a domain-invariant feature transformation in an end-to-end fashion. We show that the proposed approach leads to a powerful Domain Adaptation framework; a simple LDA-inspired instantiation of the framework leads to state-of-the-art performance on two of the most widely used Domain Adaptation benchmarks, Office31 and MNIST to USPS datasets.
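The graph-preserving criterion used as a loss can be sketched as the quadratic form Σᵢⱼ Wᵢⱼ‖zᵢ − zⱼ‖² over an affinity graph W; here with an LDA-style same-class affinity and hand-picked 1-D embeddings as illustrative assumptions (in the paper this term is minimized end-to-end through the network rather than evaluated on fixed embeddings):

```python
def graph_loss(Z, W):
    """Graph-preserving criterion sum_ij W_ij * ||z_i - z_j||^2: the
    quadratic form behind graph embedding. Minimizing it pulls together
    samples connected in the affinity graph W."""
    n = len(Z)
    return sum(W[i][j] * sum((a - b) ** 2 for a, b in zip(Z[i], Z[j]))
               for i in range(n) for j in range(n))

# LDA-style intrinsic affinity: connect samples that share a class label.
labels = [0, 0, 1, 1]
W = [[1.0 if labels[i] == labels[j] and i != j else 0.0 for j in range(4)]
     for i in range(4)]

tight = [[0.0], [0.1], [5.0], [5.1]]   # classes well clustered
loose = [[0.0], [5.0], [0.1], [5.1]]   # classes intermingled
loss_tight = graph_loss(tight, W)
loss_loose = graph_loss(loose, W)
```

Using this sum as a network loss, instead of solving the generalised eigenvalue problem, is what lets the transformation be learned end-to-end; a companion penalty graph (class-separating affinities) would supply the term to be maximized.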
This paper explores sequential modeling of polyphonic music with deep neural networks. While recent breakthroughs have focussed on network architecture, we demonstrate that the representation of the sequence can make an equally significant contribution to the performance of the model as measured by validation set loss. By extracting salient features inherent to the dataset, the model can either be conditioned on these features or trained to predict said features as extra components of the sequences being modeled. We show that training a neural network to predict a seemingly more complex sequence, with extra features included in the series being modeled, can improve overall model performance significantly. We first introduce TonicNet, a GRU-based model trained to initially predict the chord at a given time-step before then predicting the notes of each voice at that time-step, in contrast with the typical approach of predicting only the notes. We then evaluate TonicNet on the canonical JSB Chorales dataset and obtain state-of-the-art results.
For more than half a century, moments have attracted a lot of interest in the pattern recognition community. The moments of a distribution (an object) provide several of its characteristics, such as center of gravity, orientation, disparity, and volume. Moments can be used to define characteristics invariant to some of the transformations an object can undergo, commonly called moment invariants. This work provides a simple and systematic formalism to compute geometric moment invariants in n-dimensional space.
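The building blocks of such invariants are raw and central geometric moments; a minimal 2-D sketch showing the translation invariance of central moments follows (the n-dimensional formalism generalizes the same sums, and the rectangle point set is an illustrative assumption):

```python
def raw_moment(points, p, q):
    """Raw geometric moment m_pq = sum over the object of x^p * y^q
    (unit mass per point)."""
    return sum(x ** p * y ** q for x, y in points)

def central_moment(points, p, q):
    """Central moment mu_pq, computed about the center of gravity
    (m10/m00, m01/m00), hence invariant to translation."""
    m00 = raw_moment(points, 0, 0)
    xc = raw_moment(points, 1, 0) / m00
    yc = raw_moment(points, 0, 1) / m00
    return sum((x - xc) ** p * (y - yc) ** q for x, y in points)

# A small rectangle and a translated copy: the raw moments differ,
# but the central moment mu_20 is unchanged.
shape = [(0, 0), (2, 0), (2, 1), (0, 1)]
shifted = [(x + 5, y - 3) for x, y in shape]
mu20 = central_moment(shape, 2, 0)
mu20_shifted = central_moment(shifted, 2, 0)
```

Normalizing such central moments by powers of mu_00, and combining them, yields the scale and rotation invariants the formalism systematizes.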
There are concerns that neural language models may preserve some of the stereotypes of the underlying societies that generate the large corpora needed to train these models. For example, gender bias is a significant problem when generating text, and its unintended memorization could impact the user experience of many applications (e.g., the smart-compose feature in Gmail). In this paper, we introduce a novel architecture that decouples the representation learning of a neural model from its memory management role. This architecture allows us to update a memory module with an equal ratio across gender types, addressing biased correlations directly in the latent space. We experimentally show that our approach can mitigate the gender bias amplification in the automatic generation of news articles while providing similar perplexity values when extending the Sequence2Sequence architecture.