Research papers and code for "Ahmed Mohammed":
Arabic word segmentation is essential for a variety of NLP applications such as machine translation and information retrieval. Segmentation entails breaking words into their constituent stems, affixes, and clitics. In this paper, we compare two approaches for segmenting four major Arabic dialects using only several thousand training examples for each dialect. The two approaches pose the problem either as a ranking problem, where an SVM ranker picks the best segmentation, or as a sequence labeling problem, where a bi-LSTM RNN coupled with a CRF determines where best to segment words. We achieve solid segmentation results for all dialects using rather limited training data. We also show that employing Modern Standard Arabic data for domain adaptation and assuming context independence improve overall results.
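As a rough illustration of the sequence-labeling approach, the sketch below builds a character-level bi-LSTM with a CRF output layer in PyTorch. It is a minimal sketch, not the paper's model: it assumes the third-party pytorch-crf package, a simple two-tag boundary/non-boundary scheme, and placeholder layer sizes.

```python
# Minimal character-level BiLSTM-CRF segmenter sketch (illustrative only).
# Assumes PyTorch and `pytorch-crf` (pip install pytorch-crf); hyperparameters
# are placeholders, not the paper's settings.
import torch
import torch.nn as nn
from torchcrf import CRF

class BiLSTMCRFSegmenter(nn.Module):
    def __init__(self, vocab_size, num_tags=2, emb_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.proj = nn.Linear(2 * hidden, num_tags)  # per-character tag scores
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, chars, tags, mask):
        emissions = self.proj(self.lstm(self.embed(chars))[0])
        return -self.crf(emissions, tags, mask=mask)  # negative log-likelihood

    def predict(self, chars, mask):
        emissions = self.proj(self.lstm(self.embed(chars))[0])
        return self.crf.decode(emissions, mask=mask)  # best tag sequences
```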

Unmanned Aerial Vehicles (UAVs) have recently grown rapidly to facilitate a wide range of innovative applications that can fundamentally change the way cyber-physical systems (CPSs) are designed. CPSs are a modern generation of systems with synergic cooperation between computational and physical potentials that can interact with humans through several new mechanisms. The main advantages of using UAVs in CPS applications are their exceptional features, including mobility, dynamism, effortless deployment, adaptive altitude, agility, adjustability, and effective appraisal of real-world functions anytime and anywhere. Furthermore, from the technology perspective, UAVs are predicted to be a vital element of the development of advanced CPSs. Therefore, in this survey, we aim to pinpoint the most fundamental and important design challenges of multi-UAV systems for CPS applications. We highlight key and versatile aspects that span the coverage and tracking of targets and infrastructure objects, energy-efficient navigation, and image analysis using machine learning for fine-grained CPS applications. Key prototypes and testbeds are also investigated to show how these practical technologies can facilitate CPS applications. We present state-of-the-art algorithms that address the design challenges with both quantitative and qualitative methods, and we map these challenges to important CPS applications to draw insightful conclusions on the challenges of each application. Finally, we summarize potential new directions and ideas that could shape future research in these areas.

Deeper convolutional neural networks provide more capacity to approximate complex mapping functions. However, increasing network depth makes training harder and increases model complexity. This paper introduces a new high-capacity nonlinear computational layer for deep convolutional neural network architectures. The layer performs a set of comprehensive convolution operations that mimics the overall function of the human visual system (HVS) by focusing on learning structural information in its input. The core of its computations is evaluating the components of the structural similarity (SSIM) metric in a setting that allows the kernels to learn to match structural information. The proposed SSIMLayer is inherently nonlinear and hence does not require subsequent nonlinear transformations. Experiments conducted on the CIFAR-10 benchmark demonstrate that the SSIMLayer provides better convergence than the traditional convolutional layer, bypasses the need for nonlinear transformations, and shows more robustness against noise perturbations and adversarial attacks.
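For intuition, here is a hedged sketch of the underlying computation: the local SSIM components (luminance and contrast-structure) evaluated with sliding-window averages, which are uniform convolutions. This illustrates the idea behind the SSIMLayer rather than the paper's implementation; the window size and stability constants follow common SSIM defaults.

```python
# Local SSIM components via convolution-style ops in PyTorch (illustrative).
import torch
import torch.nn.functional as F

def ssim_components(x, y, window=7, C1=0.01**2, C2=0.03**2):
    pad = window // 2
    # Local means via average pooling (a uniform convolution kernel).
    mu_x = F.avg_pool2d(x, window, 1, pad)
    mu_y = F.avg_pool2d(y, window, 1, pad)
    # Local (co)variances from second moments.
    var_x = F.avg_pool2d(x * x, window, 1, pad) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, window, 1, pad) - mu_y ** 2
    cov_xy = F.avg_pool2d(x * y, window, 1, pad) - mu_x * mu_y
    luminance = (2 * mu_x * mu_y + C1) / (mu_x ** 2 + mu_y ** 2 + C1)
    contrast_structure = (2 * cov_xy + C2) / (var_x + var_y + C2)
    return luminance * contrast_structure  # per-pixel SSIM map
```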

* 11 pages, 10 figures
With the fast growth of the Internet, more and more information is available on the Web. The Semantic Web has many features that cannot be handled by traditional search engines. It extracts metadata for each discovered Web document in RDF or OWL format and computes relations between documents. We propose a hybrid indexing and ranking technique for the Semantic Web that finds relevant documents and computes the similarity among a set of documents. First, it retrieves the most related documents from the repository of Semantic Web Documents (SWDs) using a modified version of the ObjectRank technique. Then, it creates a sub-graph of the most related SWDs. Finally, it returns the hubs and authorities of these documents using the HITS algorithm. Our technique increases the quality of the results and decreases the execution time of processing the user's query.
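For concreteness, a minimal NumPy sketch of the HITS power iteration used in the final step; the adjacency matrix encodes the links among the sub-graph's SWDs. This is a textbook illustration, not the paper's code.

```python
# HITS power iteration over a document link graph (illustrative sketch).
import numpy as np

def hits(adjacency, iterations=50):
    """adjacency[i, j] = 1 if document i links to document j."""
    n = adjacency.shape[0]
    hubs = np.ones(n)
    authorities = np.ones(n)
    for _ in range(iterations):
        # Good authorities are pointed to by good hubs.
        authorities = adjacency.T @ hubs
        authorities /= np.linalg.norm(authorities)
        # Good hubs point to good authorities.
        hubs = adjacency @ authorities
        hubs /= np.linalg.norm(hubs)
    return hubs, authorities
```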

* IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 5, No 3, 2011, 118-125
* 8 pages, 7 figures
The model presented in this research predicts ideal chiral crystals and proposes a new direction for designing them. Skyrmions are topologically protected, structurally asymmetric materials with an exotic spin composition. This work presents a deep learning method for skyrmion material design in chiral crystals. We construct a probabilistic classifier and an Artificial Neural Network (ANN) from a binary chirality dataset consisting of chiral and achiral compounds with 'A'- and 'B'-type elements. A quantitative predictor of the likelihood of forming chiral crystals is illustrated. The feasibility of the ANN method is tested comprehensively by comparison with the probabilistic classifier method. Throughout this manuscript, we present the deep learning algorithm design together with materials modelling and simulations. This work paves the way toward sophisticated software tools that serve as indicators for crystal design.
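As a loose illustration only: a small scikit-learn multilayer perceptron trained on a placeholder binary chirality dataset. The feature columns (numeric descriptors of the 'A' and 'B' elements) and the random data are assumptions standing in for the paper's dataset.

```python
# Toy ANN chirality classifier sketch; data and features are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: one row per compound (e.g., descriptors of the A and B sites);
# y: 1 for chiral, 0 for achiral. Random data stands in for the dataset.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 4)), rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                    random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```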

* 8 pages, 5 figures
In this paper we present an efficient computer-aided mass classification method for digitized mammograms using an Artificial Neural Network (ANN), which performs benign-malignant classification on regions of interest (ROIs) that contain a mass. One of the major mammographic characteristics for mass classification is texture, and the ANN exploits this important factor to classify a mass as benign or malignant. The statistical textural features used to characterize the masses are mean, standard deviation, entropy, skewness, kurtosis, and uniformity. The main aim of the method is to increase the effectiveness and efficiency of the classification process in an objective manner and to reduce the number of false positives for malignancy. A three-layer ANN with seven features is proposed for classifying the marked regions as benign or malignant, achieving 90.91% sensitivity and 83.87% specificity, which is very promising compared with the radiologists' sensitivity of 75%.
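A sketch of extracting the named statistical texture features from a mass ROI with NumPy/SciPy. The abstract names six statistics, so only those are shown (the seventh input feature is not specified there), and the 256-bin histogram is an assumption.

```python
# Statistical texture features of a grey-level ROI (illustrative sketch).
import numpy as np
from scipy.stats import skew, kurtosis

def texture_features(roi):
    """roi: 2-D NumPy array of grey levels for the mass region."""
    pixels = roi.ravel().astype(float)
    counts, _ = np.histogram(pixels, bins=256)
    p = counts / counts.sum()          # grey-level probabilities
    p = p[p > 0]
    return {
        "mean": pixels.mean(),
        "std": pixels.std(),
        "entropy": -np.sum(p * np.log2(p)),
        "skewness": skew(pixels),
        "kurtosis": kurtosis(pixels),
        "uniformity": np.sum(p ** 2),  # energy of the histogram
    }
```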

* International Journal of Artificial Intelligence & Applications 1.3 (2010) 1-13
* 13 pages, 10 figures
The diacritization process attempts to restore the short vowels in written Arabic text, which are typically omitted. This process is essential for applications such as text-to-speech (TTS). While diacritization of Modern Standard Arabic (MSA) still claims the lion's share of research attention, work on dialectal Arabic (DA) diacritization is very limited. In this paper, we present our contribution and results on the automatic diacritization of two sub-dialects of Maghrebi Arabic, namely Tunisian and Moroccan, using a character-level deep neural network architecture that stacks two bi-LSTM layers over a CRF output layer. The model achieves word error rates of 2.7% and 3.6% for Moroccan and Tunisian, respectively, and is capable of implicitly identifying the sub-dialect of the input.

* 6 pages, 3 figures
Colorectal polyps are important precursors to colon cancer, the third most common cause of cancer mortality for both men and women. It is a disease where early detection is of crucial importance. Colonoscopy is commonly used for early detection of cancer and precancerous pathology. It is a demanding procedure requiring a significant amount of time from specialized physicians and nurses, and it suffers from significant polyp miss rates by specialists. Automated polyp detection in colonoscopy videos has been demonstrated to be a promising way to handle this problem. However, polyp detection is a challenging problem due to the limited amount of available training data and the large appearance variations of polyps. To handle this problem, we propose a novel deep learning method, Y-Net, that consists of two encoder networks with a decoder network. Our proposed Y-Net method relies on efficient use of pre-trained and un-trained models with novel sum-skip-concatenation operations. Each encoder is trained with an encoder-specific learning rate along with the decoder. Compared with previous methods employing hand-crafted features or 2-D/3-D convolutional neural networks, our approach outperforms state-of-the-art methods for polyp detection with a 7.3% F1-score and 13% recall improvement.
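A schematic PyTorch sketch of the two-encoder/one-decoder idea follows. The real Y-Net pairs a pre-trained encoder with an un-trained one and fuses skips via sum-skip-concatenation; this toy version only approximates that with a plain concatenation of the two encoder outputs.

```python
# Toy two-encoder/one-decoder segmentation net (Y-Net-flavoured sketch).
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.MaxPool2d(2))

class ToyYNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_a = block(3, 16)   # stands in for the pre-trained encoder
        self.enc_b = block(3, 16)   # stands in for the un-trained encoder
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1))    # per-pixel polyp-mask logits

    def forward(self, x):
        fa, fb = self.enc_a(x), self.enc_b(x)
        fused = torch.cat([fa, fb], dim=1)  # simplistic stand-in for fusion
        return self.dec(fused)

print(ToyYNet()(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 1, 64, 64])
```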

* 11 pages, 3 figures
There is a need for new metaphors from immunology to enrich the application areas of Artificial Immune Systems. We propose a metaheuristic called Obesity Heuristic, derived from advances in obesity treatment. The main driving forces of the algorithm are the generation of omega-6 and omega-3 fatty acids. The algorithm follows a Just-In-Time philosophy, starting only when needed. A case study of data cleaning is provided. In experiments conducted on standard tables, Obesity Heuristic outperforms other algorithms, with 100% recall. This is a great improvement over other algorithms.

Most optimization problems in real-life applications are highly nonlinear, and local optimization algorithms do not give the desired performance, so global optimization algorithms should be used to obtain optimal solutions. This paper introduces a new nature-inspired metaheuristic optimization algorithm, called Hoopoe Heuristic (HH). We study HH and validate it against several test functions. Investigations show that it is very promising and could be seen as a refinement of the powerful cuckoo search algorithm. Finally, we discuss the features of Hoopoe Heuristic and propose topics for further study.

There are many new forms of interfacing human users with machines. Here we pursue an electromechanical form of human-machine interaction. The emergence of brain-computer interfaces enables mind-to-movement systems. The story of the Pied Piper inspired us to devise new heuristics for interfacing with the human motor system using brain waves, combining a head helmet with a LumbarMotionMonitor. For the simulation we use Java GridGain. The brain responses of classified subjects during training indicate that the Probe can be the most reliable stimulus for distinguishing between knowledgeable and non-knowledgeable subjects.

This paper presents a novel off-line signature recognition method based on multi-scale Fourier descriptors and the wavelet transform. The main steps of constructing a signature recognition system are discussed, and experiments on real data sets show that the average error rate can reach 1%. Finally, we compare eight distance measures between feature vectors with respect to recognition performance. Keywords: signature recognition; Fourier descriptor; wavelet transform; personal verification.
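A minimal sketch of computing Fourier descriptors from a signature contour with NumPy: the boundary is treated as a complex sequence, and the normalizations shown give translation, scale, and (via magnitudes) rotation invariance. The multi-scale and wavelet stages of the pipeline are omitted; the coefficient count is an assumption.

```python
# Fourier-descriptor feature vector for a closed contour (illustrative).
import numpy as np

def fourier_descriptor(contour_xy, n_coeffs=32):
    """contour_xy: (N, 2) array of ordered boundary points."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]
    spectrum = np.fft.fft(z - z.mean())  # remove centroid: translation invariant
    mags = np.abs(spectrum)              # magnitudes: rotation invariant
    mags = mags / mags[1]                # normalise by first harmonic: scale invariant
    return mags[1:n_coeffs + 1]
```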

* IJCSIS, Vol. 7 No. 3, March 2010
* IEEE Publication format, ISSN 1947 5500, http://sites.google.com/site/ijcsis/
Point cloud data from 3D LiDAR sensors are one of the most crucial sensor modalities for versatile safety-critical applications such as self-driving vehicles. Since annotating point cloud data is an expensive and time-consuming process, the use of simulated environments and 3D LiDAR sensors for this task has recently gained popularity. With simulated sensors and environments, obtaining annotated synthetic point cloud data becomes much easier. However, the generated synthetic point cloud data still lack the artefacts that usually exist in data from real 3D LiDAR sensors. As a result, models trained on these data for perception tasks degrade when tested on real point cloud data, due to the domain shift between simulated and real environments. Thus, in this work, we propose a domain adaptation framework for bridging this gap between synthetic and real point cloud data. Our proposed framework is based on the deep cycle-consistent generative adversarial network (CycleGAN) architecture. We evaluate the performance of our proposed framework on the task of vehicle detection from bird's-eye-view (BEV) point cloud images derived from real 3D LiDAR sensors. The framework shows competitive results, with an improvement of more than 7% in average precision score over other baseline approaches when tested on real BEV point cloud images.
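A hedged sketch of the cycle-consistency term at the heart of CycleGAN-style adaptation: translating a BEV image to the other domain and back should reproduce the input. G and F denote the synthetic-to-real and real-to-synthetic generators (their architectures and the adversarial terms are omitted), and the weight is a conventional choice, not necessarily the paper's value.

```python
# Cycle-consistency loss for CycleGAN-style domain adaptation (sketch).
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G, F, synth_batch, real_batch, lam=10.0):
    # synth -> real -> synth should recover the synthetic BEV image...
    loss_synth = l1(F(G(synth_batch)), synth_batch)
    # ...and real -> synth -> real should recover the real one.
    loss_real = l1(G(F(real_batch)), real_batch)
    return lam * (loss_synth + loss_real)
```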

* Under review for IEEE SMC 2019
The performance of classification algorithms on massive, highly imbalanced data streams depends on an efficient balancing strategy. Several balancing techniques have previously been applied to batch data to resolve the class imbalance problem. This paper proposes a new incremental data balancing framework that can work with massive imbalanced data streams. We use the Racing Algorithm as an automated way to select among data balancing techniques, and we apply the Random Forest classification algorithm, which can deal with massive data streams. We investigate the suitability of the Racing Algorithm and Random Forest in the proposed framework. Applying the new technique within the proposed framework to the European Credit Card dataset provides better results than batch mode, and the proposed framework is more scalable for handling massive online data streams.
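A simplified stand-in for the selection step, using scikit-learn and imbalanced-learn: several balancing techniques are "raced" on a validation split and the Random Forest is trained on the winner. The candidate samplers here are assumptions, and the real Racing Algorithm prunes poor candidates incrementally rather than evaluating each one fully.

```python
# Racing-style selection of a balancing technique (simplified sketch).
from imblearn.over_sampling import SMOTE, RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def race_balancers(X_tr, y_tr, X_val, y_val):
    candidates = [SMOTE(), RandomOverSampler(), RandomUnderSampler()]
    best, best_f1 = None, -1.0
    for sampler in candidates:
        Xb, yb = sampler.fit_resample(X_tr, y_tr)   # rebalance the training set
        clf = RandomForestClassifier(n_estimators=100).fit(Xb, yb)
        f1 = f1_score(y_val, clf.predict(X_val))    # score on untouched data
        if f1 > best_f1:
            best, best_f1 = sampler, f1
    return best, best_f1
```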

* Paper submitted to IJCNN, under review
Humans spend approximately a third of their lives sleeping, which makes monitoring sleep an integral part of well-being. In this paper, a 34-layer deep residual ConvNet architecture for end-to-end sleep staging is proposed. The network takes a raw single-channel electroencephalogram (Fpz-Cz) signal as input and yields hypnogram annotations for each 30 s segment as output. Experiments are carried out for two different scoring standards (5- and 6-stage classification) on the expanded PhysioNet Sleep-EDF dataset, which contains multi-source data from hospital and household polysomnography setups. The performance of the proposed network is compared with that of state-of-the-art algorithms in patient-independent validation tasks. The experimental results demonstrate the superiority of the proposed network over the best existing method, providing a relative improvement in epoch-wise average accuracy of 6.8% and 6.3% on the household data and multi-source data, respectively. Code is made publicly available on GitHub.
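A sketch of the kind of 1-D residual block a 34-layer ResNet over raw EEG would stack; the kernel size and channel width are placeholders, not the paper's configuration. At Sleep-EDF's 100 Hz sampling rate, one 30 s epoch is 3000 samples.

```python
# 1-D residual block with an identity shortcut (illustrative sketch).
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels, kernel=7):
        super().__init__()
        pad = kernel // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=pad),
            nn.BatchNorm1d(channels))
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(self.body(x) + x)  # residual connection

# One 30 s epoch at 100 Hz -> (batch, channels, 3000 samples).
x = torch.randn(2, 32, 3000)
print(ResBlock1d(32)(x).shape)  # torch.Size([2, 32, 3000])
```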

* 5 pages, 3 Figures, Appendix, IEEE BHI 2019
Currently, the world is witnessing a mounting avalanche of data due to the increasing number of mobile network subscribers, Internet websites, and online services. This trend continues to develop quickly and diversely in the form of big data. Big data analytics can process large amounts of raw data and extract useful, smaller-sized information, which can be used by different parties to make reliable decisions. In this paper, we conduct a survey on the role that big data analytics can play in the design of data communication networks. Integrating the latest advances that employ big data analytics with the networks' control/traffic layers might be the best way to build robust data communication networks with refined performance and intelligent features. First, the survey starts with an introduction to basic big data concepts, frameworks, and characteristics. Second, we illustrate the main network design cycle employing big data analytics; this cycle represents the umbrella concept that unifies the surveyed topics. Third, there is a detailed review of the current academic and industrial efforts toward network design using big data analytics. Fourth, we identify the challenges confronting the utilization of big data analytics in network design. Finally, we highlight several future research directions. To the best of our knowledge, this is the first survey that addresses the use of big data analytics techniques for the design of a broad range of networks.

* 23 pages, 4 figures, 2 tables, Journal paper accepted for publication at Elsevier Computer Networks Journal
With the advent of a digital health revolution, vast amounts of clinical data are being generated, stored, and processed on a daily basis. This has made the storage and retrieval of large volumes of healthcare data, especially high-resolution medical images, particularly challenging. Effective image compression for medical images thus plays a vital role in today's healthcare information systems, particularly in teleradiology. In this work, an X-ray image compression method based on a convolutional recurrent neural network (RNN-Conv) is presented. The proposed architecture can provide variable compression rates during deployment, while each network needs to be trained only once for a specific dimension of X-ray images. The model uses a multi-level pooling scheme that learns contextualized features for effective compression. We perform our image compression experiments on the National Institutes of Health (NIH) ChestX-ray8 dataset and compare the performance of the proposed architecture with a state-of-the-art RNN-based technique and JPEG 2000. The experimental results show improved compression performance achieved by the proposed method in terms of the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) metrics. To the best of our knowledge, this is the first reported evaluation of a deep convolutional RNN for medical image compression.
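For reference, the two reported quality metrics can be computed with scikit-image as below; this is an evaluation sketch on a placeholder image pair, not the compression model itself.

```python
# SSIM and PSNR between an original image and its reconstruction (sketch).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

original = np.random.rand(256, 256)                       # placeholder X-ray
reconstructed = original + 0.01 * np.random.randn(256, 256)  # mock decompression

psnr = peak_signal_noise_ratio(original, reconstructed, data_range=1.0)
ssim = structural_similarity(original, reconstructed, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB  SSIM: {ssim:.4f}")
```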

* 4 pages, 2 figures, IEEE BHI 2019