Models, code, and papers for "Asifullah Khan":

Wind Speed Prediction using Deep Ensemble Learning with a Jet-like Architecture

Feb 28, 2020
Aqsa Saeed Qureshi, Asifullah Khan

Accurate and reliable prediction of wind speed is a challenging task because it depends on the meteorological features of the surrounding region. In this work, a novel Deep Ensemble Learning with a Jet-like Architecture (DEL-Jet) approach is proposed and evaluated on the wind speed prediction problem. Since wind speed data is a time series, two Convolutional Neural Networks (CNNs) and a deep Auto-Encoder (AE) are used to extract feature spaces from the input data, and Non-linear Principal Component Analysis (NLPCA) is then employed to reduce the dimensionality of the extracted feature space. Finally, the reduced feature space, along with the original feature space, is used to train a meta-regressor that forecasts the final wind speed. To show the effectiveness of the work, the performance of the proposed DEL-Jet technique is evaluated over ten independent runs and compared against commonly used regressors.
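
A minimal sketch of a DEL-Jet-style pipeline is shown below; the window length, layer sizes, and the use of kernel PCA as a rough stand-in for NLPCA are illustrative assumptions, not the authors' configuration:

```python
# Two 1D-CNN extractors and a deep auto-encoder produce feature spaces,
# kernel PCA (standing in for NLPCA) reduces them, and a meta-regressor is
# trained on the reduced plus original features. Synthetic data for shape only.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24, 4)).astype("float32")  # 500 windows, 24 steps, 4 meteorological features
y = rng.normal(size=(500,)).astype("float32")        # wind speed target

def cnn_extractor(kernel_size):
    inp = layers.Input(shape=X.shape[1:])
    h = layers.Conv1D(16, kernel_size, activation="relu")(inp)
    h = layers.GlobalAveragePooling1D()(h)
    out = layers.Dense(1)(h)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=3, verbose=0)
    return keras.Model(inp, h)                        # drop the head, keep the learned features

def ae_extractor(code_dim=8):
    flat = X.reshape(len(X), -1)
    inp = layers.Input(shape=(flat.shape[1],))
    code = layers.Dense(code_dim, activation="relu")(layers.Dense(32, activation="relu")(inp))
    recon = layers.Dense(flat.shape[1])(layers.Dense(32, activation="relu")(code))
    ae = keras.Model(inp, recon)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(flat, flat, epochs=3, verbose=0)
    return keras.Model(inp, code), flat

encoder, flat = ae_extractor()
feats = np.hstack([cnn_extractor(3).predict(X, verbose=0),
                   cnn_extractor(5).predict(X, verbose=0),
                   encoder.predict(flat, verbose=0)])
reduced = KernelPCA(n_components=5, kernel="rbf").fit_transform(feats)
meta_X = np.hstack([reduced, flat])                   # reduced + original feature space
meta = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(meta_X, y)
```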

* Pages: 14, Tables: 6, Figures: 3 

Adaptive Transfer Learning in Deep Neural Networks: Wind Power Prediction using Knowledge Transfer from Region to Region and Between Different Task Domains

Oct 30, 2018
Aqsa Saeed Qureshi, Asifullah Khan

Transfer Learning (TL) in Deep Neural Networks is gaining importance because, in most cases, labeling data is costly and time-consuming; TL also provides an effective weight initialization. This paper introduces the idea of Adaptive Transfer Learning in Deep Neural Networks for wind power prediction, which makes the proposed system adaptive with respect to training on a different wind farm. The proposed technique is tested on short-term wind power prediction, where continuously arriving information has to be exploited. Adaptive TL not only provides a good weight initialization, but also helps to utilize the online data that is continuously generated by wind farms. Additionally, the proposed technique is shown to transfer knowledge between different task domains (from wind power to wind speed prediction) and from one region to another. The simulation results show that the proposed technique achieves average values of 0.0637, 0.0986, and 0.0984 for the Mean Absolute Error, Root Mean Squared Error, and Standard Deviation Error, respectively.
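
A hedged sketch of the adaptive-TL idea follows: a network trained on a source wind farm initializes a target-farm model, an early layer is frozen, and training continues as online data arrives. Layer sizes, the choice of frozen layer, and the data shapes are assumptions for illustration:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X_src, y_src = rng.normal(size=(2000, 6)), rng.normal(size=(2000,))  # data-rich source farm
X_tgt, y_tgt = rng.normal(size=(200, 6)), rng.normal(size=(200,))    # data-scarce target farm

def make_net():
    return keras.Sequential([
        keras.Input(shape=(6,)),
        layers.Dense(64, activation="relu", name="shared_repr"),
        layers.Dense(32, activation="relu"),
        layers.Dense(1),
    ])

source = make_net()
source.compile(optimizer="adam", loss="mse")
source.fit(X_src, y_src, epochs=5, verbose=0)

target = make_net()
target.set_weights(source.get_weights())              # TL: weight initialization from the source farm
target.get_layer("shared_repr").trainable = False     # keep the shared representation, adapt the rest
target.compile(optimizer="adam", loss="mse")
target.fit(X_tgt, y_tgt, epochs=5, verbose=0)

# Online adaptation: brief updates on each newly arriving batch.
for X_new, y_new in [(rng.normal(size=(32, 6)), rng.normal(size=(32,)))]:
    target.fit(X_new, y_new, epochs=1, verbose=0)
```

The same pattern applies across task domains: for example, hidden layers trained for wind power prediction could serve as the initialization for a wind speed model.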

* 27 pages, 21 figures, and 11 tables 

A New Channel Boosted Convolutional Neural Network using Transfer Learning

May 20, 2018
Asifullah Khan, Anabia Sohail, Amna Ali

We present a novel architectural enhancement of deep convolutional neural networks (CNNs) called Channel Boosting. The idea of Channel Boosting exploits both the channel dimension of a CNN (learning from multiple channels) and Transfer Learning (TL). TL is utilized at two different stages: channel generation and channel exploitation. A deep CNN is boosted by channels obtained through TL from already trained deep neural networks, in addition to its own original channels. The deep architecture of the CNN then exploits the original and boosted channels downstream to learn discriminative patterns. Churn prediction in telecom is a challenging task due to the high dimensionality and imbalanced nature of the data, and it is therefore used to evaluate the performance of the proposed Channel Boosted CNN (CB-CNN). In the first phase, discriminative and informative features are extracted using a stacked autoencoder; in the second phase, these features are combined with the original features to form channel-boosted images. Finally, a pre-trained CNN is exploited through TL to perform classification. The results are promising and show the ability of the Channel Boosting concept to learn a complex classification problem by discerning even minute differences between churners and non-churners. The proposed work validates the observation, drawn from the evolution of recent CNN architectures, that innovative restructuring can increase the representational capacity of a network.
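
A minimal sketch of the channel-boosting construction, assuming dense features reshaped into small 2D "images" and a tiny CNN in place of a pre-trained one; widths and shapes are illustrative:

```python
# Features from a stacked autoencoder (the auxiliary learner) are combined
# with the original features as a second input channel for a small CNN.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.random(size=(1000, 64)).astype("float32")   # original telecom feature vectors
y = rng.integers(0, 2, size=1000)                   # churner / non-churner labels

# Auxiliary autoencoder generates the boosted channel (code width = input width).
inp = layers.Input(shape=(64,))
code = layers.Dense(64, activation="relu")(layers.Dense(32, activation="relu")(inp))
recon = layers.Dense(64)(layers.Dense(32, activation="relu")(code))
ae = keras.Model(inp, recon)
ae.compile(optimizer="adam", loss="mse")
ae.fit(X, X, epochs=3, verbose=0)
boosted = keras.Model(inp, code).predict(X, verbose=0)

# Stack original and boosted features as two channels of an 8x8 "image".
img = np.stack([X.reshape(-1, 8, 8), boosted.reshape(-1, 8, 8)], axis=-1)

cnn = keras.Sequential([
    keras.Input(shape=(8, 8, 2)),                   # a pre-trained CNN would be fine-tuned here
    layers.Conv2D(16, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(img, y, epochs=3, verbose=0)
```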

* 21 Pages, 5 Figures, 1 Table 

Deep Belief Networks Based Feature Generation and Regression for Predicting Wind Power

Jul 31, 2018
Asifullah Khan, Aneela Zameer, Tauseef Jamal, Ahmad Raza

Wind energy forecasting helps to manage power production and hence reduces energy cost. Deep Neural Networks (DNNs) mimic hierarchical learning in the human brain and thus possess hierarchical, distributed, and multi-task learning capabilities. Based on these characteristics, we report a Deep Belief Network (DBN) based forecast engine for wind power prediction, chosen for its good generalization and unsupervised pre-training attributes. The proposed DBN-WP forecast engine, which exhibits stochastic feature generation capabilities and is composed of multiple Restricted Boltzmann Machines (RBMs), generates suitable features for wind power prediction using atmospheric properties as input. Owing to the unsupervised pre-training of its RBM layers and its generalization capabilities, DBN-WP is able to learn the fluctuations in the meteorological properties and thus to map them effectively to wind power. A regression layer is appended at the end of the deep network to predict short-term wind power. It is experimentally shown that the deep learning and unsupervised pre-training capabilities of the DBN-based model yield comparable, and in some cases better, results than hybrid and complex learning techniques proposed for wind power prediction. The proposed DBN-based prediction system achieves mean RMSE, MAE, and SDE values of 0.124, 0.083, and 0.122, respectively. Statistical analysis of several independent executions of the proposed DBN-WP wind power prediction system demonstrates its stability. The proposed DBN-WP architecture is easy to implement and generalizes well to changes in the location of the wind farm.
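
A hedged, scikit-learn-only sketch of the DBN-WP idea is given below: stacked Bernoulli RBMs provide unsupervised feature generation and a linear regression head predicts short-term wind power. Component counts, learning rates, and the input features are illustrative assumptions:

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))   # atmospheric properties (speed, direction, temperature, pressure, ...)
y = rng.normal(size=(1000,))     # wind power target

dbn_wp = Pipeline([
    ("scale", MinMaxScaler()),   # RBMs expect inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0)),
    ("reg", Ridge(alpha=1.0)),   # regression layer appended at the end
])
dbn_wp.fit(X, y)                 # RBM layers are trained unsupervised, layer by layer
rmse = np.sqrt(np.mean((dbn_wp.predict(X) - y) ** 2))
```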

* Pages: 31, Figures: 11, Tables: 5 

A Survey of the Recent Architectures of Deep Convolutional Neural Networks

Jan 17, 2019
Asifullah Khan, Anabia Sohail, Umme Zahoora, Aqsa Saeed Qureshi

Deep Convolutional Neural Networks (CNNs) are a special type of Neural Network that has shown state-of-the-art results on various competitive benchmarks. The powerful learning ability of deep CNNs is largely achieved through multiple non-linear feature extraction stages that can automatically learn hierarchical representations from the data. The availability of large amounts of data and improvements in hardware processing units have accelerated research in CNNs, and very interesting deep CNN architectures have recently been reported. The recent race to achieve high performance on challenging benchmarks has shown that innovative architectural ideas, as well as parameter optimization, can improve CNN performance on various vision-related tasks. In this regard, different ideas in CNN design have been explored, such as the use of different activation and loss functions, parameter optimization, regularization, and the restructuring of processing units. However, the major improvement in representational capacity has been achieved by restructuring the processing units; in particular, the idea of using a block, rather than a layer, as the structural unit is gaining substantial appreciation. This survey therefore focuses on the intrinsic taxonomy present in recently reported CNN architectures and classifies the recent innovations into seven categories, based on spatial exploitation, depth, multi-path, width, feature-map exploitation, channel boosting, and attention. Additionally, it covers an elementary understanding of CNN components and sheds light on the current challenges and applications of CNNs.
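
As a small illustration of the "block as a structural unit" idea highlighted in the survey, a residual-style block (in the spirit of ResNet-like designs; filter counts and kernel sizes are arbitrary) can be sketched as follows:

```python
from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """A block, rather than a single layer, acts as the repeated unit."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([y, shortcut])            # skip connection around the whole block
    return layers.Activation("relu")(y)

inputs = keras.Input(shape=(32, 32, 64))
x = residual_block(inputs)
x = residual_block(x)                          # the network is built by stacking blocks
model = keras.Model(inputs, x)
```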

* Number of Pages: 60, Number of Figures: 11, Number of Tables: 1 

Deep Object Detection based Mitosis Analysis in Breast Cancer Histopathological Images

Mar 17, 2020
Anabia Sohail, Muhammad Ahsan Mukhtar, Asifullah Khan, Muhammad Mohsin Zafar, Aneela Zameer, Saranjam Khan

Empirical evaluation of breast tissue biopsies for mitotic nuclei detection is considered an important prognostic biomarker in tumor grading and cancer progression. However, automated mitotic nuclei detection poses several challenges because of the unavailability of pixel-level annotations, the varied morphological configurations of mitotic nuclei, their sparse representation, and their close resemblance to non-mitotic nuclei. These challenges undermine the precision of automated detection models and make detection difficult in a single phase. This work proposes an end-to-end detection system for mitotic nuclei identification in breast cancer histopathological images. A deep object-detection-based Mask R-CNN is adapted for mitotic nuclei detection: it first selects candidate mitotic regions with maximum recall, and in the second phase these candidate regions are refined by a multi-object loss function to improve precision. The proposed detection model shows improved discrimination ability (F-score of 0.86) for mitotic nuclei with significant precision (0.86) compared to two-stage detection models (F-score of 0.701) on the TUPAC16 dataset. These promising results suggest that the deep object-detection-based model can learn the characteristic features of mitotic nuclei from weakly annotated data and can be adapted for the identification of other nuclear bodies in histopathological images.
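
A hedged sketch of adapting an off-the-shelf Mask R-CNN to a two-class (background vs. mitotic nucleus) task, following the standard torchvision fine-tuning recipe; the low box-score threshold mimics a high-recall candidate phase, while the paper's multi-object loss and refinement stage are not reproduced:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

def build_mitosis_detector(num_classes=2, score_thresh=0.05):
    # A low score threshold keeps candidate recall high; a second phase would
    # then refine these candidates to improve precision.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(
        weights="DEFAULT", box_score_thresh=score_thresh
    )
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
    return model

model = build_mitosis_detector()   # fine-tune on mitosis annotations before use
```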

* Tables: 4, Figures: 11, Pages: 21 

Malware Classification using Deep Learning based Feature Extraction and Wrapper based Feature Selection Technique

Nov 26, 2019
Muhammad Furqan Rafique, Muhammad Ali, Aqsa Saeed Qureshi, Asifullah Khan, Anwar Majid Mirza

In behavior analysis of malware, categorization of malicious files is an essential step after malware detection. Numerous static and dynamic techniques have been reported so far for categorizing malware. This work presents a deep learning based malware detection (DLMD) technique based on static methods for classifying different malware families. The proposed DLMD technique uses both the byte and ASM files for feature engineering and thus for classifying malware families. First, features are extracted from byte files using two different types of Deep Convolutional Neural Networks (CNNs). After that, important and discriminative opcode features are selected using a wrapper-based mechanism, with a Support Vector Machine (SVM) as the classifier. The idea is to construct a hybrid feature space by combining the different feature spaces, so that the shortcomings of one feature space may be compensated by another and the chances of missing a malware are reduced. Finally, the hybrid feature space is used to train a Multilayer Perceptron, which classifies samples into the nine different malware families. Experimental results show that the proposed DLMD technique achieves a log-loss of 0.09 over ten independent runs. Moreover, the performance of the proposed DLMD technique is compared against different classifiers, showing its effectiveness in categorizing malware. The relevant code and database can be found at https://github.com/cyberhunters/Malware-Detection-Using-Machine-Learning.
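
A hedged sketch of the hybrid feature space follows: CNN features from byte-level "images", opcode features selected with an SVM-based wrapper (recursive feature elimination here), and an MLP over the combined space. Shapes, the RFE wrapper, and all hyperparameters are illustrative assumptions:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
byte_imgs = rng.random(size=(600, 32, 32, 1)).astype("float32")  # byte files rendered as images
opcodes = rng.random(size=(600, 200))                            # opcode features from ASM files
y = rng.integers(0, 9, size=600)                                 # nine malware families

# CNN feature extractor over the byte images.
inp = keras.Input(shape=(32, 32, 1))
h = layers.Conv2D(16, 3, activation="relu")(inp)
h = layers.GlobalAveragePooling2D()(h)
out = layers.Dense(9, activation="softmax")(h)
cnn = keras.Model(inp, out)
cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
cnn.fit(byte_imgs, y, epochs=3, verbose=0)
byte_feats = keras.Model(inp, h).predict(byte_imgs, verbose=0)

# Wrapper-based opcode selection with a linear SVM as the underlying classifier.
selector = RFE(LinearSVC(max_iter=5000), n_features_to_select=50, step=10).fit(opcodes, y)
opcode_feats = selector.transform(opcodes)

# Hybrid feature space -> multilayer perceptron over the nine families.
hybrid = np.hstack([byte_feats, opcode_feats])
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(hybrid, y)
```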

* 20 pages, 9 figures, 11 tables 

A Recent Survey on the Applications of Genetic Programming in Image Processing

Jan 18, 2019
Asifullah Khan, Aqsa Saeed Qureshi, Noor ul Wahab, Mutawara Hussain, Muhammad Yousaf Hamza

During the last two decades, Genetic Programming (GP) has been widely used to tackle optimization, classification, and automatic feature selection tasks. The widespread use of GP is mainly due to its flexible and comprehensible tree-type structure. Similarly, research is gaining momentum in the field of Image Processing (IP) because of its promising results over a wide range of applications, from medical IP to multispectral imaging. IP is mainly involved in applications such as computer vision, pattern recognition, image compression, storage and transmission, and medical diagnostics. The prevalence of images and the complexity of the associated algorithms gave an impetus to the exploration of GP, which has thus been used in different ways for IP since its inception. Many interesting GP techniques have been developed and employed in the field of IP. To give the research community an extensive view of these techniques, this paper presents the diverse applications of GP in IP and provides useful resources for further research. Also, a comparison of the parameters used in ten different IP applications is summarized in tabular form, and an analysis of the parameters used in IP-related tasks is carried out to save the time needed in future for evaluating the parameters of GP. As more advancement is made in GP methodologies, its success in solving complex tasks, not only in IP but also in other fields, will increase. Additionally, guidelines are provided for applying GP to IP-related tasks, the pros and cons of GP techniques are discussed, and some future directions are set out.

* 29 pages, 12 figures, and 1 table 

Transfer Learning and Meta Classification Based Deep Churn Prediction System for Telecom Industry

Jan 18, 2019
Uzair Ahmed, Asifullah Khan, Saddam Hussain Khan, Abdul Basit, Irfan Ul Haq, Yeon Soo Lee

A churn prediction system guides telecom service providers in reducing revenue loss. However, developing a churn prediction system for the telecom industry is a challenging task, mainly due to the size of the data, its high-dimensional features, and its imbalanced class distribution. In this paper, we focus on a novel solution to these inherent problems of churn prediction, using the concepts of Transfer Learning (TL) and ensemble-based meta-classification. The proposed method, TL-DeepE, is applied in two stages. The first stage employs TL by fine-tuning multiple pre-trained deep Convolutional Neural Networks (CNNs); the telecom datasets, originally in vector form, are converted into 2D images because deep CNNs have high learning capacity on images. In the second stage, the predictions of these deep CNNs are appended to the original feature vector to build the final feature vector for a high-level Genetic Programming and AdaBoost based ensemble classifier. Experiments are thus conducted using various CNNs as base classifiers together with the high-level GP-AdaBoost ensemble classifier, and the reported results are averaged over these configurations. Using 10-fold cross-validation, the performance of the proposed TL-DeepE system is compared with existing techniques on two standard telecommunication datasets, Orange and Cell2cell. In the experiments, the prediction accuracies for the Orange and Cell2cell datasets were 75.4% and 68.2%, with areas under the curve of 0.83 and 0.74, respectively.
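
A simplified sketch of the two-stage idea is shown below: feature vectors are reshaped into 2D images, a small CNN (standing in for the fine-tuned pre-trained CNNs) produces predictions that are appended to the original features, and a plain AdaBoost classifier (standing in for the paper's GP-AdaBoost ensemble) is trained on the result. Shapes and hyperparameters are illustrative assumptions:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X = rng.random(size=(1000, 64)).astype("float32")  # telecom feature vectors
y = rng.integers(0, 2, size=1000)                  # churner / non-churner

img = X.reshape(-1, 8, 8, 1)                       # stage 1: vector form -> 2D "image"

cnn = keras.Sequential([
    keras.Input(shape=(8, 8, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy")
cnn.fit(img, y, epochs=3, verbose=0)

# Stage 2: append the base CNN's predictions to the original feature vector
# and train the high-level ensemble meta-classifier.
meta_X = np.hstack([X, cnn.predict(img, verbose=0)])
meta = AdaBoostClassifier(n_estimators=100).fit(meta_X, y)
```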

* Number of Pages: 9, Number of Figures: 4, Number of Tables: 4 
