Research papers and code for "Alex Pappachen James":
This review provides an overview of the literature on edge detection methods for pattern recognition that are inspired by the understanding of human vision. We note that edge detection is one of the most fundamental processes within low-level vision and provides the basis for higher-level visual intelligence in primates. The recognition of patterns within images relates closely to the spatiotemporal processes of edge formation, and its implementation needs a cross-disciplinary approach spanning neuroscience, computing and pattern recognition. In this review, the edge detectors are grouped as edge features, gradients and sketch models, and some example applications are provided for reference. We note a significant increase in the amount of published research in the last decade that utilizes edge features in a wide range of problems in computer vision and image understanding, with direct implications for pattern recognition with images.

* Int. J. of Applied Pattern Recognition, Vol. 3, 2015
We briefly introduce memory-based approaches to emulating machine intelligence in VLSI hardware, describing their challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practically important and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. These techniques attempt to emulate low-level intelligence tasks and aim at providing scalable solutions to high-level intelligence problems such as sparse coding and contextual processing.

The memristive crossbar aims to implement analog weighted neural networks; however, realistic implementation of such crossbar arrays is not possible due to the limited switching states of memristive devices. In this work, we propose the design of an analog deep neural network with binary weight update through the backpropagation algorithm using binary-state memristive devices. We show that such networks can be successfully used for image processing tasks and have the advantages of lower power consumption and smaller on-chip area in comparison with digital counterparts. The proposed network was benchmarked on MNIST handwritten digit recognition, achieving an accuracy of approximately 90%.
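
As a software-level illustration of the binary weight update idea, the sketch below trains a single-layer classifier whose forward pass uses only +1/-1 weights (two device states), while full-precision shadow weights accumulate the backpropagated gradients. The layer size, learning rate and straight-through update rule are illustrative assumptions, not the paper's circuit-level procedure.

```python
import numpy as np

# Minimal sketch: binarized forward weights, full-precision "shadow" weights
# updated by backpropagation (straight-through estimator). The binarized copy
# is what a binary-state memristive crossbar could store.

rng = np.random.default_rng(0)
n_in, n_out, lr = 784, 10, 0.01
W_real = rng.normal(0, 0.01, (n_in, n_out))    # full-precision shadow weights

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def forward(X):
    W_bin = np.sign(W_real)                     # +1/-1 weights (two device states)
    return softmax(X @ W_bin)

def train_step(X, y_onehot):
    global W_real
    probs = forward(X)
    grad_out = (probs - y_onehot) / len(X)      # softmax cross-entropy gradient
    grad_W = X.T @ grad_out                     # straight-through estimator:
    W_real = np.clip(W_real - lr * grad_W, -1.0, 1.0)   # update shadow weights

# Usage with random stand-in data (replace with MNIST batches):
X = rng.random((32, n_in))
y = np.eye(n_out)[rng.integers(0, n_out, 32)]
for _ in range(10):
    train_step(X, y)
```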

* IEEE NANO 2018
The Probabilistic Neural Network (PNN) is a feed-forward artificial neural network developed for solving classification problems. This paper proposes a hardware implementation of an approximated PNN (APNN) algorithm in which the conventional exponential function of the PNN is replaced with gated threshold logic. The weights of the PNN are approximated using a memristive crossbar architecture. In particular, the proposed algorithm performs normalization of the training weights and quantization into 16 levels, which significantly reduces the complexity of the circuit.
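
The following sketch illustrates the kind of approximation described: a PNN pattern layer whose exponential kernel is replaced by a simple gated threshold decision on the distance, with stored patterns quantized to 16 levels. The particular threshold form and quantizer are assumptions for illustration, not the APNN circuit itself.

```python
import numpy as np

# Approximated-PNN sketch: the kernel exp(-d^2 / (2*sigma^2)) of a conventional
# PNN is replaced by a gated threshold decision on the squared distance, and
# the stored training patterns are quantized to 16 levels.

def quantize16(w, levels=16):
    w = np.clip(np.asarray(w, float), 0.0, 1.0)   # inputs assumed in [0, 1]
    return np.round(w * (levels - 1)) / (levels - 1)

def apnn_classify(x, train_X, train_y, threshold=0.5):
    Xq, xq = quantize16(train_X), quantize16(x)
    d2 = ((Xq - xq) ** 2).sum(axis=1)             # squared distances
    gate = (d2 < threshold).astype(float)         # gated threshold logic
    classes = np.unique(train_y)
    votes = [gate[train_y == c].sum() for c in classes]
    return classes[int(np.argmax(votes))]

# Example with random data in [0, 1]:
rng = np.random.default_rng(0)
train_X = rng.random((100, 8)); train_y = rng.integers(0, 3, 100)
print(apnn_classify(rng.random(8), train_X, train_y))
```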

* IEEE NANO 2018
Mapping neuro-inspired algorithms to the sensor backplanes of on-chip hardware requires shifting signal processing from the digital to the analog domain, demanding memory technologies beyond conventional CMOS binary storage units. Using memristors to build analog data storage is one of the promising approaches amongst emerging non-volatile memory technologies. Recently, a memristive multi-level memory (MLM) cell for storing discrete analog values has been developed, in which the memory system is implemented by combining memristors in a voltage divider configuration. In the given example, a memory cell of three sub-cells, each containing a memristor, was programmed to store ternary bits, achieving 10 and 27 discrete voltage levels overall. However, further use of the proposed memory cell in analog signal processing circuits requires a data encoder to generate the control voltages for programming the memristors to store discrete analog values. In this paper, we present the design and performance analysis of a data encoder that generates write pattern signals for a 10-level memristive memory.
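
To make the encoding task concrete, the sketch below maps a target level (0-9) to ternary states of three memristive sub-cells and models the read-out as a resistive voltage divider. The per-state resistance values, the series divider topology and the level-to-ternary mapping are illustrative assumptions, not the cell or encoder design of the paper.

```python
import numpy as np

# Hedged sketch of the encoding idea: level -> ternary sub-cell states ->
# divider read-out voltage. All component values are assumptions.

R_STATES = {0: 100e3, 1: 10e3, 2: 1e3}   # assumed resistance per ternary state
R_LOAD = 10e3
VDD = 1.8

def level_to_ternary(level, n_cells=3):
    digits = []
    for _ in range(n_cells):
        digits.append(level % 3)
        level //= 3
    return digits[::-1]

def read_voltage(ternary_digits):
    # Sub-cell memristors assumed in series, forming a divider with R_LOAD.
    r_series = sum(R_STATES[d] for d in ternary_digits)
    return VDD * R_LOAD / (R_LOAD + r_series)

for lvl in range(10):
    code = level_to_ternary(lvl)
    print(lvl, code, round(read_voltage(code), 3))
```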

* Analog Integrated Circuits and Signal Processing, 2018
Hierarchical Temporal Memory (HTM) is a neuromorphic algorithm that emulates sparsity, hierarchy and modularity, resembling the working principles of the neocortex. Feature encoding is an important step in creating sparse binary patterns. This sparsity is introduced by the binary weights and the random weight assignment in the initialization stage of the HTM. We propose an alternative deterministic method for the HTM initialization stage, which connects the HTM weights to the input data and preserves the natural sparsity of the input information. Further, we introduce a hardware implementation of the deterministic approach and compare it with the traditional HTM and an existing hardware implementation. We test the proposed approach on a face recognition problem and show that it outperforms the conventional HTM approach.
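
The contrast between the two initialization styles can be sketched as follows for an HTM-like spatial pooler: random binary synapse assignment versus a deterministic assignment derived from the input data itself. The deterministic rule used here (thresholding per-column mean input activity) is an assumed stand-in, not the exact rule of the paper.

```python
import numpy as np

# Random vs. data-driven deterministic binary weight initialization.
# The deterministic variant keeps, for each column, the inputs most active in
# "its" share of the data, so the weights inherit the input sparsity.

rng = np.random.default_rng(1)
n_inputs, n_columns, sparsity = 256, 64, 0.1

def random_init():
    return (rng.random((n_columns, n_inputs)) < sparsity).astype(np.uint8)

def deterministic_init(X):
    # X: (n_samples, n_inputs) binary input patterns.
    W = np.zeros((n_columns, n_inputs), dtype=np.uint8)
    chunks = np.array_split(X, n_columns)
    k = int(sparsity * n_inputs)
    for c, chunk in enumerate(chunks):
        top = np.argsort(chunk.mean(axis=0))[-k:]
        W[c, top] = 1
    return W

X = (rng.random((640, n_inputs)) < 0.2).astype(np.uint8)
W_rand, W_det = random_init(), deterministic_init(X)
```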

* Analog Integrated Circuits and Signal Processing, 2018
The quality assessment of edges in an image is an important topic, as it helps to benchmark the performance of edge detectors and edge-aware filters that are used in a wide range of image processing tasks. The most popular image quality metrics, such as mean squared error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), are commonly used for assessing and justifying the quality of edges. However, they do not address the structural and functional accuracy of edges in images with a wide range of natural variabilities. In this review, we provide an overview of the most relevant performance metrics that can be used to benchmark the quality of edges in images. We identify four major groups of metrics and also provide a critical insight into the evaluation protocols and governing equations.
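
For reference, two of the metrics named above (MSE and PSNR) are reproduced below, applied to a detected edge map versus a ground-truth edge map; both maps are assumed to be arrays with values in [0, 255].

```python
import numpy as np

# Reference implementations of MSE and PSNR for comparing an estimated edge
# map against a ground-truth edge map.

def mse(ref, est):
    ref, est = np.asarray(ref, float), np.asarray(est, float)
    return np.mean((ref - est) ** 2)

def psnr(ref, est, peak=255.0):
    m = mse(ref, est)
    return float('inf') if m == 0 else 10.0 * np.log10(peak ** 2 / m)

# Example with synthetic edge maps:
gt  = np.zeros((64, 64)); gt[:, 32] = 255          # ideal vertical edge
det = gt.copy(); det[10:20, 32] = 0                # detector missed a segment
print(mse(gt, det), psnr(gt, det))
```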

* 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, 2017, pp. 2366-2369
Recognition of human emotions from imaging templates is useful in a wide variety of human-computer interaction and intelligent systems applications. However, the automatic recognition of facial expressions using image template matching techniques suffers from the natural variability of facial features and recording conditions. In spite of the progress achieved in facial emotion recognition in recent years, an effective and computationally simple feature selection and classification technique for emotion recognition is still an open problem. In this paper, we propose an efficient and straightforward facial emotion recognition algorithm to reduce the problem of inter-class pixel mismatch during classification. The proposed method applies pixel normalization to remove intensity offsets, followed by a Min-Max metric in a nearest neighbor classifier that is capable of suppressing feature outliers. The results indicate an improvement of recognition performance from 92.85% to 98.57% for the proposed Min-Max classification method when tested on the JAFFE database. The proposed emotion recognition technique outperforms the existing template matching methods.
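
A minimal sketch of the classification step: pixel normalization to remove intensity offsets, then a 1-nearest-neighbour decision using a Min-Max similarity. The ratio-of-sums form used here is one common definition of a Min-Max measure and is an assumption for illustration, not necessarily the exact metric of the paper.

```python
import numpy as np

# Pixel normalization followed by 1-NN with a Min-Max (ratio of element-wise
# sums) similarity. Large pixel outliers contribute to both sums and are
# therefore partially suppressed.

def normalize(img):
    img = np.asarray(img, float).ravel()
    return (img - img.mean()) / (img.std() + 1e-12)   # remove intensity offset

def min_max_similarity(a, b):
    lo = min(a.min(), b.min())
    a, b = a - lo, b - lo                              # make both non-negative
    return np.minimum(a, b).sum() / (np.maximum(a, b).sum() + 1e-12)

def classify(test_img, gallery_imgs, gallery_labels):
    t = normalize(test_img)
    sims = [min_max_similarity(t, normalize(g)) for g in gallery_imgs]
    return gallery_labels[int(np.argmax(sims))]
```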

* 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, 2017, pp. 752-758
Techniques that combine multiple feature sets to form new features, which are often more robust and contain useful information for further processing, are referred to as feature fusion. The term data fusion is applied to the class of techniques used for combining decisions obtained from multiple feature sets to form global decisions. Feature and data fusion together represent two important classes of techniques that have proved to be of practical importance in a wide range of medical imaging problems.
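
A minimal illustration of the two ideas; concatenation and majority voting are common choices used here only as stand-ins for the many fusion rules covered in the chapter.

```python
import numpy as np

# Feature fusion: feature vectors from two sources are combined (here by
# concatenation) before a single classifier is applied.
# Data (decision) fusion: each feature set is classified separately and the
# individual decisions are combined (here by majority vote).

def feature_fusion(feats_a, feats_b):
    return np.concatenate([feats_a, feats_b])          # one fused feature vector

def decision_fusion(decisions):
    values, counts = np.unique(decisions, return_counts=True)
    return values[np.argmax(counts)]                   # majority vote

# Example: two modalities, three per-modality decisions.
fused = feature_fusion(np.array([0.2, 0.9]), np.array([1.4, 0.3, 0.7]))
label = decision_fusion(np.array([1, 0, 1]))
```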

* Multisensor Data Fusion: From Algorithm and Architecture Design to Applications, CRC Press, 2015. arXiv admin note: substantial text overlap with arXiv:1401.0166
The sparse, hierarchical, and modular processing of natural signals is related to the ability of humans to recognize objects with high accuracy. In this study, we report a sparse feature processing and encoding method, which improved the recognition performance of an automated object recognition system. Randomly distributed localized gradient enhanced features were selected before employing aggregate functions for representation, where we used a modular and hierarchical approach to detect the object features. These object features were combined with a minimum distance classifier, thereby obtaining object recognition system accuracies of 93% using the Amsterdam library of object images (ALOI) database, 92% using the Columbia object image library (COIL)-100 database, and 69% using the PASCAL visual object challenge 2007 database. The object recognition performance was shown to be robust to variations in noise, object scaling, and object shifts. Finally, a comparison with eight existing object recognition methods indicated that our new method improved the recognition accuracy by 10% with ALOI, 8% with the COIL-100 database, and 10% with the PASCAL visual object challenge 2007 database.
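
The classification stage mentioned above can be summarized as a minimum distance (nearest class-mean) classifier; the sketch below shows that step on generic feature vectors, with the sparse gradient-based feature extraction itself left out.

```python
import numpy as np

# Minimum distance classifier: each class is represented by the mean of its
# training feature vectors, and a test vector is assigned to the class with
# the closest mean. Inputs here are generic stand-in feature vectors.

def fit_class_means(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def min_distance_classify(x, class_means):
    dists = {c: np.linalg.norm(x - m) for c, m in class_means.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(0)
X = rng.random((60, 16)); y = rng.integers(0, 3, 60)
means = fit_class_means(X, y)
print(min_distance_classify(rng.random(16), means))
```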

* Pattern Recognition, Available online 31 October 2014
* 13 pages
The proposed feature selection method builds a histogram of the most stable features from random subsets of a training set and ranks the features using classifier-based cross-validation. This approach reduces the instability of features obtained by conventional feature selection methods that occurs with variation in training data and selection criteria. Classification results on four microarray and three image datasets, using three major feature selection criteria and a naive Bayes classifier, show considerable improvement over benchmark results.
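
The stability-histogram idea can be sketched as below: score features on many random subsets of the training data, tally how often each feature is selected, then rank the most stable features by naive-Bayes cross-validation. The per-subset scoring rule (absolute class-mean difference, two-class case) and the scikit-learn components are assumed stand-ins for the paper's selection criteria.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Histogram of features selected across random training subsets, followed by
# classifier-based cross-validation ranking of the most stable features.

def stable_feature_ranking(X, y, k=20, n_subsets=50, subset_frac=0.7, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    counts = np.zeros(d)
    for _ in range(n_subsets):
        idx = rng.choice(n, int(subset_frac * n), replace=False)
        Xs, ys = X[idx], y[idx]
        # Two-class scoring stand-in: absolute difference of class means.
        score = np.abs(Xs[ys == ys.min()].mean(0) - Xs[ys == ys.max()].mean(0))
        counts[np.argsort(score)[-k:]] += 1           # histogram of selections
    stable = np.argsort(counts)[-k:]                  # most frequently selected
    cv = [cross_val_score(GaussianNB(), X[:, [f]], y, cv=5).mean() for f in stable]
    return stable[np.argsort(cv)[::-1]]               # rank by CV accuracy
```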

* Electronics Letters, 47, 8, 490-491, 2011
Exemplars of a face are formed from multiple gallery images of a person and are used in the classification of a test image. We incorporate such exemplars into a biologically inspired face recognition method based on local binary decisions on similarity. As opposed to single-model approaches such as face averages, the exemplar-based approach results in higher recognition accuracies and stability. Using multiple training samples per person, the method shows the following recognition accuracies: 99.0% on AR, 99.5% on FERET, 99.5% on ORL, 99.3% on EYALE, 100.0% on YALE and 100.0% on CALTECH face databases. In addition to face recognition, the method also detects the natural variability in face images, which can find application in automatic tagging of face images.
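
A hedged sketch of local binary decisions on similarity: the test image and each gallery exemplar are split into local blocks, each block pair casts a binary vote (similar / not similar), and the identity with the most votes across its exemplars wins. Block size, the similarity measure and the decision threshold are illustrative assumptions.

```python
import numpy as np

# Exemplar-based recognition with block-wise binary similarity votes.

def blocks(img, size=8):
    h, w = img.shape
    return [img[i:i+size, j:j+size].ravel()
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def local_binary_votes(test, exemplar, threshold=0.9):
    votes = 0
    for a, b in zip(blocks(test), blocks(exemplar)):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        votes += int(np.dot(a, b) / len(a) > threshold)   # binary decision
    return votes

def identify(test, gallery):          # gallery: {person: [exemplar images]}
    scores = {p: sum(local_binary_votes(test, e) for e in ex)
              for p, ex in gallery.items()}
    return max(scores, key=scores.get)
```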

Feature selection is an important problem in high-dimensional data analysis and classification. Conventional feature selection approaches focus on detecting features based on a redundancy criterion using learning and feature searching schemes. In contrast, we present an approach that identifies the features to select based on their discriminatory ability among classes. The area of overlap between the inter-class and intra-class distances resulting from feature-to-feature comparison of an attribute is used as a measure of the discriminatory ability of the feature. A set of nearest attributes in a pattern having the lowest area of overlap, within a degree of tolerance defined by a selection threshold, is selected to represent the best available discriminable features. State-of-the-art recognition results are reported for pattern classification problems using the proposed feature selection scheme with the nearest neighbour classifier. These results are reported on benchmark databases having high-dimensional feature vectors in problems involving images and microarray data.
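
The overlap criterion can be sketched as follows: for each feature, build histograms of intra-class and inter-class absolute differences, measure the area where the two normalized histograms overlap, and keep the features whose overlap falls below a selection threshold. The bin count and threshold are illustrative assumptions.

```python
import numpy as np

# Per-feature overlap area between intra-class and inter-class distance
# distributions; low overlap means high discriminatory ability.

def overlap_area(feature_values, labels, bins=32):
    same, diff = [], []
    n = len(labels)
    for i in range(n):
        for j in range(i + 1, n):
            d = abs(feature_values[i] - feature_values[j])
            (same if labels[i] == labels[j] else diff).append(d)
    lo, hi = 0.0, max(same + diff) + 1e-12
    h_same, _ = np.histogram(same, bins=bins, range=(lo, hi), density=True)
    h_diff, _ = np.histogram(diff, bins=bins, range=(lo, hi), density=True)
    width = (hi - lo) / bins
    return float(np.minimum(h_same, h_diff).sum() * width)   # overlap in [0, 1]

def select_features(X, y, threshold=0.5):
    areas = np.array([overlap_area(X[:, f], y) for f in range(X.shape[1])])
    return np.where(areas <= threshold)[0]
```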

A resistive memory network that has no crossover wiring is proposed to overcome the hardware limitations on size and functional complexity that are associated with conventional analogue neural networks. The proposed memory network is based on simple network cells that are arranged in a hierarchical modular architecture. The cognitive functionality of this network is demonstrated by an example of character recognition. The network is trained by an evolutionary process to completely recognise characters deformed by random noise, rotation, scaling and shifting.
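
The evolutionary training loop can be sketched in software as a simple (1+λ) strategy that mutates a candidate weight set and keeps the best-scoring one. The linear read-out standing in for the resistive cell network, the mutation scale and the fitness definition are assumptions for illustration.

```python
import numpy as np

# (1+lambda) evolutionary training sketch: mutate the current weights, keep the
# candidate with the best recognition fitness on the training patterns.

rng = np.random.default_rng(0)

def fitness(W, X, y):
    pred = np.argmax(X @ W, axis=1)
    return np.mean(pred == y)                          # recognition accuracy

def evolve(X, y, n_classes, generations=200, offspring=8, sigma=0.1):
    W = rng.normal(0, 0.1, (X.shape[1], n_classes))
    best = fitness(W, X, y)
    for _ in range(generations):
        for _ in range(offspring):
            cand = W + rng.normal(0, sigma, W.shape)   # mutation
            f = fitness(cand, X, y)
            if f >= best:
                W, best = cand, f                      # selection
    return W, best

# Example with random stand-in "character" patterns:
X = rng.random((200, 64)); y = rng.integers(0, 10, 200)
W, acc = evolve(X, y, n_classes=10)
```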

* Electronics Letters, 46, 10, 677-678, 2010
The inability of automated edge detection methods inspired by primal sketch models to accurately calculate object edges under the influence of pixel noise is an open problem. Extending the principles of image perception, i.e. the Weber-Fechner law and Shepard's similarity law, we propose a new edge detection method and formulation that uses perceived brightness and neighbourhood similarity calculations to determine robust object edges. The robustness of the detected edges is benchmarked against the Sobel, SIS, Kirsch and Prewitt edge detection methods in an example face recognition problem, showing statistically significant improvements in recognition accuracy and pixel noise tolerance.
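
A rough software sketch of the perceptual ingredients named above: intensities are mapped to perceived brightness with a logarithmic (Weber-Fechner) transform, neighbourhood similarity is computed with an exponential-decay (Shepard-like) function, and low-similarity locations are marked as edges. The exact formulation of the paper is not reproduced; the constants and the similarity rule are assumptions.

```python
import numpy as np

# Log-brightness + neighbourhood-similarity edge sketch over a 4-neighbourhood.

def perceptual_edges(img, k=1.0, alpha=5.0, threshold=0.8):
    I = np.maximum(np.asarray(img, float), 0.0)
    B = k * np.log1p(I)                               # perceived brightness
    sims = []
    for shift in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighbourhood
        N = np.roll(B, shift, axis=(0, 1))
        sims.append(np.exp(-alpha * np.abs(B - N)))   # Shepard-like similarity
    mean_sim = np.mean(sims, axis=0)
    return mean_sim < threshold                       # True where an edge is likely

rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[:, 32:] = 200
img += rng.normal(0, 5, img.shape)                    # pixel noise
edges = perceptual_edges(img)
```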

* IEEE Signal Processing Letters, Vol. 22, No. 9, pp. 1336-1339, 2015
Analysis of time-series data allows us to identify long-term trends and make predictions that can help to improve our lives. With the rapid development of artificial neural networks, the long short-term memory (LSTM) recurrent neural network (RNN) configuration has been found to be capable of dealing with time-series forecasting problems where data points are time-dependent and possess seasonality trends. The gated structure of the LSTM cell and the flexibility of the network topology (one-to-many, many-to-one, etc.) allow systems with multiple input variables to be modelled and several parameters to be controlled, such as the size of the look-back window used to make a prediction and the number of time steps to be predicted. These make the LSTM an attractive tool compared with conventional methods such as autoregression models, the simple average, moving average, naive approach, ARIMA, Holt's linear trend method, the Holt-Winters seasonal method, and others. In this paper, we propose a hardware implementation of an LSTM network architecture for time-series forecasting problems. All simulations were performed using TSMC 0.18um CMOS technology and the HP memristor model.
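
Independently of the hardware mapping, the algorithmic behaviour can be stated compactly: the sketch below builds look-back windows from a 1-D series and runs one window through a single LSTM cell using the standard gate equations (input, forget and output gates plus candidate state). Weights are random placeholders here; training and the memristive circuit realization are outside the scope of this sketch.

```python
import numpy as np

# Look-back window construction plus a standard LSTM cell forward pass.

def make_windows(series, look_back):
    X = np.array([series[i:i + look_back] for i in range(len(series) - look_back)])
    y = np.array(series[look_back:])
    return X, y

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0, 0.1, (4 * n_hidden, n_in + n_hidden))
        self.b = np.zeros(4 * n_hidden)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, o, g = np.split(z, 4)
        i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
        c = f * c + i * g                     # cell state update
        h = o * np.tanh(c)                    # hidden state / output
        return h, c

series = np.sin(np.linspace(0, 20, 200))      # toy seasonal series
X, y = make_windows(series, look_back=8)
cell, h, c = LSTMCell(1, 16), np.zeros(16), np.zeros(16)
for value in X[0]:                            # feed one window, step by step
    h, c = cell.step(np.array([value]), h, c)
```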

* IEEE Asia Pacific Conference on Circuits and Systems, 2018
The automated segmentation of cells in microscopic images is an open research problem that has important implications for studies of developmental and cancer processes based on in vitro models. In this paper, we present an approach for segmentation of DIC images of cultured cells using G-neighbor smoothing followed by Kuwahara filtering and a local standard deviation approach for boundary detection. NIH FIJI/ImageJ tools are used to create the ground truth dataset. The results of this work indicate that detection of cell boundaries using a segmentation approach, even under realistic measurement conditions, is a challenging problem.
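
The boundary detection step can be sketched with a local standard deviation filter: the per-pixel standard deviation in a sliding window is high near cell boundaries in DIC images and low in flat regions. The window size and threshold are assumptions, and a plain Gaussian blur stands in here for the G-neighbor smoothing and Kuwahara filtering stages.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

# Local standard deviation computed as sqrt(E[x^2] - E[x]^2) over a sliding
# window, thresholded to obtain a rough boundary mask.

def local_std(img, size=7):
    img = np.asarray(img, float)
    mean = uniform_filter(img, size)
    mean_sq = uniform_filter(img ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def boundary_mask(img, size=7, sigma=1.5, rel_threshold=0.5):
    smoothed = gaussian_filter(np.asarray(img, float), sigma)
    s = local_std(smoothed, size)
    return s > rel_threshold * s.max()        # True near likely cell boundaries
```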

* 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, 2017, pp. 2382-2387
High voltage insulators are widely deployed in power systems to isolate the live and dead parts of overhead lines as well as to support the power line conductors mechanically. Permanent, secure and safe operation of power transmission lines requires that high voltage insulators are inspected and monitored regularly. Severe environmental conditions influence the insulator surface and change the creepage distance. Consequently, power utilities and transmission companies face significant operational problems due to insulator damage or contamination. In this study, a new technique is developed for real-time inspection of insulators and estimation of the snow, ice and water over the insulator surface, which can pose a potential risk of operational breakdown. To examine the proposed system, practical experiments were conducted using a ceramic insulator, capturing images with snow, ice and wet surface conditions. Gabor and standard deviation filters are utilized for image feature extraction. The best recognition accuracy achieved was 87%, using the standard deviation statistical approach.
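
The feature extraction stage can be sketched as a small Gabor filter bank plus a local standard deviation measure, each summarized by simple statistics per image; the kernel parameters and the choice of summary statistics are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Gabor + standard deviation feature extraction sketch: a few orientations of a
# Gabor kernel and a local std map, each reduced to mean/std statistics.

def gabor_kernel(theta, sigma=3.0, lam=8.0, gamma=0.5, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def extract_features(img, n_orientations=4):
    img = np.asarray(img, float)
    feats = []
    for k in range(n_orientations):
        resp = convolve(img, gabor_kernel(np.pi * k / n_orientations))
        feats += [resp.mean(), resp.std()]
    mean = uniform_filter(img, 7)
    local_std = np.sqrt(np.maximum(uniform_filter(img**2, 7) - mean**2, 0.0))
    feats += [local_std.mean(), local_std.std()]
    return np.array(feats)
```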

* 2017 International Symposium on Electrical Insulating Materials, September 12-15, 2017
A non-invasive method for monitoring heart activity can help to reduce deaths caused by heart disorders such as stroke, arrhythmia and heart attack. The human voice can be considered biometric data that can be used for estimation of the heart rate. In this paper, we propose a method for estimating the heart rate from human speech dynamically, using voice signal analysis and the development of an empirical linear predictor model. The correlation between the voice signal and heart rate is established by classifiers, and heart rates, with or without emotions, are predicted using linear models. The prediction accuracy was tested using data collected from 15 subjects, comprising about 4050 samples of speech signals and corresponding electrocardiogram samples. The proposed approach can be used for early non-invasive detection of heart rate changes that can be correlated to the emotional state of the individual, and also as a tool for diagnosis of heart conditions in real-time situations.
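
The empirical linear predictor can be illustrated as an ordinary least-squares fit from per-frame voice features to measured heart rate; the feature set used here (frame energy and zero-crossing rate) is an assumed stand-in for the voice analysis actually used.

```python
import numpy as np

# Linear predictor sketch: heart_rate ~ w0 + w1*energy + w2*zero_crossing_rate,
# fitted by least squares on speech frames paired with ECG-derived heart rates.

def frame_features(signal, frame_len=400):
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len, frame_len)]
    energy = [np.mean(f ** 2) for f in frames]
    zcr = [np.mean(np.abs(np.diff(np.sign(f)))) / 2 for f in frames]
    return np.column_stack([np.ones(len(frames)), energy, zcr])

def fit_predictor(features, heart_rates):
    w, *_ = np.linalg.lstsq(features, heart_rates, rcond=None)
    return w

def predict(features, w):
    return features @ w
```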

* to appear in Proceedings of International Conference on Advances in Computing, Communications and Informatics, IEEE, 2016
The square and rectangular shapes of the pixels used in digital images for sensing and display purposes introduce several inaccuracies in the representation of digital images. The major disadvantage of square pixel shapes is the inability to accurately capture and display details in objects having variable orientations of edges, shapes and regions. This effect can be observed in the inaccurate representation of diagonal edges in low-resolution square pixel images. This paper explores a less investigated idea of using variable-shaped pixels for improving the visual quality of image scans without increasing the square pixel resolution. The proposed adaptive filtering technique reports an improvement in image PSNR.

* 4th International Conference on Advances in Computing, Communications and Informatics, August, 2015