Models, code, and papers for "Mehmet Turkan":

Boosting Dictionary Learning with Error Codes

Jan 15, 2017
Yigit Oktar, Mehmet Turkan

In conventional sparse-representation-based dictionary learning algorithms, the initial dictionary is generally assumed to be a proper representative of the system at hand. However, this may not be the case, especially in systems restricted to random initializations. Therefore, a supposedly optimal state update based on such an improper model might lead to undesired effects that are conveyed to successive iterations. In this paper, we propose a dictionary learning method that includes a general feedback process: it codes the intermediate error left over from a less intensive initial learning attempt and then adjusts the sparse codes accordingly. Experimental observations show that this additional step vastly improves rates of convergence in high-dimensional cases and also leads to better converged states under random initializations. Improvements also scale up with more lenient sparsity constraints.

* 5 pages, 5 figures 
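
The abstract describes the feedback process only at a high level, so the following is a minimal NumPy/scikit-learn sketch of the general idea, not the paper's algorithm: it assumes OMP as the sparse coder, a MOD-style least-squares dictionary update, and an illustrative residual-coding adjustment of the sparse codes.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp

def learn_dictionary(Y, n_atoms, sparsity, n_iter=20, seed=0):
    """Y: (n_features, n_samples) training data, one signal per column."""
    rng = np.random.default_rng(seed)
    # Random initialization, the restricted setting the abstract targets.
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_iter):
        # 1) Less intensive initial attempt: sparse-code the data over D.
        X = orthogonal_mp(D, Y, n_nonzero_coefs=sparsity)
        # 2) Feedback: code the intermediate residual error over the same D,
        #    then adjust the sparse codes accordingly (illustrative rule).
        X_err = orthogonal_mp(D, Y - D @ X, n_nonzero_coefs=sparsity)
        X = X + X_err
        # 3) MOD-style least-squares dictionary update and renormalization.
        D = Y @ np.linalg.pinv(X)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, X

# Example: 64-dimensional signals, 128 random atoms, 4-sparse codes.
Y = np.random.default_rng(1).standard_normal((64, 500))
D, X = learn_dictionary(Y, n_atoms=128, sparsity=4)
```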

Autonomous Cars: Vision based Steering Wheel Angle Estimation

Jan 30, 2019
Kemal Alkin Gunbay, Mert Arikan, Mehmet Turkan

Machine learning models, which are frequently used in self-driving cars, are trained by matching captured images of the road to the measured angle of the steering wheel. The steering wheel angle is generally fetched from a steering angle sensor, which is tightly coupled to the physical characteristics of the vehicle at hand. A model-agnostic autonomous car kit is therefore very difficult to develop, and autonomous vehicles need more training data. The proposed vision-based steering angle estimation system takes a new approach: it matches images of the road captured by an outdoor camera to images of the steering wheel from an onboard camera, avoiding both the burden of collecting model-dependent training data and the use of any additional electromechanical hardware.

* 5 pages, 6 figures 
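
As a rough illustration of the pairing idea, here is a PyTorch sketch that trains a small road-image regressor against an angle label recovered visually from the onboard wheel image. The wheel_angle_from_image helper is a hypothetical stand-in (a toy intensity-orientation estimate) for whatever visual wheel-angle recovery the system actually performs; the network and training step are likewise illustrative.

```python
import torch
import torch.nn as nn

class SteeringRegressor(nn.Module):
    """Small CNN mapping a road image to a steering angle estimate."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, road_image):
        return self.head(self.features(road_image))

def wheel_angle_from_image(wheel_image):
    """Toy visual label: principal intensity orientation of the onboard
    wheel image; a stand-in for, e.g., tracking a marker on the wheel."""
    g = wheel_image.mean(dim=1)                      # (B, H, W) grayscale
    B, H, W = g.shape
    y, x = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                          torch.arange(W, dtype=torch.float32), indexing="ij")
    w = g.flatten(1)
    w = w / (w.sum(dim=1, keepdim=True) + 1e-8)      # intensity weights
    x, y = x.flatten(), y.flatten()
    mx = (w * x).sum(dim=1, keepdim=True)            # weighted centroid
    my = (w * y).sum(dim=1, keepdim=True)
    cxx = (w * (x - mx) ** 2).sum(dim=1)             # second moments
    cyy = (w * (y - my) ** 2).sum(dim=1)
    cxy = (w * (x - mx) * (y - my)).sum(dim=1)
    return (0.5 * torch.atan2(2 * cxy, cxx - cyy)).unsqueeze(1)

# One training step on a synthetic (road image, wheel image) pair batch.
model = SteeringRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
road, wheel = torch.rand(8, 3, 64, 64), torch.rand(8, 3, 64, 64)
loss = nn.functional.mse_loss(model(road), wheel_angle_from_image(wheel))
opt.zero_grad(); loss.backward(); opt.step()
```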

Convolutional Neural Networks: A Binocular Vision Perspective

Dec 21, 2019
Yigit Oktar, Diclehan Karakaya, Oguzhan Ulucan, Mehmet Turkan

It is arguable whether single-camera (monocular) image datasets are sufficient to train and test convolutional neural networks (CNNs) that imitate the biological neural network structures of the human brain. Since the human visual system works binocularly, the collaboration of the two eyes with the two brain lobes needs more investigation for improvements in such CNN-based visual imagery analysis applications. It is indeed questionable whether the respective visual fields of each eye and the associated brain lobes are responsible for different learning abilities for the same scene. Such open questions in this field of research need rigorous investigation in order to further understand the nature of the human visual system and, hence, improve currently available deep learning applications. This position paper analyses a binocular CNN architecture that is more analogous to the biological structure of the human visual system than conventional deep learning techniques. Taking the structure called the optic chiasma into account, this architecture consists of two parallel CNN structures, each associated with one visual field and one brain lobe, connected later by a fully connected stage, possibly as in the primary visual cortex (V1). Experimental results demonstrate that binocular learning of two different visual fields leads to better classification rates on average when compared to classical CNN architectures.
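
As an illustration of such a two-branch design, the following PyTorch sketch splits each input image into left and right visual half-fields, processes them with parallel convolutional branches, and fuses them in a shared fully connected stage. The split point, layer sizes, and fusion scheme are assumptions for the sketch, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

def conv_branch():
    # One "hemisphere": a small convolutional feature extractor.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    )

class BinocularCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.left_branch = conv_branch()   # processes the left visual field
        self.right_branch = conv_branch()  # processes the right visual field
        # Late fusion by fully connected layers, loosely analogous to V1.
        self.classifier = nn.Sequential(
            nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, image):
        # Split each input image into left/right visual half-fields.
        W = image.shape[-1]
        left, right = image[..., : W // 2], image[..., W // 2 :]
        fused = torch.cat([self.left_branch(left),
                           self.right_branch(right)], dim=1)
        return self.classifier(fused)

# Example: a batch of 32x64 RGB images split into two 32x32 half-fields.
logits = BinocularCNN()(torch.randn(8, 3, 32, 64))  # -> shape (8, 10)
```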

