Models, code, and papers for "Mehmet Turkan":

Boosting Dictionary Learning with Error Codes

Jan 15, 2017
Yigit Oktar, Mehmet Turkan

In conventional sparse representation-based dictionary learning algorithms, the initial dictionary is generally assumed to be a proper representative of the system at hand. However, this may not be the case, especially in systems restricted to random initializations. Therefore, a supposedly optimal state update based on such an improper model may lead to undesired effects that are conveyed to successive iterations. In this paper, we propose a dictionary learning method that includes a general feedback process: it codes the intermediate error left over from a less intensive initial learning attempt and then adjusts the sparse codes accordingly. Experimental observations show that this additional step vastly improves the rate of convergence in high-dimensional cases and also yields better converged states under random initializations. The improvements scale up further with more lenient sparsity constraints.

* 5 pages, 5 figures 
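As a rough illustration of the general idea in the abstract, the Python sketch below runs an ordinary sparse-coding/dictionary-update pass and then a feedback step that sparse-codes the leftover reconstruction error and adjusts the codes. The OMP coder, the MOD-style dictionary update, and the additive code-adjustment rule are assumptions made for the sketch, not the paper's exact algorithm.

```python
# Minimal sketch: dictionary learning with an error-coding feedback step.
# The OMP coder, MOD-style update and additive adjustment are assumptions.
import numpy as np
from sklearn.decomposition import sparse_encode

rng = np.random.default_rng(0)
n_samples, n_features, n_atoms, sparsity = 200, 64, 128, 5

X = rng.standard_normal((n_samples, n_features))   # training signals
D = rng.standard_normal((n_atoms, n_features))     # random initial dictionary
D /= np.linalg.norm(D, axis=1, keepdims=True)

def mod_update(X, codes):
    """MOD-style dictionary update: least-squares fit of D to the codes."""
    D_new, *_ = np.linalg.lstsq(codes, X, rcond=None)
    norms = np.linalg.norm(D_new, axis=1, keepdims=True)
    return D_new / np.maximum(norms, 1e-12)          # renormalize atoms

for it in range(10):
    # 1) ordinary sparse coding + dictionary update (the initial attempt)
    codes = sparse_encode(X, D, algorithm="omp", n_nonzero_coefs=sparsity)
    D = mod_update(X, codes)

    # 2) feedback step: code the intermediate error, then adjust the codes
    error = X - codes @ D
    error_codes = sparse_encode(error, D, algorithm="omp",
                                n_nonzero_coefs=sparsity)
    codes = codes + error_codes                      # assumed additive adjustment
    D = mod_update(X, codes)

print("final residual norm:", np.linalg.norm(X - codes @ D))
```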

The Mimicry Game: Towards Self-recognition in Chatbots

Feb 06, 2020
Yigit Oktar, Erdem Okur, Mehmet Turkan

In the standard Turing test, a machine has to prove its humanness to the judges. By successfully imitating a thinking entity such as a human, the machine then proves that it can also think. However, many objections have been raised against the validity of this argument, claiming that the Turing test is not a tool to demonstrate the existence of general intelligence or thinking activity. In this light, alternatives to the Turing test are to be investigated. Self-recognition tests applied to animals through mirrors appear to be a viable alternative for demonstrating the existence of a type of general intelligence. The methodology here constructs a textual version of the mirror test by placing the chatbot (in this context) as the one and only judge, which must figure out in an unsupervised manner whether the contacted party is an other, a mimicker, or itself. This textual version of the mirror test is objective, self-contained, and mostly immune to the objections raised against the Turing test. Any chatbot passing this textual mirror test should have or acquire a thought mechanism that can be referred to as the inner voice, answering Turing's original and long-standing question "Can machines think?" in a constructive manner.
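Purely as an illustration of the setting (and not of the paper's unsupervised methodology), the toy Python skeleton below casts the chatbot as the only judge and has it label a contacted party as self, mimicker, or other using a trivially scripted probe-and-compare rule; every class, function, and decision rule here is hypothetical.

```python
# Toy skeleton of the textual mirror-test setting: the judge chatbot must
# decide whether its interlocutor is an other, a mimicker, or itself.
# The probe/compare rule below is an illustrative assumption only.
import random

class ScriptedBot:
    """Stand-in chatbot: answers deterministically from its own persona seed."""
    def __init__(self, seed):
        rng = random.Random(seed)
        self.persona = [rng.randrange(1000) for _ in range(16)]
    def reply(self, probe: str) -> str:
        return f"ans-{self.persona[hash(probe) % len(self.persona)]}"

class Mimicker:
    """Parrots the judge's probe back instead of having its own persona."""
    def reply(self, probe: str) -> str:
        return probe

def judge(me: ScriptedBot, contacted) -> str:
    probes = [f"probe-{random.randrange(10**6)}" for _ in range(5)]
    matches = echoes = 0
    for p in probes:
        answer = contacted.reply(p)
        if answer == me.reply(p):
            matches += 1            # same internal process -> likely self
        elif p in answer:
            echoes += 1             # parroting behaviour -> likely mimicker
    if matches == len(probes):
        return "self"
    return "mimicker" if echoes > 0 else "other"

me = ScriptedBot(seed=42)
print(judge(me, me))                    # -> "self"
print(judge(me, ScriptedBot(seed=7)))   # -> "other"
print(judge(me, Mimicker()))            # -> "mimicker"
```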


Autonomous Cars: Vision based Steering Wheel Angle Estimation

Jan 30, 2019
Kemal Alkin Gunbay, Mert Arikan, Mehmet Turkan

Machine learning models, which are frequently used in self-driving cars, are trained by matching captured images of the road with the measured angle of the steering wheel. The steering wheel angle is generally fetched from a steering angle sensor, which is tightly coupled to the physical aspects of the vehicle at hand. Therefore, a model-agnostic autonomous car kit is very difficult to develop, and autonomous vehicles need more training data. The proposed vision-based steering angle estimation system takes a new approach that simply matches images of the road captured by an outdoor camera with images of the steering wheel from an onboard camera, avoiding both the burden of collecting model-dependent training data and the use of any other electromechanical hardware.

* 5 pages, 6 figures 
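To make the training setup concrete, here is a minimal PyTorch sketch in which road-camera frames are regressed onto steering-wheel angles that are themselves read off visually from an onboard wheel-camera frame, so no steering-angle sensor is involved. The network layout and the wheel-angle extraction stub are illustrative assumptions, not the system described in the paper.

```python
# Minimal sketch: road image -> steering angle, with the "label" taken from
# a vision-based reading of the wheel camera instead of a physical sensor.
import torch
import torch.nn as nn

def estimate_wheel_angle(wheel_img: torch.Tensor) -> torch.Tensor:
    """Placeholder for the vision-based wheel-angle reader (e.g. registering
    the wheel image against a reference pose); here it returns a dummy value."""
    return wheel_img.mean(dim=(1, 2, 3))            # fake angle per frame

class RoadToAngleCNN(nn.Module):
    """Small regression CNN: road frame (3x66x200) -> one steering angle."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(48, 1))
    def forward(self, x):
        return self.head(self.features(x)).squeeze(1)

# One illustrative training step on random stand-in frames.
road_frames = torch.rand(8, 3, 66, 200)     # outdoor (road) camera batch
wheel_frames = torch.rand(8, 3, 64, 64)     # onboard (steering wheel) camera batch

model = RoadToAngleCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

targets = estimate_wheel_angle(wheel_frames)        # visual "labels", no sensor
loss = nn.functional.mse_loss(model(road_frames), targets)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```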

Convolutional Neural Networks: A Binocular Vision Perspective

Dec 21, 2019
Yigit Oktar, Diclehan Karakaya, Oguzhan Ulucan, Mehmet Turkan

It is arguable whether single-camera (monocular) image datasets are sufficient to train and test convolutional neural networks (CNNs) that aim to imitate the biological neural network structures of the human brain. Since the human visual system works binocularly, the collaboration of the two eyes with the two brain lobes needs more investigation for improvements in such CNN-based visual imagery analysis applications. It is indeed questionable whether the respective visual fields of each eye and the associated brain lobes are responsible for different learning abilities of the same scene. There are open questions in this field of research which need rigorous investigation in order to further understand the nature of the human visual system and hence improve currently available deep learning applications. This position paper analyses a binocular CNN architecture that is more analogous to the biological structure of the human visual system than conventional deep learning techniques. Taking the structure called the optic chiasma into account, this architecture basically consists of two parallel CNN structures associated with each visual field and brain lobe, fully connected at a later stage, possibly as in the primary visual cortex (V1). Experimental results demonstrate that binocular learning of two different visual fields leads to better classification rates on average when compared to classical CNN architectures.
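A minimal PyTorch sketch of such a two-branch ("binocular") layout is given below: each visual field is processed by its own small convolutional stream, and the two streams are merged only in the fully connected stage, loosely mirroring the optic-chiasma/V1 analogy. The split, depths, and layer widths are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch of a binocular CNN: two parallel conv branches (one per
# visual field), fused late in fully connected layers.
import torch
import torch.nn as nn

def make_branch():
    """One monocular convolutional stream."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(4), nn.Flatten(),       # -> 32*4*4 features
    )

class BinocularCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.left_branch = make_branch()     # left visual field
        self.right_branch = make_branch()    # right visual field
        self.classifier = nn.Sequential(     # late fusion, fully connected
            nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )
    def forward(self, x):
        # Split the frame into two "visual fields" along the width.
        left, right = x.chunk(2, dim=3)
        fused = torch.cat([self.left_branch(left),
                           self.right_branch(right)], dim=1)
        return self.classifier(fused)

model = BinocularCNN(num_classes=10)
images = torch.rand(4, 3, 64, 64)            # stand-in image batch
print(model(images).shape)                   # -> torch.Size([4, 10])
```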

