Models, code, and papers for "Nandana Rajatheva":

Low Complexity Autoencoder based End-to-End Learning of Coded Communications Systems

Nov 19, 2019
Nuwanthika Rajapaksha, Nandana Rajatheva, Matti Latva-aho

End-to-end learning of a communications system using the deep learning-based autoencoder concept has drawn interest in recent research due to its simplicity, flexibility, and potential to adapt to complex channel models and practical system imperfections. In this paper, we compare the bit error rate (BER) performance of autoencoder-based systems and conventional channel-coded systems with convolutional coding, in order to understand the potential of deep learning-based systems as alternatives to conventional systems. In the simulations, the autoencoder implementation achieves a better BER than its equivalent half-rate convolutional-coded BPSK with hard-decision decoding in the 0-5 dB $E_{b}/N_{0}$ range, with a gap of less than 1 dB at a BER of $10^{-5}$. Furthermore, we propose a novel low-complexity autoencoder architecture for end-to-end learning of coded systems, which shows better BER performance than the baseline implementation. The newly proposed low-complexity autoencoder achieves a better BER than half-rate 16-QAM with hard-decision decoding over the full 0-10 dB $E_{b}/N_{0}$ range, and a better BER than soft-decision decoding in the 0-4 dB $E_{b}/N_{0}$ range.
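The autoencoder concept the paper builds on can be illustrated structurally. The sketch below, in plain NumPy with random (untrained) weights, shows only the data flow of an assumed (n, k) = (8, 4) rate-1/2 system: one-hot message, power-normalized transmit layer, AWGN channel at a given $E_{b}/N_{0}$, and an argmax receiver. It is not the authors' trained architecture, and its BER is meaningless without training.

```python
import numpy as np

# Structural sketch (NOT the paper's exact model) of an (n, k) autoencoder
# communications system: k information bits select one of M = 2^k messages,
# mapped to n channel uses, sent over an AWGN channel, then decoded.
# Weights are random placeholders standing in for trained dense layers.

rng = np.random.default_rng(0)

k, n = 4, 8            # rate k/n = 1/2, matching the half-rate baselines
M = 2 ** k             # number of distinct messages

def relu(x):
    return np.maximum(x, 0.0)

W_tx = rng.normal(size=(M, n))    # "transmitter" weights (untrained)
W_rx1 = rng.normal(size=(n, M))   # "receiver" hidden layer (untrained)
W_rx2 = rng.normal(size=(M, M))   # "receiver" output layer (untrained)

def transmit(msg_idx):
    """One-hot encode the message and map it to n power-normalized symbols."""
    x = np.zeros(M)
    x[msg_idx] = 1.0
    s = x @ W_tx
    # Average-power normalization, a common constraint in autoencoder systems.
    return s / np.sqrt(np.mean(s ** 2))

def awgn(s, ebno_db):
    """Add white Gaussian noise for a given Eb/N0 (dB) at code rate k/n."""
    ebno = 10 ** (ebno_db / 10)
    noise_std = np.sqrt(1.0 / (2 * (k / n) * ebno))
    return s + noise_std * rng.normal(size=s.shape)

def receive(y):
    """Decode: hidden layer + linear output, argmax over message classes."""
    return int(np.argmax(relu(y @ W_rx1) @ W_rx2))

s = transmit(3)
y = awgn(s, ebno_db=5.0)
msg_hat = receive(y)
```

In a real implementation, the transmitter and receiver layers would be trained jointly with a cross-entropy loss so that the learned symbol constellation becomes robust to the channel noise.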

* 6 pages. Submitted to IEEE WCNC 2020 

Autonomous Driving without a Burden: View from Outside with Elevated LiDAR

Oct 31, 2018
Nalin Jayaweera, Nandana Rajatheva, Matti Latva-aho

The current autonomous driving architecture places a heavy signal-processing burden on the graphics processing units (GPUs) in the car. This directly translates into battery drain and lower energy efficiency, crucial factors in electric vehicles. The cause is the high bit rate of the captured video and other sensing inputs, mainly from the Light Detection and Ranging (LiDAR) sensor on top of the car, an essential feature of autonomous vehicles. LiDAR is needed to obtain a high-precision map for the vehicle AI to make relevant decisions, yet the view from the car remains quite restricted; the same holds for cars without LiDAR, such as Tesla's. Existing LiDARs and cameras have limited horizontal and vertical fields of view, and in all cases it can be argued that precision is lower, given the smaller map generated. This also results in the accumulation of a large amount of data, on the order of several TB per day, whose storage becomes challenging. To reduce the effort for the processing units inside the car, the data would need to be uplinked to the edge or to an appropriately placed cloud; however, the required data rates, on the order of several Gbps, are difficult to meet even with the advent of 5G. Therefore, we propose a coordinated set of LiDARs placed outside at an elevation, providing an integrated view with a much larger field of view (FoV) to a centralized decision-making body, which then sends the required control actions to the vehicles at a lower bit rate in the downlink and with the required latency. Calculations based on industry-standard equipment from several manufacturers show that this is not just a concept but a feasible system that can be implemented. The proposed system can play a supportive role alongside the existing autonomous vehicle architecture and is easily applicable in urban areas.
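The uplink argument above can be made concrete with a rough calculation. All sensor parameters below (LiDAR point rate, bytes per point, camera count, resolution, frame rate) are illustrative assumptions, not the manufacturer figures the authors used:

```python
# Back-of-the-envelope check of the uplink problem: raw sensor data rates
# quickly reach several Gbps. All figures below are illustrative assumptions,
# NOT the industry-equipment numbers from the paper.

LIDAR_POINTS_PER_S = 2_000_000   # assumed high-end spinning LiDAR point rate
BYTES_PER_POINT = 28             # assumed: xyz, intensity, timestamp, etc.

CAMERAS = 8                      # assumed surround-view camera count
PIXELS = 2_000_000               # assumed 2 MP per camera
BITS_PER_PIXEL = 12              # assumed raw sensor depth
FPS = 30                         # assumed frame rate

lidar_bps = LIDAR_POINTS_PER_S * BYTES_PER_POINT * 8
camera_bps = CAMERAS * PIXELS * BITS_PER_PIXEL * FPS
total_bps = lidar_bps + camera_bps

total_gbps = total_bps / 1e9                   # raw aggregate rate in Gbps
raw_tb_per_hour = total_bps / 8 * 3600 / 1e12  # raw storage per driving hour

print(f"raw uplink ~ {total_gbps:.1f} Gbps, ~ {raw_tb_per_hour:.1f} TB/hour")
```

Even before compression, the raw aggregate sits well above what a per-vehicle 5G uplink can sustain reliably, which motivates moving the sensing infrastructure outside the car and sending only low-rate control actions downlink.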
