Models, code, and papers for "Dusit Niyato":

Optimal Stochastic Package Delivery Planning with Deadline: A Cardinality Minimization in Routing

Feb 28, 2018
Suttinee Sawadsitang, Siwei Jiang, Dusit Niyato, Ping Wang

The Vehicle Routing Problem with Private fleet and common Carrier (VRPPC) has been proposed to help a supplier manage package delivery services from a single depot to multiple customers. Most existing VRPPC works consider deterministic parameters, which may not be practical, and uncertainty has to be taken into account. In this paper, we propose the Optimal Stochastic Delivery Planning with Deadline (ODPD) to help a supplier plan and optimize package delivery. The aim of ODPD is to serve all customers within a given deadline while considering the randomness in customer demands and traveling time. We formulate the ODPD as a stochastic integer programming problem and use the cardinality minimization approach to calculate the deadline violation probability. To accelerate computation, the L-shaped decomposition method is adopted. We conduct an extensive performance evaluation based on real customer locations and traveling times from Google Maps.

* 7 pages, 6 figures, 2017 IEEE 86th Vehicular Technology Conference (VTC-Fall) 
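
A minimal sketch of the cardinality idea in this paper, under assumed leg-time distributions and an assumed deadline: for a fixed route, the deadline violation probability is the normalized count (cardinality) of sampled scenarios whose total traveling time exceeds the deadline.

```python
# Monte Carlo estimate of the deadline-violation probability of one route.
# Illustration only: the leg-time distributions and the deadline are
# assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_scenarios = 10_000
n_legs = 8            # legs of one delivery route (assumed)
deadline = 240.0      # minutes (assumed)

# Random traveling time per leg, e.g., lognormal around a nominal value.
nominal = rng.uniform(15, 40, size=n_legs)
leg_times = rng.lognormal(mean=np.log(nominal), sigma=0.3,
                          size=(n_scenarios, n_legs))
route_time = leg_times.sum(axis=1)

# Cardinality of the violating-scenario set, normalized into a probability.
violations = np.count_nonzero(route_time > deadline)
print(f"estimated P(route time > deadline) = {violations / n_scenarios:.4f}")
```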

Re-route Package Pickup and Delivery Planning with Random Demands

Jul 24, 2019
Suttinee Sawadsitang, Dusit Niyato, Kongrath Suankaewmanee, Puay Siew Tan

Recently, increasing competition in the logistics business has introduced new challenges to the vehicle routing problem (VRP). Re-route planning, also known as dynamic VRP, is one of the important challenges. Re-route planning has to be performed when new customers request deliveries while the delivery vehicles, i.e., trucks, are serving other customers. While re-route planning has been studied in the literature, most existing works do not consider different uncertainties. Therefore, in this paper, we propose two systems: (i) an offline package pickup and delivery planning with stochastic demands (PDPSD) system and (ii) a re-route package pickup and delivery planning with stochastic demands (Re-route PDPSD) system. Accordingly, we formulate the PDPSD system as a two-stage stochastic optimization problem. We then extend the PDPSD system to the Re-route PDPSD system with a re-route algorithm. Furthermore, we evaluate the performance of the proposed systems by using the dataset from the Solomon benchmark suite and real data from a Singapore logistics company. The results show that the PDPSD system achieves a lower cost than the baseline model. In addition, the Re-route PDPSD system helps the supplier efficiently serve more customers while the trucks are already on the road.

* 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall) 
* 6 pages, 4 figures, 2 tables 
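
As a toy illustration of re-route planning, the sketch below uses an assumed cheapest-insertion heuristic (not the paper's Re-route PDPSD algorithm) to splice a newly requested stop into a truck's remaining route at the position that adds the least distance.

```python
# Cheapest-insertion re-routing for one truck already on the road.
from math import dist

def insertion_cost(route, i, new_stop):
    """Extra distance from inserting new_stop between route[i-1] and route[i]."""
    a, b = route[i - 1], route[i]
    return dist(a, new_stop) + dist(new_stop, b) - dist(a, b)

def cheapest_insertion(route, new_stop):
    """Return the route with new_stop inserted at the cheapest position."""
    best = min(range(1, len(route)), key=lambda i: insertion_cost(route, i, new_stop))
    return route[:best] + [new_stop] + route[best:]

# Remaining route: current truck position, two pending stops, return to depot.
route = [(0.0, 0.0), (2.0, 1.0), (4.0, 3.0), (0.0, 0.0)]
print(cheapest_insertion(route, (3.0, 2.0)))
```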

Mobile Edge Computing, Blockchain and Reputation-based Crowdsourcing IoT Federated Learning: A Secure, Decentralized and Privacy-preserving System

Jun 26, 2019
Yang Zhao, Jun Zhao, Linshan Jiang, Rui Tan, Dusit Niyato

Internet-of-Things (IoT) companies strive to get feedback from users to improve their products and services. However, traditional surveys cannot reflect the actual conditions of customers due to their limited questions. Besides, survey results are affected by various subjective factors. In contrast, the recorded usage of IoT devices reflects customers' behaviours more comprehensively and accurately. We design an intelligent system that helps IoT device manufacturers take advantage of customers' data and build a machine learning model to predict customers' requirements and possible consumption behaviours with federated learning (FL) technology. The FL consists of two stages. In the first stage, customers train the initial model collaboratively using their phones and the edge computing server; the mobile edge computing server's high computation power can assist customers' local training. Customers first collect data from various IoT devices using their phones, and then download and train the initial model with their data. During training, customers first extract features using their mobiles and then add Laplacian noise to the extracted features based on differential privacy, a formal and popular notion to quantify privacy. After obtaining the local model, customers sign their models and send them to the blockchain. We use the blockchain to replace the centralized aggregator, which belongs to a third party in FL. In the second stage, miners calculate the averaged model using the collected models sent from customers. By the end of the crowdsourcing job, one of the miners, selected as the temporary leader, uploads the model to the blockchain. Besides, to attract more customers to participate in the crowdsourcing FL, we design an incentive mechanism, which rewards participants with coins that can be used to purchase other services provided by the company.
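
A minimal sketch of the privacy step described above, with assumed sensitivity and epsilon values: Laplacian noise, calibrated by the differential-privacy parameters, is added to the locally extracted features before anything leaves the customer's device.

```python
import numpy as np

rng = np.random.default_rng(42)

def perturb_features(features, sensitivity, epsilon):
    """Add Laplace(0, sensitivity/epsilon) noise to each feature coordinate."""
    scale = sensitivity / epsilon
    return features + rng.laplace(loc=0.0, scale=scale, size=features.shape)

features = rng.normal(size=(32,))   # features extracted on the phone (stand-in)
private = perturb_features(features, sensitivity=1.0, epsilon=0.5)
```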


Distributed Learning for Channel Allocation Over a Shared Spectrum

Feb 19, 2019
S. M. Zafaruddin, Ilai Bistritz, Amir Leshem, Dusit Niyato

Channel allocation is the task of assigning channels to users such that some objective (e.g., sum-rate) is maximized. In centralized networks such as cellular networks, this task is carried out by the base station, which gathers the channel state information (CSI) from the users and computes the optimal solution. In distributed networks such as ad-hoc and device-to-device (D2D) networks, no base station exists, and conveying global CSI between users is costly or simply impractical. When the CSI is time-varying and unknown to the users, the users face the challenge of both learning the channel statistics online and converging to a good channel allocation. This introduces a multi-armed bandit (MAB) scenario with multiple decision makers. If two or more users choose the same channel, a collision occurs and they all receive zero reward. We propose a distributed channel allocation algorithm that each user runs and that converges to the optimal allocation while achieving an order-optimal regret of O(log T). The algorithm is based on a carrier sensing multiple access (CSMA) implementation of the distributed auction algorithm. It does not require any exchange of information between users. Users need only observe a single channel at a time and sense whether there is a transmission on that channel, without decoding the transmissions or identifying the transmitting users. We demonstrate the performance of our algorithm using simulated LTE and 5G channels.
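
The sketch below simulates the multi-player bandit setting described above, where colliding users receive zero reward. Each user independently runs plain UCB1 with random tie-breaking; this is a baseline illustration of the setting, not the paper's CSMA-based distributed auction algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, n_channels, horizon = 3, 5, 5000
mean_rate = rng.uniform(0.2, 0.9, size=(n_users, n_channels))  # assumed means

counts = np.ones((n_users, n_channels))    # pretend each arm was pulled once...
rewards = np.ones((n_users, n_channels))   # ...and returned reward 1 (optimistic)

for t in range(1, horizon + 1):
    ucb = rewards / counts + np.sqrt(2 * np.log(t + 1) / counts)
    ucb += rng.normal(scale=1e-6, size=ucb.shape)   # random tie-breaking
    choices = ucb.argmax(axis=1)
    for u, c in enumerate(choices):
        # Collision: two or more users on the same channel get zero reward.
        collided = np.count_nonzero(choices == c) > 1
        r = 0.0 if collided else float(rng.random() < mean_rate[u, c])
        counts[u, c] += 1
        rewards[u, c] += r

print("final per-user channel choices:", choices)
```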


Joint Ground and Aerial Package Delivery Services: A Stochastic Optimization Approach

Aug 15, 2018
Suttinee Sawadsitang, Dusit Niyato, Puay-Siew Tan, Ping Wang

Unmanned aerial vehicles (UAVs), also known as drones, have emerged as a promising mode of fast, energy-efficient, and cost-effective package delivery. A considerable number of works have studied different aspects of drone package delivery service by a supplier, one of which is delivery planning. However, existing works addressing the planning issues consider a simple case of perfect delivery without service interruption, e.g., due to an accident, even though such interruption is common and realistic. Therefore, this paper introduces the joint ground and aerial delivery service optimization and planning (GADOP) framework. The framework explicitly incorporates the uncertainty of drone package delivery, i.e., takeoff and breakdown conditions. The GADOP framework aims to minimize the total delivery cost given practical constraints, e.g., a traveling distance limit. Specifically, we formulate the GADOP framework as a three-stage stochastic integer programming model. To deal with the high complexity of the problem, a decomposition method is adopted. Then, the performance of the GADOP framework is evaluated by using two datasets: the Solomon benchmark suite and real data from a Singapore logistics company. The performance evaluation clearly shows that the GADOP framework can achieve a significantly lower total payment than the baseline methods, which do not take uncertainty into account.

* IEEE Transactions on Intelligent Transportation Systems, 2018 
* 14 pages, 15 figures, accepted as a regular paper 
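
A toy expected-cost comparison (our illustration, not the GADOP model) of assigning one customer to a drone versus a truck when the drone may fail to take off or break down; all probabilities and costs below are assumed values.

```python
def expected_drone_cost(c_drone, c_truck_backup, p_takeoff_fail, p_breakdown):
    """Expected cost when a failed drone delivery is redone by a backup truck."""
    p_fail = p_takeoff_fail + (1 - p_takeoff_fail) * p_breakdown
    return (1 - p_fail) * c_drone + p_fail * (c_drone + c_truck_backup)

c_truck = 10.0
drone = expected_drone_cost(c_drone=4.0, c_truck_backup=12.0,
                            p_takeoff_fail=0.05, p_breakdown=0.10)
print("assign to drone" if drone < c_truck else "assign to truck", round(drone, 2))
```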

Optimal Stochastic Delivery Planning in Full-Truckload and Less-Than-Truckload Delivery

Feb 04, 2018
Suttinee Sawadsitang, Rakpong Kaewpuang, Siwei Jiang, Dusit Niyato, Ping Wang

With increasing demand from emerging logistics businesses, the Vehicle Routing Problem with Private fleet and common Carrier (VRPPC) has been introduced to manage package delivery services from a supplier to customers. However, almost all existing studies focus on the deterministic problem, which assumes that all parameters are known perfectly at the time when the planning and routing decisions are made. In reality, some parameters are random and unknown. Therefore, in this paper, we consider the VRPPC with hard time windows and random demand, called Optimal Delivery Planning (ODP). The proposed ODP aims to minimize the total package delivery cost while meeting the customer time window constraints. We use stochastic integer programming to formulate the optimization problem, incorporating the customer demand uncertainty. Moreover, we evaluate the performance of the ODP using test data from a benchmark dataset and from the actual Singapore road map.

* 5 pages, 6 figures, 2017 IEEE 85th Vehicular Technology Conference (VTC Spring) 
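
A minimal hard-time-window feasibility check in the spirit of the constraint above (illustrative, not the ODP formulation): a route is feasible only if every customer is reached inside its [earliest, latest] window, with the truck waiting when it arrives early.

```python
def route_feasible(travel_times, windows, start=0.0):
    """travel_times[i]: time to customer i from the previous stop;
    windows[i]: (earliest, latest) service window of customer i."""
    t = start
    for leg, (earliest, latest) in zip(travel_times, windows):
        t += leg
        if t > latest:          # hard time window violated
            return False
        t = max(t, earliest)    # arrive early -> wait for the window to open
    return True

print(route_feasible([30, 20, 25], [(0, 60), (40, 90), (80, 120)]))  # True
```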

Toward a Robust Sparse Data Representation for Wireless Sensor Networks

Aug 02, 2015
Mohammad Abu Alsheikh, Shaowei Lin, Hwee-Pink Tan, Dusit Niyato

Compressive sensing has been successfully used for optimized operations in wireless sensor networks. However, raw data collected by sensors may be neither originally sparse nor easily transformed into a sparse data representation. This paper addresses the problem of transforming source data collected by sensor nodes into a sparse representation with a few nonzero elements. Our contributions address three major issues: 1) an effective method that extracts the population sparsity of the data, 2) a sparsity ratio guarantee scheme, and 3) a customized learning algorithm for the sparsifying dictionary. We introduce an unsupervised neural network to extract an intrinsic sparse coding of the data. The sparse codes are generated at the activation of the hidden layer using a sparsity nomination constraint and a shrinking mechanism. Our analysis using real data samples shows that the proposed method outperforms conventional sparsity-inducing methods.

* IEEE 40th Conference on Local Computer Networks (LCN), Clearwater Beach, FL, 2015, pp. 117-124 
* 8 pages 
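
A minimal sketch of producing sparse codes with one hidden layer and a soft-shrinkage on its activations, in the spirit of the shrinking mechanism described above. The weights here are random stand-ins; the paper learns them together with the sparsifying dictionary.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 64, 128                     # assumed sizes

W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))

def shrink(z, theta):
    """Soft-thresholding: zero out small activations, shrink the rest."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

x = rng.normal(size=(n_in,))                 # one sensor reading (stand-in)
code = shrink(x @ W_enc, theta=0.05)         # sparse code: few nonzeros
x_hat = code @ W_dec                         # reconstruction from the code
print("nonzero code elements:", np.count_nonzero(code), "of", n_hidden)
```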

Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

Mar 19, 2015
Mohammad Abu Alsheikh, Shaowei Lin, Dusit Niyato, Hwee-Pink Tan

Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review over the period 2002-2013 of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges.

* IEEE Communications Surveys & Tutorials, vol. 16, no. 4, pp. 1996-2018, Fourthquarter 2014 
* Accepted for publication in IEEE Communications Surveys and Tutorials 

Deep Reinforcement Learning for Time Scheduling in RF-Powered Backscatter Cognitive Radio Networks

Oct 03, 2018
Tran The Anh, Nguyen Cong Luong, Dusit Niyato, Ying-Chang Liang, Dong In Kim

In an RF-powered backscatter cognitive radio network, multiple secondary users communicate with a secondary gateway by backscattering, or by harvesting energy and actively transmitting their data, depending on the primary channel state. To coordinate the transmission of multiple secondary transmitters, the secondary gateway needs to schedule the backscattering time, energy harvesting time, and transmission time among them. However, under the dynamics of the primary channel and the uncertainty of the energy state of the secondary transmitters, it is challenging for the gateway to find a time scheduling mechanism that maximizes the total throughput. In this paper, we propose to use a deep reinforcement learning algorithm to derive an optimal time scheduling policy for the gateway. Specifically, to deal with the large state and action spaces of the problem, we adopt a Double Deep Q-Network (DDQN) that enables the gateway to learn the optimal policy. The simulation results clearly show that the proposed deep reinforcement learning algorithm outperforms non-learning schemes in terms of network throughput.
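
A minimal sketch of the Double DQN target computation at the heart of such a scheduler: the online network selects the next action and the target network evaluates it. The Q-values below are random placeholders; in the paper they come from trained deep networks over the gateway's channel and energy states.

```python
import numpy as np

rng = np.random.default_rng(0)
batch, n_actions, gamma = 4, 3, 0.99

q_online_next = rng.normal(size=(batch, n_actions))   # Q_online(s', .)
q_target_next = rng.normal(size=(batch, n_actions))   # Q_target(s', .)
reward = rng.random(batch)                            # throughput reward (stand-in)
done = np.zeros(batch)                                # 1.0 where the episode ended

a_star = q_online_next.argmax(axis=1)                 # select with the online net
target = reward + gamma * (1 - done) * q_target_next[np.arange(batch), a_star]
print(target)
```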


Mobile Big Data Analytics Using Deep Learning and Apache Spark

Feb 23, 2016
Mohammad Abu Alsheikh, Dusit Niyato, Shaowei Lin, Hwee-Pink Tan, Zhu Han

The proliferation of mobile devices, such as smartphones and Internet of Things (IoT) gadgets, has resulted in the recent mobile big data (MBD) era. Collecting MBD is unprofitable unless suitable analytics and learning methods are utilized to extract meaningful information and hidden patterns from the data. This article presents an overview and brief tutorial of deep learning in MBD analytics and discusses a scalable learning framework over Apache Spark. Specifically, distributed deep learning is executed as iterative MapReduce computing on many Spark workers. Each Spark worker learns a partial deep model on a partition of the overall MBD, and a master deep model is then built by averaging the parameters of all partial models. This Spark-based framework speeds up the learning of deep models consisting of many hidden layers and millions of parameters. We use a context-aware activity recognition application with a real-world dataset containing millions of samples to validate our framework and assess its speedup effectiveness.

* IEEE Network, vol. 30, no. 3, pp. 22-29, June 2016 
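
A minimal sketch of the parameter-averaging step described above, with the Spark workers simulated locally: each worker holds a partial model trained on its data partition, and the master model is the element-wise average of their parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_workers, n_params = 4, 1000

# Stand-ins for the parameter vectors of each worker's partial deep model.
partial_models = [rng.normal(size=n_params) for _ in range(n_workers)]

master_model = np.mean(partial_models, axis=0)   # one averaging round
```

On an actual cluster, the same averaging would be expressed as a reduce over the workers' parameter arrays rather than a local list.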

Incentive Design for Efficient Federated Learning in Mobile Networks: A Contract Theory Approach

May 16, 2019
Jiawen Kang, Zehui Xiong, Dusit Niyato, Han Yu, Ying-Chang Liang, Dong In Kim

To strengthen data privacy and security, federated learning has been proposed as an emerging machine learning technique that enables large-scale nodes, e.g., mobile devices, to distributedly train and globally share models without revealing their local data. This technique can not only significantly improve privacy protection for mobile devices, but also ensure good performance of the trained results collectively. Currently, most of the existing studies focus on optimizing federated learning algorithms to improve model training performance. However, incentive mechanisms that motivate mobile devices to join model training have been largely overlooked. Mobile devices suffer from considerable overhead in terms of computation and communication during the federated model training process. Without a well-designed incentive, self-interested mobile devices will be unwilling to join federated learning tasks, which hinders the adoption of federated learning. To bridge this gap, in this paper, we adopt contract theory to design an effective incentive mechanism for motivating mobile devices with high-quality (i.e., high-accuracy) data to participate in federated learning. Numerical results demonstrate that the proposed mechanism is efficient for federated learning with improved learning accuracy.

* submitted to the conference for potential publication 
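
A toy self-revealing-contract check (our illustration, not the paper's derivation): a menu of (data quality, reward) items is feasible if each device type earns non-negative utility from its own item (individual rationality) and prefers it to every other item (incentive compatibility). The linear cost model is an assumption.

```python
def utility(theta, quality, reward):
    """Type theta's utility: reward minus its cost of contributing quality."""
    return reward - quality / theta        # higher theta = cheaper to contribute

def feasible(menu, types):
    for item, theta in zip(menu, types):
        q_i, r_i = item
        if utility(theta, q_i, r_i) < 0:                          # IR
            return False
        for q_j, r_j in menu:                                     # IC
            if utility(theta, q_j, r_j) > utility(theta, q_i, r_i):
                return False
    return True

types = [1.0, 2.0]                # low- and high-quality device types (assumed)
menu = [(0.5, 0.6), (1.5, 1.2)]   # (quality, reward) item per type (assumed)
print(feasible(menu, types))      # True: IR and IC hold for both types
```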

Cyberattack Detection in Mobile Cloud Computing: A Deep Learning Approach

Dec 16, 2017
Khoi Khac Nguyen, Dinh Thai Hoang, Dusit Niyato, Ping Wang, Diep Nguyen, Eryk Dutkiewicz

With the rapid growth of mobile applications and cloud computing, mobile cloud computing has attracted great interest from both academia and industry. However, mobile cloud applications face security issues such as data integrity, users' confidentiality, and service availability. A preventive approach to such problems is to detect and isolate cyber threats before they can cause serious impact on the mobile cloud computing system. In this paper, we propose a novel framework that leverages a deep learning approach to detect cyberattacks in the mobile cloud environment. Through experimental results, we show that our proposed framework not only recognizes diverse cyberattacks, but also achieves a high accuracy (up to 97.11%) in detecting them. Furthermore, we present comparisons with current machine learning-based approaches to demonstrate the effectiveness of our proposed solution.

* 6 pages, 3 figures, 1 table, WCNC 2018 conference 
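
A minimal stand-in for the detection pipeline described above: a small neural-network classifier trained on labeled traffic features. A real experiment would use an intrusion-detection dataset rather than the synthetic data generated here.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for (traffic features, attack/benign label) pairs.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_tr, y_tr)
print(f"detection accuracy: {clf.score(X_te, y_te):.4f}")
```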

Convergence of Edge Computing and Deep Learning: A Comprehensive Survey

Jul 19, 2019
Yiwen Han, Xiaofei Wang, Victor C. M. Leung, Dusit Niyato, Xueqiang Yan, Xu Chen

Ubiquitous sensors and smart devices from factories and communities generate massive amounts of data, and ever-increasing computing power is driving the core of computation and services from the cloud to the edge of the network. As an important enabler broadly changing people's lives, from face recognition to ambitious smart factories and cities, artificial intelligence (especially deep learning) applications and services have experienced a thriving development process. However, due to efficiency and latency issues, the current cloud computing service architecture hinders the vision of "providing artificial intelligence for every person and every organization at everywhere". Thus, a better solution that has recently emerged is unleashing deep learning services from the cloud to the edge, close to data sources. Therefore, edge intelligence, aiming to facilitate the deployment of deep learning services by edge computing, has received great attention. In addition, deep learning, as the main representative of artificial intelligence, can be integrated into edge computing frameworks to build an intelligent edge for dynamic, adaptive edge maintenance and management. With regard to the mutual benefit of edge intelligence and intelligent edge, this paper introduces and discusses: 1) the application scenarios of both; 2) the practical implementation methods and enabling technologies, namely deep learning training and inference in the customized edge computing framework; 3) existing challenges and future trends of more pervasive and fine-grained intelligence. We believe that this survey can help readers garner information scattered across the communication, networking, and deep learning communities, understand the connections between enabling technologies, and promote further discussions on the fusion of edge intelligence and intelligent edge.

* This paper has been submitted to IEEE Communications Surveys and Tutorials for possible publication 

Optimal and Low-Complexity Dynamic Spectrum Access for RF-Powered Ambient Backscatter System with Online Reinforcement Learning

Sep 08, 2018
Nguyen Van Huynh, Dinh Thai Hoang, Diep N. Nguyen, Eryk Dutkiewicz, Dusit Niyato, Ping Wang

Ambient backscatter has been introduced with a wide range of applications for low-power wireless communications. In this article, we propose an optimal and low-complexity dynamic spectrum access framework for the RF-powered ambient backscatter system. In this system, the secondary transmitter not only harvests energy from ambient signals (from incumbent users), but also backscatters these signals to its receiver for data transmission. Under the dynamics of the ambient signals, we first adopt the Markov decision process (MDP) framework to obtain the optimal policy for the secondary transmitter, aiming to maximize the system throughput. However, the MDP-based optimization requires complete knowledge of environment parameters, e.g., the probability that a channel is idle and the probability of a successful packet transmission, which may not be practical to obtain. To cope with such incomplete knowledge of the environment, we develop a low-complexity online reinforcement learning algorithm that allows the secondary transmitter to "learn" from its decisions and then attain the optimal policy. Simulation results show that the proposed learning algorithm not only efficiently deals with the dynamics of the environment, but also improves the average throughput by up to 50% and reduces the blocking probability and delay by up to 80% compared with conventional methods.

* 30 pages, 9 figures, journal paper 
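
A minimal tabular Q-learning loop in the spirit of the online algorithm described above: the secondary transmitter learns from observed rewards alone which action (stay idle, backscatter, or actively transmit) to take in each channel state. The toy environment below is an assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 3          # channel busy/idle; idle/backscatter/transmit
alpha, gamma, eps = 0.1, 0.9, 0.1

# Toy mean rewards: backscatter pays off when busy, transmission when idle.
reward_table = np.array([[0.0, 0.6, 0.1],    # state 0: channel busy
                         [0.0, 0.1, 0.9]])   # state 1: channel idle

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(20_000):
    a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
    r = reward_table[s, a] + rng.normal(scale=0.05)  # noisy observed reward
    s_next = int(rng.integers(n_states))             # channel evolves (toy)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next

print(Q.round(2))   # learned policy: argmax of each row
```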

Deep Activity Recognition Models with Triaxial Accelerometers

Oct 25, 2016
Mohammad Abu Alsheikh, Ahmed Selim, Dusit Niyato, Linda Doyle, Shaowei Lin, Hwee-Pink Tan

Despite the widespread installation of accelerometers in almost all mobile phones and wearable devices, activity recognition using accelerometers is still immature due to the poor recognition accuracy of existing methods and the scarcity of labeled training data. We consider the problem of human activity recognition using triaxial accelerometers and deep learning paradigms. This paper shows that deep activity recognition models (a) provide better recognition accuracy of human activities, (b) avoid the expensive design of handcrafted features required in existing systems, and (c) utilize massive unlabeled acceleration samples for unsupervised feature extraction. Moreover, a hybrid approach of deep learning and hidden Markov models (DL-HMM) is presented for sequential activity recognition. This hybrid approach integrates the hierarchical representations of deep activity recognition models with the stochastic modeling of temporal sequences in hidden Markov models. We show substantial recognition improvement on real-world datasets over state-of-the-art methods of human activity recognition using triaxial accelerometers.
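
A minimal sketch of the DL-HMM combination: a deep model scores each accelerometer frame per activity (emission log-probabilities), and Viterbi decoding over an activity-transition matrix smooths the sequence. The scores and transition matrix below are placeholder assumptions.

```python
import numpy as np

def viterbi(log_emis, log_trans, log_prior):
    """Most likely state sequence given per-frame emission log-probabilities."""
    T, S = log_emis.shape
    dp = log_prior + log_emis[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = dp[:, None] + log_trans       # cand[i, j]: best path ending j via i
        back[t] = cand.argmax(axis=0)
        dp = cand.max(axis=0) + log_emis[t]
    path = [int(dp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
log_emis = np.log(rng.dirichlet(np.ones(3), size=10))      # deep-model outputs (stand-in)
log_trans = np.log(np.full((3, 3), 0.1) + 0.7 * np.eye(3)) # sticky activity transitions
print(viterbi(log_emis, log_trans, np.log(np.ones(3) / 3)))
```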


Applications of Deep Reinforcement Learning in Communications and Networking: A Survey

Oct 18, 2018
Nguyen Cong Luong, Dinh Thai Hoang, Shimin Gong, Dusit Niyato, Ping Wang, Ying-Chang Liang, Dong In Kim

This paper presents a comprehensive literature review on applications of deep reinforcement learning in communications and networking. Modern networks, e.g., Internet of Things (IoT) and Unmanned Aerial Vehicle (UAV) networks, are becoming more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize the network performance under uncertainty of the network environment. Reinforcement learning has been efficiently used to enable network entities to obtain the optimal policy, including, e.g., decisions or actions, given their states when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and reinforcement learning may not be able to find the optimal policy in a reasonable time. Therefore, deep reinforcement learning, a combination of reinforcement learning and deep learning, has been developed to overcome these shortcomings. In this survey, we first give a tutorial of deep reinforcement learning from fundamental concepts to advanced models. Then, we review deep reinforcement learning approaches proposed to address emerging issues in communications and networking. The issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation, which are all important to next-generation networks such as 5G and beyond. Furthermore, we present applications of deep reinforcement learning for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions of applying deep reinforcement learning.

* 37 pages, 13 figures, 6 tables, 174 reference papers 
