Research papers and code for "Benyuan Liu":
We are interested in the optimal scheduling of a collection of multi-component application jobs in an edge computing system consisting of geo-distributed edge computing nodes connected through a wide area network. Scheduling and placing application jobs in such a system is challenging because of the interdependence among the components of each job, the communication delays between geographically distributed data sources and edge nodes, and the dynamic availability of those resources. In this paper we explore the feasibility of a Deep Reinforcement Learning (DRL) based design to address these challenges. We introduce a DRL actor-critic algorithm that seeks an optimal scheduling policy minimizing the average job slowdown in the edge system. Simulations on both synthetic data and a Google cloud data trace demonstrate that our design outperforms several existing algorithms.
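To make the actor-critic formulation concrete, here is a minimal sketch of an advantage actor-critic update in PyTorch. The network sizes, the state encoding, and the use of returns derived from (negative) job slowdown are illustrative assumptions, not details taken from the paper.

```python
# Minimal advantage actor-critic sketch (PyTorch). Sizes, state encoding,
# and the slowdown-based return are assumptions, not the paper's design.
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, state_dim, num_actions):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.policy = nn.Linear(128, num_actions)  # logits over scheduling actions
        self.value = nn.Linear(128, 1)             # baseline V(s)

    def forward(self, state):
        h = self.shared(state)
        return self.policy(h), self.value(h)

def a2c_loss(model, states, actions, returns):
    """Single-batch loss; `returns` could be discounted negative job slowdown."""
    logits, values = model(states)
    dist = torch.distributions.Categorical(logits=logits)
    advantage = returns - values.squeeze(-1)
    policy_loss = -(dist.log_prob(actions) * advantage.detach()).mean()
    value_loss = advantage.pow(2).mean()
    return policy_loss + 0.5 * value_loss - 0.01 * dist.entropy().mean()
```

The entropy bonus and the 0.5 value-loss weight are common defaults, not values reported by the authors.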

We present a 3D Convolutional Neural Networks (CNNs) based single shot detector for spatial-temporal action detection tasks. Our model includes: (1) two short-term appearance and motion streams, taking a single RGB image and a single optical flow image as input respectively, to capture the spatial and temporal information of the current frame; (2) two long-term 3D ConvNet based streams, operating on sequences of consecutive RGB and optical flow images to capture context from past frames. Our model achieves strong performance for action detection in video and can be easily integrated into any existing two-stream action detection method. We report a frame-mAP of 71.30% on the challenging UCF101-24 action dataset, the state-of-the-art result among one-stage methods. To the best of our knowledge, ours is the first system to combine a 3D CNN with SSD for action detection.

* 26th IEEE International Conference on Image Processing (ICIP 2019)
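As a rough illustration of the four-stream design described above, this PyTorch sketch fuses the two short-term 2D streams (current RGB and optical flow frames) with the two long-term 3D ConvNet streams (clips of past frames) into a single feature map that an SSD-style head could consume. Channel counts, the clip length of four frames, and concatenation-based fusion are assumptions.

```python
# Sketch: fuse short-term 2D streams with long-term 3D streams before an
# SSD-style head. Channel counts, clip length, and fusion rule are assumed.
import torch
import torch.nn as nn

class FourStreamFeatures(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_2d = nn.Conv2d(3, 64, 3, padding=1)   # short-term appearance
        self.flow_2d = nn.Conv2d(2, 64, 3, padding=1)  # short-term motion
        self.rgb_3d = nn.Conv3d(3, 64, (4, 3, 3), padding=(0, 1, 1))   # long-term appearance
        self.flow_3d = nn.Conv3d(2, 64, (4, 3, 3), padding=(0, 1, 1))  # long-term motion

    def forward(self, rgb, flow, rgb_clip, flow_clip):
        # rgb: (B,3,H,W); flow: (B,2,H,W); clips: (B,C,4,H,W) of past frames
        feats = [self.rgb_2d(rgb), self.flow_2d(flow),
                 self.rgb_3d(rgb_clip).squeeze(2), self.flow_3d(flow_clip).squeeze(2)]
        return torch.cat(feats, dim=1)  # (B,256,H,W) map for SSD-style prediction layers
```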
The performance of sparse signal recovery from noise-corrupted, underdetermined measurements can be improved if both the sparsity and the correlation structure of signals are exploited. One typical correlation structure is the intra-block correlation in block sparse signals. To exploit this structure, a framework called block sparse Bayesian learning (BSBL) was recently proposed. Algorithms derived from this framework show superior recovery performance but are not very fast, which limits their applications. This work derives an efficient algorithm from the framework using a marginalized likelihood maximization method. Compared with existing BSBL algorithms, it achieves similar recovery performance but is much faster, making it more suitable for large-scale datasets and applications requiring real-time implementation.
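For intuition, the NumPy sketch below shows the per-block statistics that a fast marginalized-likelihood method evaluates, and the resulting change in log marginal likelihood from adding a candidate block. It is a simplified dense-matrix illustration of the general mechanism, not the authors' implementation.

```python
# Per-block statistics for fast marginal-likelihood maximization over block
# sparse models. A simplified illustration, not the paper's implementation.
import numpy as np

def block_stats(Phi_i, C_inv, y):
    """S_i ("sparsity" factor) and q_i ("quality" factor) for block i,
    given the inverse of the current model covariance C."""
    S = Phi_i.T @ C_inv @ Phi_i
    q = Phi_i.T @ C_inv @ y
    return S, q

def delta_logL_add(S, q, A_i):
    """Change in log marginal likelihood from adding block i with prior
    covariance A_i: 0.5 * [q^T (A_i^{-1} + S)^{-1} q - log|I + A_i S|]."""
    quad = q @ np.linalg.solve(np.linalg.inv(A_i) + S, q)
    _, logdet = np.linalg.slogdet(np.eye(S.shape[0]) + A_i @ S)
    return 0.5 * (quad - logdet)
```

A fast algorithm of this family would, at each iteration, pick the add, delete, or re-estimate operation with the largest likelihood gain.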

Compressed Sensing based Terahertz imaging (CS-THz) is a computational imaging technique. It uses only one THz receiver to accumulate randomly modulated image measurements, from which the original THz image is reconstructed using compressed sensing solvers. The advantage of CS-THz is its reduced acquisition time compared with the raster-scan mode. However, when applied to large-scale two-dimensional (2D) imaging, the increased dimension results in both high computational complexity and excessive memory usage. In this paper, we introduce a novel CS-based THz imaging system that progressively compresses the THz image column by column. The CS-THz system can therefore be simplified, with a much smaller modulator and a reduced problem dimension. To exploit the block structure and the correlation between adjacent columns of the THz image, we propose a complex-valued block sparse Bayesian learning algorithm. We conducted a systematic evaluation of state-of-the-art CS algorithms under the scan-based CS-THz architecture, analyzing compression ratios and choices of sensing matrices in detail on both synthetic and real-life THz images. Simulation results show that both the scan-based architecture and the proposed recovery algorithm are superior and efficient for large-scale CS-THz applications.
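A minimal NumPy sketch of the column-wise acquisition model follows: every column of the image is compressed by the same small modulator matrix, so one large 2D problem becomes a sequence of small 1D problems. The sizes and binary modulation patterns are illustrative assumptions.

```python
# Column-by-column compressive acquisition sketch; sizes and the binary
# modulation patterns are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
N, M, cols = 128, 48, 128                 # column length, measurements/column, #columns
Phi = rng.choice([0.0, 1.0], size=(M, N)) # random binary modulator patterns

X = rng.standard_normal((N, cols))        # stand-in for the (possibly complex) THz image
Y = Phi @ X                               # one M-vector of measurements per column
print(Y.shape)                            # (48, 128): compression ratio M/N = 0.375

# Recovery would run a complex-valued block-sparse solver column by column,
# reusing statistics across adjacent columns to exploit their correlation.
```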

In this paper we propose a two-level hierarchical Bayesian model and an annealing schedule to re-enable the noise variance learning capability of the fast marginalized Sparse Bayesian Learning (SBL) algorithms. Performance measures such as NMSE and F-measure are greatly improved by the annealing technique. The algorithm tends to produce the sparsest solution under moderate SNR scenarios and can outperform most concurrent SBL algorithms while retaining a small computational load.

* The update equation in the annealing process was too empirical for practical use. This paper needs to be revised before it can be posted on arXiv.org.
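Since the paper's own annealing update is, per the note above, still empirical, the sketch below shows one plausible shape such a schedule could take: blending a deliberately large initial noise variance toward the learned estimate under a geometric decay. The decay rule and its rate are assumptions, not the paper's equation.

```python
# One plausible annealing schedule for the noise variance in fast
# marginalized SBL; the geometric blend is an assumption, not the paper's rule.
import numpy as np

def annealed_sigma2(sigma2_hat, t, sigma2_init=1.0, rate=0.9):
    """Blend a large initial noise variance toward the learned estimate."""
    w = rate ** t                  # annealing weight, decays from 1 toward 0
    return w * sigma2_init + (1.0 - w) * sigma2_hat

for t in range(4):                 # 1.0, 0.901, 0.8119, 0.73171 with sigma2_hat=0.01
    print(annealed_sigma2(0.01, t))
```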
Colorectal cancer (CRC) is a common and lethal disease. Globally, CRC is the third most commonly diagnosed cancer in males and the second in females. For colorectal cancer, the best available screening test is the colonoscopy. During a colonoscopic procedure, a tiny camera at the tip of the endoscope generates a video of the internal mucosa of the colon. The video data are displayed on a monitor for the physician to examine the lining of the entire colon and check for colorectal polyps. Detection and removal of colorectal polyps are associated with a reduction in mortality from colorectal cancer. However, the miss rate of polyp detection during a colonoscopy is often high even for very experienced physicians, owing to the high variation of polyps in shape, size, texture, color and illumination. Though challenging, with the great advances in object detection techniques, automated polyp detection shows great potential for reducing the false negative rate while maintaining high precision. In this paper, we propose a novel anchor-free polyp detector that can localize polyps without using predefined anchor boxes. To further strengthen the model, we leverage a Context Enhancement Module and Cosine Ground truth Projection. Our approach can respond in real time while achieving state-of-the-art performance with 99.36% precision and 96.44% recall.
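To illustrate anchor-free localization, here is a CenterNet-style decoding sketch in PyTorch: detections are peaks of a predicted center heatmap, and box sizes are read from a companion regression map. The tensor layout and the 3x3 peak test are assumptions, not the paper's code.

```python
# CenterNet-style anchor-free decoding sketch; layout and peak test assumed.
import torch
import torch.nn.functional as F

def decode_centers(heatmap, wh, k=10):
    """heatmap: (1,1,H,W) sigmoid scores; wh: (1,2,H,W) box width/height maps."""
    # Keep only local maxima: a cheap, anchor-free substitute for NMS.
    peaks = (heatmap == F.max_pool2d(heatmap, 3, stride=1, padding=1)) * heatmap
    scores, idx = peaks.flatten().topk(k)
    W = heatmap.shape[-1]
    ys = torch.div(idx, W, rounding_mode="floor")
    xs = idx % W
    w, h = wh[0, 0, ys, xs], wh[0, 1, ys, xs]
    boxes = torch.stack([xs - w / 2, ys - h / 2, xs + w / 2, ys + h / 2], dim=1)
    return boxes, scores   # boxes in feature-grid coordinates
```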

Lesions are injuries and abnormal tissues in the human body. Detecting lesions in 3D Computed Tomography (CT) scans can be time-consuming even for very experienced physicians and radiologists. In recent years, CNN based lesion detectors have demonstrated great potential. Most current state-of-the-art lesion detectors employ anchors to enumerate all possible bounding boxes with respect to the dataset in question. This anchor mechanism greatly improves detection performance while also constraining the generalization ability of detectors. In this paper, we propose an anchor-free lesion detector: the anchor mechanism is removed and lesions are formalized as single keypoints. By doing so, we observe a considerable performance gain in terms of both accuracy and inference speed compared with the anchor-based baseline.
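As a sketch of formalizing lesions as single keypoints, the snippet below builds the Gaussian target heatmap that keypoint-based detectors typically train against; the fixed Gaussian width is an illustrative assumption rather than the paper's radius rule.

```python
# Gaussian keypoint target heatmap sketch; the fixed sigma is assumed.
import numpy as np

def lesion_heatmap(centers, shape, sigma=2.0):
    """centers: iterable of (x, y) lesion keypoints on the output grid;
    shape: (H, W) of the stride-reduced feature map."""
    H, W = shape
    ys, xs = np.mgrid[0:H, 0:W]
    hm = np.zeros((H, W), dtype=np.float32)
    for cx, cy in centers:
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
        hm = np.maximum(hm, g)     # keep the strongest response per pixel
    return hm
```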
