Research papers and code for "Mohamed Abid":
Breast cancer is one of the major health problems and among the leading causes of mortality among women worldwide. Over the last decade, it has been the second most common type of cancer in terms of incidence and the fifth most common cause of cancer-related death. In order to reduce the workload on radiologists, a variety of CAD systems, Computer-Aided Diagnosis (CADi) and Computer-Aided Detection (CADe), have been proposed. In this paper, we focus on a CADe tool to help radiologists detect cancer. The proposed CADe is based on a three-step workflow: detection, analysis and classification. The paper deals with the problem of automatic detection of the Region Of Interest (ROI) using a Level Set approach driven by edge and region criteria; this approach gives the radiologist good visual information. Afterwards, feature extraction based on texture characteristics and vector classification using a Multilayer Perceptron (MLP) and k-Nearest Neighbours (KNN) are adopted to distinguish the different ACR (American College of Radiology) classes. We use the Digital Database for Screening Mammography (DDSM) for the experiments; the resulting accuracy, between 60% and 70%, is acceptable but must be improved to better aid the radiologist.

* 14 pages, 9 figures
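As a rough illustration of the three-step workflow described above (detection, analysis, classification), the sketch below chains a morphological level-set segmentation, GLCM texture descriptors and MLP/KNN classifiers using scikit-image and scikit-learn. The specific functions and parameters (morphological_chan_vese, GLCM distances and angles, classifier settings) are assumptions for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch of a three-step CADe pipeline: ROI detection with a
# morphological level set, GLCM texture features, and MLP/KNN classification.
# All parameter choices are illustrative assumptions, not the paper's setup.
import numpy as np
from skimage import img_as_float
from skimage.segmentation import morphological_chan_vese
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

def detect_roi(mammogram):
    """Step 1: region-based level-set segmentation of the ROI (boolean mask)."""
    level_set = morphological_chan_vese(img_as_float(mammogram), 100, smoothing=3)
    return level_set.astype(bool)

def texture_features(mammogram, roi_mask):
    """Step 2: GLCM texture descriptors on the ROI bounding box (uint8 image assumed)."""
    rows, cols = np.where(roi_mask)
    patch = mammogram[rows.min():rows.max() + 1, cols.min():cols.max() + 1]
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'homogeneity', 'energy', 'correlation']
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_classifiers(features, acr_labels):
    """Step 3: MLP and KNN classifiers over the ACR classes."""
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(features, acr_labels)
    knn = KNeighborsClassifier(n_neighbors=5).fit(features, acr_labels)
    return mlp, knn
```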
We present a visual symptom checker that combines a pre-trained Convolutional Neural Network (CNN) with a Reinforcement Learning (RL) agent acting as a Question Answering (QA) model. This method increases the classification confidence and accuracy of the visual symptom checker and decreases the average number of questions needed to narrow down the differential diagnosis. A Deep Q-Network (DQN)-based RL agent learns how to ask the patient about the presence of symptoms in order to maximize the probability of correctly identifying the underlying condition. The RL agent uses the visual information provided by the CNN, in addition to the answers to the questions already asked, to guide the QA system. We demonstrate that the RL-based approach increases accuracy by more than 20% compared to the CNN-only approach, which uses only the visual information to predict the condition. Moreover, the accuracy gain is up to 10% compared to an approach that combines the CNN's visual information with a conventional decision tree-based QA system. Finally, we show that the RL-based approach not only outperforms the decision tree-based approach, but also narrows down the diagnosis faster in terms of the average number of questions asked.
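The sketch below shows, in PyTorch, one way such a DQN-based question-asking agent could be wired up: the state is the CNN's class-probability vector concatenated with the answers collected so far, and each action either asks about one symptom or stops and diagnoses. The dimensions, network architecture and answer encoding are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch of a DQN agent for symptom question answering.
# State = CNN class probabilities + answer vector; actions = ask symptom i / diagnose.
# All sizes below are assumed for illustration.
import random
import torch
import torch.nn as nn

NUM_CONDITIONS = 10                          # assumed size of the differential diagnosis
NUM_SYMPTOMS = 20                            # assumed number of askable symptoms
STATE_DIM = NUM_CONDITIONS + NUM_SYMPTOMS    # CNN probabilities + answers so far
NUM_ACTIONS = NUM_SYMPTOMS + 1               # ask about each symptom, or diagnose now

class QNetwork(nn.Module):
    """Maps the QA state to one Q-value per action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, NUM_ACTIONS),
        )

    def forward(self, state):
        return self.net(state)

def select_action(q_net, cnn_probs, answers, epsilon=0.1):
    """Epsilon-greedy choice over 'ask symptom i' / 'diagnose now'."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    state = torch.cat([cnn_probs, answers]).unsqueeze(0)
    with torch.no_grad():
        return int(q_net(state).argmax(dim=1).item())

# Example episode start: CNN prediction available, no questions asked yet.
q_net = QNetwork()
cnn_probs = torch.softmax(torch.randn(NUM_CONDITIONS), dim=0)  # stand-in for CNN output
answers = torch.zeros(NUM_SYMPTOMS)  # 0 = not yet asked; +1/-1 could encode yes/no
action = select_action(q_net, cnn_probs, answers)
```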
