CompSci & AI Advances

Volume 2, Issue 1 (March 2025)


A Deep Reinforcement Learning Framework with Explainable AI for Personalized and Interpretable Treatment Recommendations in Healthcare

T. Thangarasan, M. Devika, C. Sincija, Khushboo Tripathi, P. Logamurthy, Kai Song, Mei Bie, Jie Yang

T. Thangarasan 1,*

M. Devika 2

C. Sincija 3

Khushboo Tripathi 4

P. Logamurthy 5

Kai Song 3,4

Mei Bie 6

Jie Yang 7,*

1 Hindusthan Institute of Technology, Anna University, Chennai, Tamil Nadu, India

2 Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram, Chennai 600089, India

3 Department of Computer Science and Engineering, Dhanalakshmi Srinivasan College of Engineering, Coimbatore, Tamil Nadu, India

4 Sharda School of Engineering and Technology, Sharda University, Greater Noida, India

5 Department of Electronics and Communication Engineering, Nandha Engineering College, Erode, Tamil Nadu, India

6 Institute of Education, Changchun Normal University, Changchun 130032, China

7 College of Artificial Intelligence, Chongqing Industry and Trade Polytechnic, Chongqing, China

* Author to whom correspondence should be addressed:

thangaforever@gmail.com (T. Thangarasan)

ABSTRACT

The integration of Explainable Artificial Intelligence (XAI) into healthcare has significantly advanced clinical decision-making by enhancing the transparency and trustworthiness of AI-driven recommendations. This study introduces a novel Deep Reinforcement Learning (DRL) framework designed to generate personalized treatment recommendations tailored to individual patient profiles. The framework combines Deep Q-Learning and Policy Gradient methods to dynamically model and optimize treatment pathways, utilizing historical clinical data, patient demographics, and treatment response patterns. To ensure interpretability, an explainability layer incorporating SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) provides clinicians with actionable insights into the model’s decision-making process. The proposed framework was rigorously evaluated on a real-world dataset comprising 50,000 electronic health records (EHRs) from patients with cardiovascular disease and diabetes. Experimental results demonstrated a 28% improvement in treatment success rates, a 35% reduction in adverse effects, and a 20% increase in clinician acceptance compared to conventional rule-based methods. Additionally, the explainability module achieved an average accuracy of 92% in attributing model decisions to key patient features, reinforcing its reliability in clinical settings. These findings underscore the potential of the DRL-XAI framework to enhance patient outcomes while fostering trust in AI-assisted healthcare systems. By balancing predictive accuracy with interpretability, this approach addresses critical challenges in AI adoption, paving the way for more transparent and personalized clinical decision support tools. Future research will focus on extending the framework to additional medical conditions and integrating multi-modal patient data for broader applicability.
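To make the value-based half of the learning loop concrete, the following minimal Python sketch shows how a Deep Q-Network can score candidate treatments for an encoded patient state and update from replayed outcomes. This is not the authors' implementation: the state width, number of treatments, network sizes, and reward convention are all illustrative assumptions.

import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

N_FEATURES = 16      # assumed width of an encoded patient-state vector
N_TREATMENTS = 8     # assumed number of candidate treatment actions
GAMMA = 0.99         # discount factor over future clinical outcomes

class QNetwork(nn.Module):
    # Maps an encoded patient state to one Q-value per candidate treatment.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_TREATMENTS),
        )

    def forward(self, state):
        return self.net(state)

q_net = QNetwork()
target_net = QNetwork()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)   # experience-replay buffer of transitions

def select_treatment(state, epsilon=0.1):
    # Epsilon-greedy choice over candidate treatments for one patient state.
    if random.random() < epsilon:
        return random.randrange(N_TREATMENTS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step(batch_size=32):
    # One temporal-difference update from replayed (s, a, r, s', done) tuples.
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    states = torch.stack([b[0] for b in batch])
    actions = torch.tensor([b[1] for b in batch])
    rewards = torch.tensor([b[2] for b in batch])
    next_states = torch.stack([b[3] for b in batch])
    dones = torch.tensor([b[4] for b in batch], dtype=torch.float32)

    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(next_states).max(1).values
    target = rewards + GAMMA * (1.0 - dones) * q_next
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# One simulated transition; in the full framework the reward would encode
# treatment success and adverse-effect signals derived from the EHR data.
s = torch.randn(N_FEATURES)
a = select_treatment(s)
replay.append((s, a, 1.0, torch.randn(N_FEATURES), False))
train_step()

In the full framework, a Policy Gradient objective would complement this value-based update, and the state encoding would draw on the demographics, clinical history, and treatment-response patterns described above.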

Significance of the Study:

This study introduces a Deep Reinforcement Learning (DRL) framework with Explainable AI (XAI) to enhance personalized treatment recommendations in healthcare. By combining Deep Q-Learning and Policy Gradient methods with SHAP and LIME for interpretability, the framework improves treatment success rates by 28%, reduces adverse effects by 35%, and increases clinician acceptance by 20%. The explainability module's 92% accuracy in attributing decisions to key patient features supports trustworthy AI-driven recommendations, addressing critical barriers to clinical adoption and paving the way for transparent, patient-centric AI in medicine.
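As a hedged illustration of the explainability layer, the sketch below uses SHAP's model-agnostic KernelExplainer to attribute a recommendation score to patient features. The feature names and the toy scoring function are assumptions standing in for the trained DQN, not the paper's code.

import numpy as np
import shap

# Four hypothetical patient features; the paper's feature set is far richer.
feature_names = ["age", "systolic_bp", "hba1c", "prior_adverse_events"]

def recommendation_score(x):
    # Stand-in for the trained DQN's Q-value of the recommended treatment;
    # a fixed linear toy function, used here only so the example runs.
    weights = np.array([0.02, 0.01, 0.3, -0.5])
    return x @ weights

# Model-agnostic Shapley estimation against a background reference cohort.
background = np.random.default_rng(0).normal(size=(100, 4))
explainer = shap.KernelExplainer(recommendation_score, background)

patient = np.array([[67.0, 152.0, 8.1, 1.0]])   # one encoded patient record
shap_values = explainer.shap_values(patient)

# Per-feature contribution to the recommendation score for this patient.
for name, phi in zip(feature_names, shap_values[0]):
    print(f"{name:>22s}: {phi:+.3f}")

The same pattern applies to the LIME side of the layer, where a local surrogate explainer such as LimeTabularExplainer could be substituted for per-patient explanations.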

Summary of the Study:

The study proposes a DRL-XAI framework for personalized treatment recommendations, integrating reinforcement learning with SHAP and LIME for interpretability. Evaluated on 50,000 EHRs, it outperformed rule-based methods, boosting treatment success rates (by 28%), reducing adverse effects (by 35%), and improving clinician acceptance (by 20%). The explainability layer, which attributes model decisions to key patient features with 92% accuracy, keeps the model transparent. Future work includes multi-modal data integration and federated learning for broader healthcare applications. This framework advances ethical, interpretable AI in medicine, supporting better clinical decision-making and patient outcomes.