Source: Bridging the Trust Gap: Clinician-Validated Hybrid Explainable AI for Maternal Health Risk Assessment in Bangladesh

---

Chinese Abstract #

This study proposes an innovative hybrid explainable AI (XAI) framework for maternal health risk prediction in resource-constrained healthcare settings. The framework combines ante-hoc fuzzy logic with post-hoc SHAP explanations and is validated through systematic clinician feedback. The research team developed a fuzzy-XGBoost model on 1,014 maternal health records, achieving 88.67% accuracy and a ROC-AUC of 0.9703. In a validation study involving 14 healthcare professionals in Bangladesh, 71.4% of clinical cases showed a strong preference for the hybrid explanations, and 54.8% of participants expressed trust in their clinical application. SHAP analysis identified healthcare access as the leading predictor, with the engineered fuzzy risk score ranking third, confirming the effective integration of clinical knowledge. The study demonstrates that combining interpretable fuzzy rules with feature-importance explanations improves both utility and trust, offering practical guidance for deploying XAI in maternal healthcare.

**Keywords:** Explainable AI, Maternal Health, Fuzzy Logic, Machine Learning, Healthcare Risk Assessment


English Summary #

This study introduces an innovative hybrid explainable AI (XAI) framework for maternal health risk prediction in resource-constrained settings. The framework combines ante-hoc fuzzy logic with post-hoc SHAP explanations, validated through systematic clinician feedback. The research team developed a fuzzy-XGBoost model on 1,014 maternal health records, achieving 88.67% accuracy and a ROC-AUC of 0.9703. A validation study involving 14 healthcare professionals in Bangladesh found a strong preference for the hybrid explanations in 71.4% of clinical cases, with 54.8% of participants expressing trust in its clinical application. SHAP analysis identified healthcare access as the primary predictor, while the engineered fuzzy risk score ranked third, validating the integration of clinical knowledge. The study shows that combining interpretable fuzzy rules with feature-importance explanations enhances both utility and trust, providing practical insights for XAI deployment in maternal healthcare. Clinicians valued the integrated clinical parameters but identified critical gaps in obstetric history, gestational age, and connectivity barriers.

**Keywords:** Explainable AI, Maternal Health, Fuzzy Logic, Machine Learning, Healthcare Risk Assessment
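
The paper itself does not include code, but the described pipeline (an engineered fuzzy risk score fed into XGBoost as an additional feature, followed by post-hoc SHAP attribution) can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the feature set, triangular membership breakpoints, and synthetic labels below are invented for demonstration, not the study's actual rule base or its 1,014-record dataset.

```python
# Minimal sketch of a hybrid fuzzy-XGBoost pipeline with SHAP explanations.
# Membership breakpoints, features, and data are illustrative assumptions.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)

def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

# Synthetic vitals loosely modeled on common maternal-health features.
n = 1014
age = rng.uniform(15, 50, n)
systolic = rng.uniform(80, 180, n)
glucose = rng.uniform(4, 20, n)

# Ante-hoc step: encode simple clinical rules as fuzzy memberships and
# aggregate them into one engineered risk score (assumed rule base).
high_bp = tri(systolic, 120, 160, 200)
high_glucose = tri(glucose, 7, 12, 20)
risky_age = np.maximum(tri(age, 10, 15, 20), tri(age, 35, 45, 55))
fuzzy_risk = (high_bp + high_glucose + risky_age) / 3.0

X = np.column_stack([age, systolic, glucose, fuzzy_risk])
feature_names = ["age", "systolic_bp", "blood_glucose", "fuzzy_risk_score"]
# Toy binary label correlated with the score, just so the model has signal.
y = (fuzzy_risk + rng.normal(0, 0.1, n) > 0.5).astype(int)

model = xgb.XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
model.fit(X, y)

# Post-hoc step: SHAP ranks all features, including the fuzzy score, by
# mean absolute contribution to the model output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
mean_abs = np.abs(shap_values).mean(axis=0)
for name, val in sorted(zip(feature_names, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {val:.3f}")
```

The design point this sketch captures is that the fuzzy score behaves like any other input feature, so the post-hoc SHAP analysis can test whether the encoded clinical rules actually carry predictive weight, mirroring the paper's finding that the engineered score ranked third among predictors.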