Zhang, Yan; Chen, Lin; Tian, YiXiang - In: Risks: Open Access Journal 14 (2026) 2, pp. 1-14
Interpretability analysis methods, such as LIME and SHAP, are widely employed to explain the predictions of artificial intelligence models; however, they primarily function as post hoc tools and do not directly quantify the intrinsic interpretability … currently no standardized framework for evaluating interpretability as an inherent property of AI models. In this study, we …