Black-box machine learning models are currently used for high-stakes decision making in areas of society such as healthcare and criminal justice. While tree-based ensemble methods such as random forests typically outperform deep learning models on tabular data sets, their built-in...
Persistent link: https://www.econbiz.de/10015404251
Despite the popularity of feature importance (FI) measures in interpretable machine learning, the statistical adequacy of these methods is rarely discussed. From a statistical perspective, a major distinction is between analysing a variable’s importance before and after adjusting for...
Persistent link: https://www.econbiz.de/10015404263
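The distinction the abstract above draws — a variable's importance before versus after adjusting for other covariates — can be illustrated with a small simulation. This is only an illustrative sketch, not the paper's method: it uses simulated data with two correlated features, contrasts a marginal measure (squared correlation with the outcome) against a permutation-based measure computed on a joint OLS fit, and all variable names and parameter choices are assumptions.

```python
import numpy as np

# Simulated data: two correlated features, but only x1 actually drives y.
rng = np.random.default_rng(0)
n = 5000
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9**2) * rng.normal(size=n)  # corr(x1, x2) = 0.9
X = np.column_stack([x1, x2])
y = 2.0 * x1 + rng.normal(scale=0.5, size=n)

# Marginal importance: squared correlation with y, ignoring the other feature.
# x2 looks important here purely through its correlation with x1.
marginal = [np.corrcoef(X[:, j], y)[0, 1] ** 2 for j in range(2)]

# Importance after adjusting: permutation importance of a joint OLS fit,
# which accounts for the correlated covariate.
design = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
base_mse = np.mean((y - design @ beta) ** 2)

def perm_importance(j, n_repeats=20):
    """MSE increase when feature j is shuffled, breaking its link to y."""
    incs = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        dp = np.column_stack([np.ones(n), Xp])
        incs.append(np.mean((y - dp @ beta) ** 2) - base_mse)
    return float(np.mean(incs))

conditional = [perm_importance(j) for j in range(2)]
```

In this setup the marginal measure attributes substantial importance to x2, while the permutation measure on the joint fit assigns it roughly zero — the kind of divergence the statistical-adequacy discussion is about.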
Persistent link: https://www.econbiz.de/10015187795
Persistent link: https://www.econbiz.de/10015069592
Nowadays, artificial intelligence (AI) systems make predictions in numerous high-stakes domains, including credit-risk assessment and medical diagnostics. Consequently, AI systems increasingly affect humans, yet many state-of-the-art systems lack transparency and thus deny the individual's...
Persistent link: https://www.econbiz.de/10015272864
Persistent link: https://www.econbiz.de/10013556708
Persistent link: https://www.econbiz.de/10014486545
Persistent link: https://www.econbiz.de/10015196768
In this article, a new kind of interpretable machine learning method is presented that uses quantile shifts to help explain how a classification model partitions the feature space into predicted classes, thereby making the underlying statistical or machine learning model more...
Persistent link: https://www.econbiz.de/10015359564
Persistent link: https://www.econbiz.de/10013555383