Python Libraries for Explainable AI
01.
SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model.
02.
LIME (Local Interpretable Model-Agnostic Explanations) is an algorithm that can explain the predictions of any classifier or regressor in a faithful way by approximating it locally with an interpretable model.
03.
ELI5 is a Python library that lets you visualize and debug various machine learning models through a unified API.
04.
Shapash is a Python library dedicated to the interpretability of Data Science models.
05.
Anchors is a method for generating human-interpretable rules that can be used to explain the predictions of a machine learning model.
06.
XAI is a machine learning library designed with AI explainability at its core.
07.
BreakDown is a tool that can be used to explain the predictions of linear models. It works by decomposing the model's output into the contributions of the individual input features.
08.
interpret-text is a library that incorporates state-of-the-art explainers for text-based machine learning models and visualizes the results with a built-in dashboard.
09.
AI Explainability 360 is an extensible open-source toolkit that helps you understand how machine learning models predict labels, using a variety of methods throughout the AI application lifecycle.
10.
OmniXAI (short for Omni eXplainable AI) addresses several problems with interpreting the judgments produced by machine learning models in practice.