      Image created by Areal Tal using a text-to-image AI generator, with additional modifications, 2023.

Welcome to Leveling Up With XAI

Leveling Up With XAI is your guide through the intricate landscape of explainable artificial intelligence (XAI). The aim is to lay out the principles and practices of XAI, offering a clear path for anyone looking to understand or apply these techniques.

This platform is crafted for clarity, presenting a step-by-step approach to choosing appropriate XAI methods based on your specific needs. It's an inclusive resource for all levels of expertise - from those building their first models to experts refining their approach to AI transparency.

Step inside and discover a world where AI's decision-making is no longer obscured. Engage with the content and empower yourself to bring clarity to the forefront of AI applications.



Leveling Up With XAI serves as a comprehensive educational resource, designed to guide model developers in selecting the appropriate tools for explaining their AI models. It also assists those interested in AI by showcasing the range of available explanation tools. The guide is tailored to fit a variety of use cases and caters to varying levels of technical expertise among both the creators and users of model explanations. The focus is on aligning explanation tools with the specific needs of explanation consumers, ensuring relevance and practical applicability.


Emphasizing the selection of algorithms and techniques to facilitate transparency and ease of understanding in model decision-making.

Focus on methods like Feature Importance and Surrogate Models to offer stakeholders a broad picture of how a model functions and its decision-making process.
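To make this concrete, here is a minimal sketch of both ideas in Python. The data, models, and library choices (scikit-learn, a random forest as the "black box") are my own illustrative assumptions, not a prescription:

```python
# Illustrative sketch: permutation feature importance and a global surrogate
# model, using synthetic data and scikit-learn (both chosen for illustration).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Feature importance: how much does shuffling each feature hurt performance?
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)  # one importance score per feature

# Surrogate model: a shallow, interpretable tree trained to mimic the
# black box's predictions (not the true labels).
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
fidelity = surrogate.score(X, black_box.predict(X))  # how well it mimics
```

The surrogate's fidelity score tells you how much to trust the simple tree as a stand-in for the black box; a low score means its explanation may be misleading.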

Get a detailed view of how individual features and their interactions influence outcomes.

Reveal the key influences on each prediction, providing clarity for users, especially for high-stakes decisions.
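A crude but intuitive version of this idea is occlusion: for one prediction, replace each feature with its dataset mean and see how much the output shifts. Everything below (data, model, masking strategy) is a hypothetical illustration; libraries such as SHAP and LIME provide more principled implementations:

```python
# Illustrative sketch: occlusion-style local attribution for one prediction
# (synthetic data; masking with the feature mean is a simplifying assumption).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0].copy()                                  # the prediction to explain
base = model.predict_proba(x.reshape(1, -1))[0, 1]
means = X.mean(axis=0)

attributions = []
for j in range(X.shape[1]):
    x_masked = x.copy()
    x_masked[j] = means[j]                       # "remove" feature j
    shifted = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
    attributions.append(base - shifted)          # influence on this prediction
print(attributions)
```

Each attribution answers, for this one instance, "how much did this feature push the prediction?" - exactly the kind of per-decision clarity high-stakes users need.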

Illuminate how the training data affects a model’s decision-making process. This provides additional insight into the model's strengths, limitations, and potential biases.
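The brute-force version of data influence is leave-one-out retraining: drop each training point, refit, and see how a prediction of interest moves. This toy sketch (small synthetic dataset, logistic regression, my own setup for illustration) shows the idea; practical influence-function methods approximate it without retraining:

```python
# Illustrative sketch: leave-one-out influence of each training point on a
# single test prediction (tiny synthetic dataset so retraining is cheap).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
x_test = np.array([[0.5, -0.2]])

full = LogisticRegression().fit(X, y)
base = full.predict_proba(x_test)[0, 1]

influence = []
for i in range(len(X)):
    loo = LogisticRegression().fit(np.delete(X, i, 0), np.delete(y, i))
    influence.append(base - loo.predict_proba(x_test)[0, 1])
print(np.argmax(np.abs(influence)))  # index of the most influential point
```

Highly influential points are worth inspecting: they may be mislabeled, outliers, or evidence that the model leans heavily on an unrepresentative slice of the data.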


Frequently Asked Questions

What programming language do I need to use to explain my models?

I have tried to make this guide programming-language-agnostic. However, I personally use Python for machine learning and for explaining my ML models, and I do not know whether all of these techniques have implementations outside of Python.

I don't have a technical background. Is this guide relevant to me?

Though this guide is tailored for use by model developers, anyone involved in the development of AI products can use this guide to learn about what explanatory techniques are out there. This way, you can be a part of the process of choosing how to explain your ML models.

Can you only explain machine learning models with these tools or can you explain models not built with machine learning?

This guide was developed for explaining machine learning models. However, most of the tools mentioned (aside from certain example-based explanations) can, with a bit of extra work, also be applied to other kinds of models that make predictions.

Do you cover large language models (LLMs)?

No. This guide does not cover how to build, use, or explain large language models. For a comprehensive view of what is and what is not covered, please see the introduction of the guide.


Contact Me

Get in touch with me for more information, to discuss the contents of the website, or to discuss future potential collaborations.


About Me

I'm Ari Tal, the person behind this initiative. It might not surprise you to hear that I have a passion for Explainable AI (XAI). I like to open up an AI system to understand the driving forces behind how its model was formed and what its decision-making process looks like. Relying on a single metric to ensure your model is performing as expected does not feel satisfying to me.

What if your dataset is not sufficiently comprehensive? That could lead to a model that performs well on your data but fails on unseen data or on specific subpopulations - meaning you could have a model that is biased, perpetuates discrimination, or otherwise behaves unfairly.

How do you communicate model behavior to other members of your team? If you have an exceedingly complex model (as we often do when utilizing machine learning), communication about your AI development efforts can be quite stifled. For example, your subject matter expert might never realize that the model you developed behaves entirely differently than they believe it should.

What do you do when your model has a problem or has unexpected behavior? Auditing a model, tracing how it makes its decisions, and debugging issues can be nearly impossible without tools to inspect model behavior.

These are just a few questions that come to mind when people ask me about my fascination with XAI.

Transparency paves the way for accountability and thus can have downstream impacts on the fairness and equity of an AI system. That means the ability to communicate the behavior of your models and how they reach decisions can be a foundational necessity for responsible AI development and for building trustworthy AI systems. I hope to equip you with the tools for such communication, whether you are working with a teammate to debug your model or explaining a prediction to an end user. This is why the heart of this website is a guide designed to help you select appropriate XAI techniques, so that you can tailor your explanations to meet varied needs.

Join me in this exploration, and let's unravel the mysteries of AI together. Let's level up your AI with XAI!