Acknowledgements

I would like to express my heartfelt gratitude to those who have greatly contributed to my journey in Explainable Artificial Intelligence (XAI).

Firstly, a special acknowledgment goes to Christoph Molnar for his enlightening text, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. This book was not only a cornerstone of my studies but also a valuable resource for the "Leveling Up In XAI" group at Capital One. I am grateful to my colleagues in this group for our collaborative learning and insightful discussions.

I also extend my appreciation to the members of various XAI project teams at Capital One. Their shared expertise and cooperative efforts have significantly enhanced my practical understanding of XAI.

Additionally, I would like to thank my co-leads and fellow participants in the Applied Responsible AI initiative at Capital One. Their dedication to advancing responsible AI practices has been both inspiring and educational.

Lastly, I acknowledge the assistance of ChatGPT in structuring and refining this article. Its contributions were vital in bringing clarity and cohesiveness to the content.

Thank you all for your invaluable support and insights that have profoundly shaped my understanding and application of XAI principles.


References

Alikhademi, K., Richardson, B., Drobina, E., & Gilbert, J. E. (2021, June 14). Can explainable AI explain unfairness? A framework for evaluating explainable AI. arXiv:2106.07483. Retrieved November 2023, from https://arxiv.org/abs/2106.07483

Deck, L., Schoeffer, J., De-Arteaga, M., & Kühl, N. (2023, November 23). A critical survey on fairness benefits of XAI. arXiv:2310.13007. Retrieved December 3, 2023, from https://arxiv.org/abs/2310.13007

Hamilton, M. (2023, November 27). Partial Dependence (PDP) and Individual Conditional Expectation (ICE) plots. SynapseML Repository, GitHub. Retrieved December 3, 2023, from https://microsoft.github.io/SynapseML/docs/Explore%20Algorithms/Responsible%20AI/PDP%20and%20ICE%20Explainers

Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., & Lakkaraju, H. (2022, February 8). The disagreement problem in explainable machine learning: A practitioner's perspective. arXiv:2202.01602. Retrieved November 2023, from https://arxiv.org/abs/2202.01602

Lakkaraju, H., Kamar, E., Caruana, R., & Leskovec, J. (2019). Faithful and customizable explanations of black box models. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 131–138. https://doi.org/10.1145/3306618.3314229

Molnar, C. (2023). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (2nd ed., version as of August 21, 2023). https://christophm.github.io/interpretable-ml-book

OpenAI. (2023). ChatGPT (multiple versions) [Large language model]. https://chat.openai.com/chat

Singla, S. (2023, August 16). What are explainability AI techniques? Why do we need it? Analytics Vidhya. https://www.analyticsvidhya.com/blog/2023/03/what-are-explainability-ai-techniques-why-do-we-need-it

IBM Research. (2021, February 9). Trustworthy AI. Retrieved November 21, 2023, from https://research.ibm.com/topics/trustworthy-ai

Van Otten, N. (2023, May 26). L1 and L2 regularization explained, when to use them & practical how-to examples. Spot Intelligence. https://spotintelligence.com/2023/05/26/l1-l2-regularization/#1_L1_regularization

Xu, T. (2021, July 19). AI makes decisions we don’t understand. That’s a problem. Built In. https://builtin.com/artificial-intelligence/ai-right-explanation