I would like to express my heartfelt gratitude to those who have greatly contributed to my journey in Explainable Artificial Intelligence (XAI).

Firstly, a special acknowledgment goes to Christoph Molnar for his enlightening text, Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. This book was not only a cornerstone of my studies but also served as a valuable resource for the "Leveling Up In XAI" group at Capital One. I am grateful to my colleagues in this group for our collaborative learning and insightful discussions.

I also extend my appreciation to the members of various XAI project teams at Capital One. Their shared expertise and cooperative efforts have significantly enhanced my practical understanding of XAI.

Additionally, I would like to thank my co-leads and fellow participants in the Applied Responsible AI initiative at Capital One. Their dedication to advancing responsible AI practices has been both inspiring and educational.

Lastly, I acknowledge the assistance of ChatGPT in structuring and refining this article. Its contributions were vital in bringing clarity and cohesiveness to the content.

Thank you all for your invaluable support and insights that have profoundly shaped my understanding and application of XAI principles.


Alikhademi, K., Richardson, B., Drobina, E., & Gilbert, J. E. (2021, June 14). Can explainable AI explain unfairness? A framework for evaluating explainable AI. arXiv:2106.07483. Retrieved November 2023.

Deck, L., Schoeffer, J., De-Arteaga, M., & Kühl, N. (2023, November 23). A critical survey on fairness benefits of XAI. arXiv:2310.13007. Retrieved December 3, 2023.

Hamilton, M. (2023, November 27). Partial dependence (PDP) and individual conditional expectation (ICE) plots. SynapseML repository, GitHub. Retrieved December 3, 2023.

Krishna, S., Han, T., Gu, A., Pombra, J., Jabbari, S., Wu, S., & Lakkaraju, H. (2022, February 8). The disagreement problem in explainable machine learning: A practitioner's perspective. arXiv:2202.01602. Retrieved November 2023.

Lakkaraju, H., Kamar, E., Caruana, R., & Leskovec, J. (2019). Faithful and customizable explanations of black box models. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, 131–138.

Molnar, C. (2023). Interpretable Machine Learning: A Guide for Making Black Box Models Explainable (2nd ed.). Version as of August 21, 2023.

OpenAI. (2023). ChatGPT (multiple versions) [Large language model].

Singla, S. (2023, August 16). What are explainability AI techniques? Why do we need it? Analytics Vidhya.

IBM Research. (2021, February 9). Trustworthy AI. Retrieved November 21, 2023.

Van Otten, N. (2023, May 26). L1 and L2 regularization explained, when to use them & practical how-to examples. Spot Intelligence.

Xu, T. (2021, July 19). AI makes decisions we don’t understand. That’s a problem. Built In.