Venkatsubramaniam, Bhaskaran
- Comparative Study of XAI Using Formal Concept Lattice and LIME
Authors
Affiliations
1 Master of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, IN
Source
ICTACT Journal on Soft Computing, Vol. 13, No. 1 (2023), Pagination: 2782-2791
Abstract
Local Interpretable Model Agnostic Explanation (LIME) is a technique to explain a black box machine learning model using a surrogate model approach. While this technique is very popular, inherent to its approach, explanations are generated from the surrogate model and not directly from the black box model. In sensitive domains like healthcare, this may not be acceptable as trustworthy. Such techniques also assume that features are independent and report the feature weights of the surrogate linear model as feature importance. In real-life datasets, features may be dependent, and a combination of a set of features with their specific values can be the deciding factor rather than individual feature importance. LIME also generates random instances around the point of interest to fit the surrogate model; these random instances need not be part of the original source and may even turn out to be meaningless. In this work, we compare LIME to explanations from the formal concept lattice. The lattice approach does not use a surrogate model but a deterministic one, generating synthetic data that respects the implications in the original dataset rather than generating it randomly. It obtains crucial feature combinations, with their values, as decision factors without presuming dependence or independence of features. Its explanations cover not only the point of interest but also a global explanation of the model, along with similar and contrastive examples around the point of interest. The explanations are textual and hence easier to comprehend than the weights of a surrogate linear model used to understand the black box model.
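For readers unfamiliar with the surrogate workflow being critiqued, the following minimal sketch (assuming a scikit-learn classifier and the lime package; the dataset is illustrative, not the paper's) shows how LIME perturbs an instance and reports weights of a local linear surrogate rather than of the black box itself.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train the "black box" to be explained.
data = load_iris()
black_box = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME samples random perturbations around the instance and fits a local
# linear surrogate; the weights printed below belong to that surrogate,
# not to the random forest itself.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)
explanation = explainer.explain_instance(
    data.data[0], black_box.predict_proba, num_features=4
)
print(explanation.as_list())  # (feature condition, surrogate weight) pairs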
Keywords
Explainable AI, XAI, Formal Concept Analysis, Lattice for XAI, Deterministic Methods for XAI
References
- C. Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead”, Nature Machine Intelligence, Vol. 1, No. 5, pp. 206-215, 2019.
- Alejandro Barredo Arrieta, Natalia Diaz-Rodriguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila and Francisco Herrera, “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI”, Information Fusion, Vol. 58, pp. 82-115, 2020.
- M.T. Ribeiro, S. Singh and C. Guestrin, “Why Should I Trust You?: Explaining the Predictions of Any Classifier”, Proceedings of International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.
- David Alvarez-Melis and Tommi S. Jaakkola, “On the Robustness of Interpretability Methods”, Proceedings of International Conference on Machine Learning, pp. 1-7, 2018.
- G. Visani, Enrico Bagli, Federico Chesani, Alessandro Poluzzi and Davide Capuzzo, “Statistical Stability Indices for LIME: Obtaining Reliable Explanations for Machine Learning Models”, Journal of the Operational Research Society, Vol. 73, No. 1, pp. 91-101, 2022.
- Marzyeh Ghassemi, Luke Oakden-Rayner and Andrew L. Beam, “The False Hope of Current Approaches to Explainable Artificial Intelligence in Healthcare”, The Lancet Digital Health, Vol. 3, No. 11, pp. 745-750, 2021.
- S.M. Lundberg and S.I. Lee, “A Unified Approach to Interpreting Model Predictions”, Advances in Neural Information Processing Systems, Vol. 30, pp. 4765-4774, 2017.
- R.R. Selvaraju, A. Das and D. Batra, “Grad-CAM: Why did you say that?”, CoRR abs/1611.07450, pp. 1-13, 2016.
- D. Smilkov, N. Thorat, B. Kim and M. Wattenberg, “SmoothGrad: Removing Noise by Adding Noise”, CoRR abs/1706.03825, pp. 1-12, 2017.
- J.T. Springenberg and M.A. Riedmiller, “Striving for Simplicity: The All Convolutional Net”, Proceedings of International Conference on Learning Representations Workshop Track, pp. 1-6, 2015.
- M.L. Leavitt and A. Morcos, “Towards Falsifiable Interpretability Research”, Proceedings of International Workshop on Neural Information Processing Systems, pp. 98-104, 2020.
- M. Sundararajan, A. Taly and Q. Yan, “Axiomatic Attribution for Deep Networks”, Proceedings of International Conference on Machine Learning, pp. 3319-3328, 2017.
- J. Adebayo, J. Gilmer, M. Muelly and B. Kim, “Sanity Checks for Saliency Maps”, Advances in Neural Information Processing Systems, pp. 9525-9536, 2018.
- Venkatsubramaniam Bhaskaran and Pallav Kumar Baruah, “A Novel Approach to Explainable AI Using Formal Concept Lattice”, International Journal of Innovative Technology and Exploring Engineering, Vol. 11, No. 7, pp. 36-48, 2022.
- A. Sangroya, M. Rastogi and L. Vig, “Guided-LIME: Structured Sampling based Hybrid Approach towards Explaining Blackbox Machine Learning Models”, Proceedings of International Workshop on Computational Intelligence, pp. 1-17, 2020.
- A. Sangroya, C. Anantaram, M. Rawat and M. Rastogi, “Using Formal Concept Analysis to Explain Black Box Deep Learning Classification Models”, Proceedings of International Workshop on Machine Learning, pp. 19-26, 2019.
- UCI, “UC Irvine Machine Learning Repository”, Available at: https://archive.ics.uci.edu/ml/index.php, Accessed in 2022.
- R. Wille, “Concept Lattices and Conceptual Knowledge Systems”, Computers and Mathematics with Applications, Vol. 23, pp. 493-515, 1992.
- UCI, “UCI Car Evaluation Data Set”, Available at: https://archive.ics.uci.edu/ml/datasets/Car+Evaluation, Accessed in 2022.
- XAI Using Formal Concept Lattice for Image Data
Authors
Affiliations
1 Department of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, IN
Source
ICTACT Journal on Image and Video Processing, Vol. 13, No. 3 (2023), Pagination: 2904-2913
Abstract
A formal concept lattice can be used to generate explanations from a black box model. This novel approach has been applied and proven on tabular data, and has been compared to popular techniques in XAI. In this work, we apply the approach to image data. Image data generally has large dimensions and hence poses a challenge for building a formal concept lattice. We break the image into parts and build multiple sub-lattices. Combining the sub-lattice explanations, we generate the complete explanation for the entire image. We present our work beginning with a simple synthetic dataset to provide an intuitive idea of the explanations and their credibility. This is followed by explanations of a model built on the popular MNIST dataset, demonstrating the consistency of explanations on a real dataset. Text explanations from the lattice are converted to images for ease of visual understanding. We compare our work with DeepLIFT by viewing image masks obtained through contrastive explanation for specific digits from the MNIST dataset. This work demonstrates the feasibility of using formal concept lattices for image data.
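The decomposition step can be pictured with a short sketch (our illustration, not the paper's code; all names are hypothetical): a 28x28 MNIST-style image is split into tiles, and each tile is binarized into the attribute set of its own formal sub-context, from which one sub-lattice would be built.

import numpy as np

def image_to_subcontexts(img, parts=2, threshold=0.5):
    # Split a square image into parts x parts tiles; for each tile, collect
    # the set of "on" pixel attributes such as "p3_r2c7" (tile 3, row 2,
    # column 7). Each attribute set would seed one formal sub-context.
    h, w = img.shape
    th, tw = h // parts, w // parts
    contexts = []
    tile_id = 0
    for i in range(parts):
        for j in range(parts):
            tile = img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            attrs = {
                "p%d_r%dc%d" % (tile_id, r, c)
                for r in range(th)
                for c in range(tw)
                if tile[r, c] > threshold
            }
            contexts.append(attrs)
            tile_id += 1
    return contexts  # one attribute set per sub-lattice

# Toy usage on a random binary "image".
rng = np.random.default_rng(0)
img = (rng.random((28, 28)) > 0.8).astype(float)
print([len(s) for s in image_to_subcontexts(img)])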
Keywords
Explainable AI, XAI, Formal Concept Analysis, Lattice for XAI, XAI for Images
References
- C. Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead”, Nature Machine Intelligence, Vol. 1, No. 5, pp. 206-215, 2019.
- Alejandro Barredo Arrieta, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila and Francisco Herrera, “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI”, Information Fusion, Vol. 58, pp. 82-115, 2020.
- M.T. Ribeiro and C. Guestrin, “Why Should I Trust You?: Explaining the Predictions of Any Classifier”, Proceedings of ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.
- David Alvarez-Melis and Tommi S. Jaakkola, “On the Robustness of Interpretability Methods”, arXiv preprint arXiv:1806.08049, 2018.
- G. Visani, Alessandro Poluzzi and Davide Capuzzo, “Statistical Stability Indices for LIME: Obtaining Reliable Explanations for Machine Learning Models”, Journal of the Operational Research Society, Vol. 73, No. 1, pp. 91-101, 2022.
- Marzyeh Ghassemi, Luke Oakden-Rayner and Andrew L Beam, “The False Hope of Current Approaches to Explainable Artificial Intelligence in Healthcare”, The Lancet Digital Health, Vol. 3, No. 11, pp. 745-750, 2021.
- S.M. Lundberg and S.I. Lee, “A Unified Approach to Interpreting Model Predictions”, Advances in Neural Information Processing Systems, Vol. 30, pp. 4765-4774, 2017.
- R.R. Selvaraju and D. Batra, “Grad-CAM: Why did you say that?”, CoRR abs/1611.07450, pp. 1-13, 2016.
- D. Smilkov and M. Wattenberg, “SmoothGrad: Removing Noise by Adding Noise”, CoRR abs/1706.03825, pp. 1-12, 2017.
- J.T. Springenberg and M.A. Riedmiller, “Striving for Simplicity: The All Convolutional Net”, Proceedings of International Conference on Learning Representations Workshop Track, 2015.
- M.L. Leavitt and A. Morcos, “Towards Falsifiable Interpretability Research”, Proceedings of International Conference on Neural Information Processing Systems ML Retrospectives, Surveys and Meta-Analyses, pp. 1-13, 2020.
- M. Sundararajan and Q. Yan, “Axiomatic Attribution for Deep Networks”, Proceedings of International Conference on Machine Learning, pp. 3319-3328, 2017.
- J. Adebayo and B. Kim, “Sanity Checks for Saliency Maps”, Advances in Neural Information Processing Systems, pp. 9525-9536, 2018.
- Venkatsubramaniam Bhaskaran and Pallav Kumar Baruah, “A Novel Approach to Explainable AI Using Formal Concept Lattice”, International Journal of Innovative Technology and Exploring Engineering, Vol. 11, No. 7, pp. 36-48, 2022.
- A. Sangroya and L. Vig, “Guided-LIME: Structured Sampling based Hybrid Approach towards Explaining Blackbox Machine Learning Models”, Proceedings of International Conference on Machine Learning, pp. 1-16, 2020.
- A. Sangroya and M. Rastogi, “Using Formal Concept Analysis to Explain Black Box Deep Learning Classification Models”, Proceedings of International Conference on Artificial Intelligence, pp. 19-26, 2019.
- UCI, “UC Irvine Machine Learning Repository”, Available at: https://archive.ics.uci.edu/ml/index.php, Accessed in 2022.
- R. Wille, “Concept Lattices and Conceptual Knowledge Systems”, Computers and Mathematics with Applications, Vol. 23, pp. 493-515, 1992.
- UCI, “UCI Car Evaluation Data Set”, Available at: https://archive.ics.uci.edu/ml/datasets/Car+Evaluation, Accessed in 2022.
- Avanti Shrikumar, Peyton Greenside and Anshul Kundaje, “Learning Important Features Through Propagating Activation Differences”, Proceedings of International Conference on Machine Learning, pp. 3145-3153, 2017.
- Jianqing Fan, Cong Ma and Yiqiao Zhong, “A Selective Overview of Deep Learning”, Proceedings of International Conference on Machine Learning, pp. 98-104, 2019.
- Laurens Van Der Maaten and Geoffrey Hinton, “Visualizing Data using t-SNE”, Journal of Machine Learning Research, Vol. 9, pp. 2579-2605, 2008.
- Ross Girshick, Jeff Donahue, Trevor Darrell and Jitendra Malik, “Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation”, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 580-587, 2014.
- Matthew D. Zeiler and Rob Fergus, “Visualizing and Understanding Convolutional Networks”, Proceedings of European Conference on Computer Vision, pp. 818-833, 2014.
- Karen Simonyan, Andrea Vedaldi and Andrew Zisserman, “Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps”, Proceedings of International Conference on Learning Representations Workshop Track, pp. 1-9, 2014.
- B. Zhou, A. Khosla, A. Lapedriza, A. Oliva and A. Torralba, “Learning Deep Features for Discriminative Localization”, Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition, pp. 2921-2929, 2016.
- R.R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh and D. Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization”, Proceedings of IEEE International Conference on Computer Vision, pp. 618-626, 2017.
- A. Chattopadhay, A. Sarkar, P. Howlader and V.N. Balasubramanian, “Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks”, Proceedings of IEEE Winter Conference on Applications of Computer Vision, pp. 839-847, 2018.
- Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viegas and Martin Wattenberg, “SmoothGrad: Removing Noise by Adding Noise”, CoRR abs/1706.03825, pp. 1-9, 2017.
- Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler and Fernanda Viegas, “Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)”, Proceedings of International Conference on Machine Learning, pp. 2668-2677, 2018.
- Evaluation of Lattice Based XAI
Authors
Affiliations
1 Department of Math and Computer Science, Sri Sathya Sai Institute of Higher Learning, IN
Source
ICTACT Journal on Soft Computing, Vol. 14, No. 2 (2023), Pagination: 3180-3187
Abstract
With multiple methods available to extract explanations from a black box model, it becomes important to evaluate the correctness of these Explainable AI (XAI) techniques themselves. While many XAI evaluation methods require manual intervention, we use computable XAI evaluation methods, in order to remain objective, to test the basic nature and sanity of an XAI technique. We pick four basic axioms and three sanity tests from the existing literature that XAI techniques are expected to satisfy: axioms such as Feature Sensitivity, Implementation Invariance and Symmetry Preservation, and sanity tests such as Model Parameter Randomization, Model-Outcome Relationship and Input Transformation Invariance. After reviewing the axioms and sanity tests, we apply them to existing XAI techniques to check whether these are satisfied. Thereafter, we evaluate our lattice-based XAI technique against the same axioms and sanity tests using a mathematical approach. This work proves these axioms and sanity tests for our lattice-based XAI technique, establishing the correctness of the explanations it extracts.
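As a flavor of one such computable check, here is a minimal sketch (our illustration under simplifying assumptions, not the paper's procedure) of the model parameter randomization sanity test: feature attributions from a trained model should decorrelate from attributions of the same model once its weights are randomized.

import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

# Attribution proxy: absolute per-feature coefficient magnitude.
attrib_trained = np.abs(model.coef_[0])

# Randomize the model's parameters in place.
rng = np.random.default_rng(0)
model.coef_ = rng.normal(size=model.coef_.shape)
attrib_random = np.abs(model.coef_[0])

# A sane attribution method should lose its feature ranking once the
# parameters carry no information; expect a rank correlation near zero.
rho, _ = spearmanr(attrib_trained, attrib_random)
print("rank correlation after randomization: %.2f" % rho)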
Keywords
Explainable AI, XAI, Formal Concept Analysis, Lattice for XAI, XAI Evaluation
References
- M.T. Ribeiro, S. Singh and C. Guestrin, “Why Should I Trust You?: Explaining the Predictions of Any Classifier”, Proceedings of International Conference on Knowledge Discovery and Data Mining, pp. 1135-1144, 2016.
- Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh and Dhruv Batra, “Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization”, Proceedings of the IEEE International Conference on Computer Vision, pp. 618-626, 2017.
- D. Smilkov and M. Wattenberg, “SmoothGrad: Removing Noise by Adding Noise”, CoRR abs/1706.03825, pp. 1-12, 2017.
- R. Wille, “Concept Lattices and Conceptual Knowledge Systems”, Computers and Mathematics with Applications, Vol. 23, pp. 493-515, 1992.
- Bhaskaran Venkatsubramaniam and Pallav Kumar Baruah, “A Novel Approach to Explainable AI Using Formal Concept Lattice”, International Journal of Innovative Technology and Exploring Engineering, Vol. 11, No. 7, pp. 36-48, 2022.
- Bhaskaran Venkatsubramaniam and Pallav Kumar Baruah, “XAI Using Formal Concept Lattice for Image Data”, ICTACT Journal on Image and Video Processing, Vol. 13, No. 3, pp. 2904-2913, 2023.
- Bhaskaran Venkatsubramaniam and Pallav Kumar Baruah, “Comparative Study of XAI Using Formal Concept Lattice and LIME”, ICTACT Journal on Soft Computing, Vol. 13, No. 1, pp. 2782-2791, 2022.
- C. Rudin, “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead”, Nature Machine Intelligence, Vol. 1, No. 5, pp. 206-215, 2019.
- Alejandro Barredo Arrieta, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador Garcia, Sergio Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila and Francisco Herrera, “Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges Toward Responsible AI”, Information Fusion, Vol. 58, pp. 82-115, 2020.
- O. Biran and C. Cotton, “Explanation and Justification in Machine Learning: A Survey”, Proceedings of Workshop on Explainable Artificial Intelligence, pp. 1-6, 2017.
- R.R. Hoffman and Jordan Litman, “Metrics for Explainable AI: Challenges and Prospects”, CoRR abs/1812.04608, pp. 1-14, 2018.
- Sina Mohseni, Niloofar Zarei and Eric D. Ragan, “A Multidisciplinary Survey and Framework for Design and Evaluation of Explainable AI Systems”, ACM Transactions on Interactive Intelligent Systems, Vol. 11, No. 3-4, pp. 1-45, 2021.
- Andrew Slavin Ross, Michael C. Hughes and Finale Doshi-Velez, “Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations”, Proceedings of International Joint Conference on Artificial Intelligence, pp. 2662-2670, 2017.
- S.M. Lundberg and S.I. Lee, “A Unified Approach to Interpreting Model Predictions”, Advances in Neural Information Processing Systems, Vol. 30, pp. 4765-4774, 2017.
- Avanti Shrikumar, Peyton Greenside and Anshul Kundaje, “Learning Important Features Through Propagating Activation Differences”, Proceedings of International Conference on Machine Learning, pp. 3145-3153, 2017.
- Oliver Zhang, Randall J. Lee, Yiran Chen and Xiao Hu, “Explainability Metrics of Deep Convolutional Networks for Photoplethysmography Quality Assessment”, IEEE Access, Vol. 9, pp. 29736-29745, 2021.
- M. Sundararajan and Q. Yan, “Axiomatic Attribution for Deep Networks”, Proceedings of International Conference on Machine Learning, pp. 3319-3328, 2017.
- J. Adebayo and B. Kim, “Sanity Checks for Saliency Maps”, Advances in Neural Information Processing Systems, pp. 9525-9536, 2018.
- M.D. Zeiler and Rob Fergus, “Visualizing and Understanding Convolutional Networks”, Proceedings of European Conference on Computer Vision, pp. 818-833, 2014.
- Jost Tobias Springenberg, Alexey Dosovitskiy, Thomas Brox and Martin Riedmiller, “Striving for Simplicity: The All Convolutional Net”, Proceedings of International Conference on Learning Representations Workshop Track, pp. 1-8, 2015.
- Sebastian Bach, Frederick Klauschen, Klaus-Robert Muller and Wojciech Samek, “On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation”, PLoS One, Vol. 10, No. 7, pp. 1-12, 2015.