Robustness of LIME on Image Classification Models

Open Access
- Author:
- Luo, Zhuolin
- Area of Honors:
- Statistics
- Degree:
- Bachelor of Science
- Document Type:
- Thesis
- Thesis Supervisors:
- Jia Li, Thesis Supervisor
- Priyangi Kanchana Bulathsinhala, Thesis Honors Advisor
- Keywords:
- Explainable Artificial Intelligence
- Convolutional Neural Network
- Image Classification
- LIME
- Robustness of LIME
- Trustworthiness of AI
- Interpretability
- Abstract:
- Over the past decade, the world's leading, profit-driven technology companies have been racing to improve their core deep neural network models. Scientists and regulators are optimistic about the prospects of deep convolutional neural network models for image classification, yet questions about the interpretability of these models remain a barrier to wider adoption. Explainable AI techniques such as LIME can provide explanations for individual predictions, but can we trust the explanations LIME produces? This paper develops a quantitative understanding of how robust the explanation model is, focusing on LIME explanations for models pre-trained on CIFAR-10: VGG16, VGG19, ResNet34, ResNet50, DenseNet121, and DenseNet169. By comparing the LIME explanations produced by each pair of same-architecture models, the paper evaluates the robustness level within a model architecture, mainly with respect to variation in the number of layers. Hypothesis testing is then performed to determine whether the LIME robustness level differs across architectures. The experiment shows that LIME robustness does vary between model architectures. Notably, LIME explanations can highlight dissimilar features when explaining the same prediction made by different CNN models, which poses significant problems in practice, especially in medical and healthcare applications.
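
Below is a minimal sketch of how such a pairwise comparison could be set up with the `lime` package and PyTorch. The model names (`vgg16_model`, `vgg19_model`) are hypothetical placeholders for CIFAR-10-trained classifiers, and the Jaccard overlap of important superpixels is only one illustrative similarity measure; the thesis's exact robustness metric may differ.

```python
# Sketch: compare LIME explanations for one image under two same-architecture
# models (e.g., VGG16 vs. VGG19). Assumptions: inputs are HxWx3 uint8 images in
# 0-255, and the models accept unnormalized 0-1 tensors (adjust preprocessing
# to match how the CIFAR-10 models were actually trained).
import numpy as np
import torch
import torch.nn.functional as F
from lime import lime_image

def make_predict_fn(model, device="cpu"):
    """Wrap a PyTorch classifier as the batch prediction function LIME expects."""
    model.eval()
    def predict(images):  # images: (N, H, W, 3) numpy array, values in 0-255
        x = torch.tensor(images, dtype=torch.float32).permute(0, 3, 1, 2) / 255.0
        with torch.no_grad():
            probs = F.softmax(model(x.to(device)), dim=1)
        return probs.cpu().numpy()
    return predict

def lime_superpixels(image, predict_fn, num_samples=1000, num_features=5):
    """Return the set of superpixel ids LIME marks as most important."""
    explainer = lime_image.LimeImageExplainer()
    exp = explainer.explain_instance(image, predict_fn,
                                     top_labels=1, num_samples=num_samples)
    label = exp.top_labels[0]
    _, mask = exp.get_image_and_mask(label, positive_only=True,
                                     num_features=num_features, hide_rest=False)
    return set(np.unique(exp.segments[mask > 0]))

def explanation_overlap(image, model_a, model_b):
    """Jaccard overlap between the important superpixels of two models."""
    sp_a = lime_superpixels(image, make_predict_fn(model_a))
    sp_b = lime_superpixels(image, make_predict_fn(model_b))
    return len(sp_a & sp_b) / max(len(sp_a | sp_b), 1)

# Usage (models are assumed to be CIFAR-10-trained classifiers):
# score = explanation_overlap(cifar_image, vgg16_model, vgg19_model)
```

Repeating such an overlap score over many test images, for each pair of models, would yield per-architecture robustness samples that could then be compared with a hypothesis test across architectures, as described in the abstract.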