Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing

The work titled “Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing” has been published in the International Journal of Applied Earth Observation and Geoinformation.

Part of this work was supported by NEANIAS; during the 3rd software release we plan to integrate some of this software into U3.

  • Kakogeorgiou, I. [1], Karantzalos, K. [2], 2021. Evaluating explainable artificial intelligence methods for multi-label deep learning classification tasks in remote sensing. International Journal of Applied Earth Observation and Geoinformation 103, 102520. https://doi.org/10.1016/j.jag.2021.102520
  • [1] Remote Sensing Laboratory, National Technical University of Athens, Zographou 15780, Greece; [2] Athena Research Center, Athens, Greece.

Abstract

Although deep neural networks hold the state of the art in several remote sensing tasks, their black-box operation hinders the understanding of their decisions, concealing any bias and other shortcomings in datasets and model performance. To this end, we have applied explainable artificial intelligence (XAI) methods in remote sensing multi-label classification tasks towards producing human-interpretable explanations and improving transparency. In particular, we utilized and trained deep learning models with state-of-the-art performance on the benchmark BigEarthNet and SEN12MS datasets. Ten XAI methods were employed towards understanding and interpreting the models’ predictions, along with quantitative metrics to assess and compare their performance.
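To make the setup concrete, below is a minimal sketch of per-label attribution for a multi-label classifier, written with PyTorch and the Captum library. It is not the authors’ code: the ResNet-18 backbone, the 19-label sigmoid head and the 120x120 input size (loosely modelled on BigEarthNet patches) are illustrative assumptions.

    import torch
    import torchvision.models as models
    from captum.attr import LayerGradCam, LayerAttribution

    # Illustrative multi-label classifier: ResNet-18 with 19 sigmoid
    # outputs (BigEarthNet's 19-label nomenclature); the paper's exact
    # architectures and training setup differ.
    NUM_LABELS = 19
    model = models.resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, NUM_LABELS)
    model.eval()

    x = torch.randn(1, 3, 120, 120)   # one random stand-in 120x120 RGB patch
    probs = torch.sigmoid(model(x))   # independent per-label probabilities

    # In the multi-label setting each label gets its own attribution map;
    # here we explain the most confident label with Grad-CAM.
    target = int(probs.argmax(dim=1))
    gradcam = LayerGradCam(model, model.layer4)
    attr = gradcam.attribute(x, target=target)                 # coarse map from the last conv block
    heatmap = LayerAttribution.interpolate(attr, (120, 120))   # upsample to input size
    print(heatmap.shape, probs[0, target].item())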

Numerous experiments were performed to assess the overall performance of the XAI methods for straightforward prediction cases, cases with multiple competing labels, as well as misclassification cases. According to our findings, Occlusion, Grad-CAM and Lime were the most interpretable and reliable XAI methods. However, none of them delivers high-resolution outputs, while, unlike Grad-CAM, both Lime and Occlusion are computationally expensive. We also highlight different aspects of XAI performance and offer insights on black-box decisions in order to improve transparency, understand model behavior and reveal the datasets’ particularities.
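The cost contrast drawn above is easy to see in code: Grad-CAM needs a single forward and backward pass, while Occlusion needs one forward pass per sliding-window position. A hedged sketch follows, reusing the model, patch and target from the previous snippet; the window and stride sizes are arbitrary illustrative choices, not the paper’s settings.

    from captum.attr import Occlusion

    # Occlusion slides a baseline-valued window over the input and records
    # the change in the target label's score, so its cost grows with the
    # number of window positions (hundreds of forward passes per patch here).
    occlusion = Occlusion(model)
    attr = occlusion.attribute(
        x,
        target=target,
        sliding_window_shapes=(3, 15, 15),  # occlude all 3 bands, 15x15 pixels at a time
        strides=(3, 8, 8),
    )
    print(attr.shape)  # per-pixel attributions, same shape as the input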

Acknowledgments

This work has been partially supported by NEANIAS, funded by the European Union’s Horizon 2020 research and innovation programme, under grant agreement No 863448. Part of this work was also funded by the Operational Program “Competitiveness, Entrepreneurship and Innovation 2014-2020” (co-funded by the European Regional Development Fund) under the project Τ6ΥΒΠ-00153 ‘SOSAME’.

Keywords

Interpretability, Explainability, Deep neural networks, XAI, Black-box models, BigEarthNet, SEN12MS


You can get the article at: International Journal of Applied Earth Observation and Geoinformation.


NEANIAS is a Research and Innovation Action funded by the European Union under the Horizon 2020 research and innovation programme via grant agreement No. 863448.