17th AIAI 2021, 25-27 June 2021, Greece

A Lipschitz-Shapley Explainable Defense Methodology Against Adversarial Attacks

Konstantinos Demertzis, Panagiotis Kikiras, Lazaros Iliadis

Abstract:

  Every learning algorithm has a specific bias. This may be due to the choice of its hyperparameters, to the characteristics of its classification methodology, or even to the way the considered information is represented. As a result, Machine Learning algorithms are vulnerable to specialized attacks. Moreover, the training datasets are not always an accurate image of the real world. Their selection process, and the assumption that they follow the same distribution as all unknown cases, introduce another level of bias. Global and Local Interpretability (GLI) is a very important process that allows the determination of the right architectures to defend against Adversarial Attacks (ADA). It contributes towards a holistic view of the intelligent model, through which we can determine the most important features, understand how decisions are made, and identify the interactions between the involved features. This research paper introduces the innovative hybrid Lipschitz-Shapley approach for Explainable Defense Against Adversarial Attacks. The introduced methodology employs the Lipschitz constant and tracks its evolution during the training process of the intelligent model. The use of Shapley values offers clear explanations for the specific decisions made by the model.
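
  The abstract names two technical ingredients: tracking the Lipschitz constant of the model as training proceeds, and explaining individual decisions with Shapley values. The paper's exact procedure is not given here, so the Python sketch below is only a minimal illustration of those two steps under stated assumptions: lipschitz_lower_bound is a hypothetical helper that estimates an empirical lower bound of the constant from sampled input pairs, the MLPClassifier and synthetic data are stand-ins for the intelligent model, and the shap package is one common Shapley-value implementation.

    import numpy as np
    import shap
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    def lipschitz_lower_bound(predict_fn, X, n_pairs=2000, seed=0):
        # Empirical lower bound: max ||f(x1) - f(x2)|| / ||x1 - x2||
        # over randomly sampled pairs of training points.
        rng = np.random.default_rng(seed)
        i = rng.integers(0, len(X), size=n_pairs)
        j = rng.integers(0, len(X), size=n_pairs)
        keep = i != j                           # drop degenerate pairs
        x1, x2 = X[i[keep]], X[j[keep]]
        num = np.linalg.norm(predict_fn(x1) - predict_fn(x2), axis=1)
        den = np.linalg.norm(x1 - x2, axis=1)
        return float(np.max(num / den))

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # warm_start=True with max_iter=1 advances training by one epoch
    # per fit() call, so the constant's evolution can be observed.
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1,
                          warm_start=True, random_state=0)
    history = []
    for epoch in range(20):
        model.fit(X, y)
        history.append(lipschitz_lower_bound(model.predict_proba, X))
    print("Lipschitz estimate per epoch:", np.round(history, 3))

    # Shapley values for one decision, via the model-agnostic
    # KernelExplainer with a small background sample.
    explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X, 50))
    print("Per-feature Shapley values:", explainer.shap_values(X[:1]))

  Because the bound comes from finitely many sampled pairs, it can only underestimate the true Lipschitz constant; a sharp upward drift of the per-epoch curve is the kind of signal such monitoring is meant to surface, since models with large Lipschitz constants are more sensitive to small adversarial perturbations.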
