Canonical foliations of neural networks: application to robustness - Archive ouverte HAL
Preprint, Working Paper. Year: 2022

Canonical foliations of neural networks: application to robustness


Abstract

Adversarial attacks are an emerging threat to the trustworthiness of machine learning, and understanding them has become a crucial task. We propose a new view of neural network robustness using Riemannian geometry and foliation theory, and derive a new adversarial attack that takes the curvature of the data space into account. This attack, called the "dog-leg attack", is a two-step approximation of a geodesic in the data space. The data space is treated as a (pseudo-)Riemannian manifold equipped with the pullback of the Fisher Information Metric (FIM) of the neural network. In most cases, this metric is only semi-definite, and its kernel becomes a central object of study. A canonical foliation is derived from this kernel. The curvature of the foliation's leaves provides the appropriate correction for a two-step approximation of the geodesic, and hence a new, efficient adversarial attack. Our attack is tested on a toy example, a neural network trained to mimic the Xor function, and demonstrates better results than the state-of-the-art attack presented by Zhao et al. (2019).
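To make the pullback construction in the abstract concrete, here is a minimal sketch (not the authors' implementation) of the pullback FIM for a binary classifier p(x): the Fisher metric of a Bernoulli output is 1/(p(1-p)), so the pullback metric on the data space is G(x) = ∇p(x) ∇p(x)ᵀ / (p(x)(1-p(x))). The network weights below are hypothetical, hand-set to roughly mimic the Xor function, and the finite-difference gradient is an illustrative choice.

```python
import numpy as np

# Hedged sketch (not the paper's code): pullback of the Fisher
# Information Metric of a Bernoulli-output network onto input space:
#     G(x) = grad p(x) grad p(x)^T / (p(x) (1 - p(x))).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A tiny 2-2-1 MLP; weights are hypothetical, chosen to approximate Xor.
W1 = np.array([[4.0, 4.0], [-4.0, -4.0]])
b1 = np.array([-2.0, 6.0])
W2 = np.array([4.0, 4.0])
b2 = -6.0

def p(x):
    """Network output: estimated probability that Xor(x) = 1."""
    return sigmoid(W2 @ sigmoid(W1 @ x + b1) + b2)

def grad_p(x, eps=1e-6):
    """Central finite-difference gradient of p with respect to x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (p(x + e) - p(x - e)) / (2.0 * eps)
    return g

def pullback_fim(x):
    """Pullback metric G(x): positive semi-definite with rank at most 1
    here, so its kernel (a line field on R^2) spans the leaves of the
    canonical foliation discussed in the abstract."""
    px = p(x)
    g = grad_p(x)
    return np.outer(g, g) / (px * (1.0 - px))

x = np.array([0.3, 0.8])
G = pullback_fim(x)
print(np.linalg.matrix_rank(G))  # rank 1: the metric is only semi-definite
```

The rank-deficiency of G(x) is the point: its one-dimensional kernel at each input generates the foliation whose leaf curvature corrects the two-step ("dog-leg") geodesic approximation.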
Main file: FIM_foliation_and_neural_networks_attacks.pdf (693.68 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-03593479, version 1 (02-03-2022)

Identifiers

  • HAL Id: hal-03593479, version 1

Cite

Eliot Tron, Nicolas Couellan, Stéphane Puechmorel. Canonical foliations of neural networks: application to robustness. 2022. ⟨hal-03593479⟩
68 Views
23 Downloads
