Title: From Bounds to Defenses: A Comprehensive Look at GNN Robustness

Abstract: Graph Neural Networks (GNNs) have achieved remarkable success in both academic and industrial settings, providing rich ground for theoretical analysis and setting new benchmarks on a variety of learning tasks. As their adoption grows, particularly for industrial applications, ensuring robustness under adversarial perturbations becomes increasingly critical.
In this talk, we will address these challenges by investigating the robustness of different GNN architectures. First, we introduce a theoretical framework for analyzing adversarial robustness, deriving an upper bound on model sensitivity to input perturbations. Building on these insights, we propose lightweight modifications that not only enhance robustness but also come with formal guarantees. Notably, one simple yet highly effective method injects noise into GNN hidden states, substantially improving robustness. We will also present GCORN, an iterative orthonormalization algorithm designed to maintain approximately orthonormal weight matrices within GNNs. In addition, we will discuss how hyperparameter choices, including weight initialization and the number of training epochs, significantly influence final robustness. The main aim of the talk is therefore to illuminate both the theoretical underpinnings and practical pathways to more reliable GNN models.
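To give a flavor of the kind of iterative orthonormalization the abstract refers to, here is a minimal sketch based on the classical Björck iteration, which drives a matrix toward the nearest matrix with orthonormal columns. This is only an illustration of the general technique; the exact GCORN update rule, scaling, and iteration count are not specified in this announcement, so the function name and parameters below are assumptions.

```python
import numpy as np

def bjorck_orthonormalize(W, n_iters=20):
    """Approximately orthonormalize the columns of W (illustrative sketch,
    not the exact GCORN procedure)."""
    # Scale so the spectral norm is at most 1, which guarantees convergence
    # of the iteration below.
    W = W / np.linalg.norm(W, ord=2)
    for _ in range(n_iters):
        # Bjorck update: W <- 1.5 W - 0.5 W W^T W.
        # Each singular value s is mapped to 1.5 s - 0.5 s^3, whose
        # fixed point is 1, so all singular values converge to 1.
        W = 1.5 * W - 0.5 * W @ W.T @ W
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
W_orth = bjorck_orthonormalize(W)
# Columns are now approximately orthonormal: W_orth^T W_orth ~ I
print(np.allclose(W_orth.T @ W_orth, np.eye(4), atol=1e-6))
```

Keeping weight matrices near-orthonormal bounds how much a layer can amplify an input perturbation (the spectral norm stays close to 1), which is the intuition behind using such a step as a robustness mechanism.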

Dates

March 1st, 2025 → March 15th, 2025

Abstract submission deadline

March 8th, 2025 → March 15th, 2025

Paper submission deadline

April 14th, 2025

Accept/Reject notification

May 21-23, 2025

Netys Conference

Proceedings

Revised selected papers will be published as post-proceedings in Springer's Lecture Notes in Computer Science (LNCS) series.

Partners & Sponsors (TBA)