Title: Auditing Differential Privacy Using Membership Inference Attacks

Abstract: The celebrated success of machine learning depends on the availability of large amounts of data. This data is increasingly sensitive and detailed, raising privacy concerns. Differential Privacy (DP) is considered the gold standard for privacy-preserving data analysis. With the success of DP, research results have proliferated, enabling the construction of intricate privacy-preserving data pipelines. Since DP is a theoretical constraint, a DP algorithm comes with a mathematical proof that yields a guarantee on the privacy leakage, and an implementation that runs in production. However, proofs may have mistakes, and implementations may have bugs. This raises the question of privacy auditing, i.e., whether it is possible to empirically certify the privacy of an algorithm. In this talk, we study the problem of privacy auditing in relation to privacy attacks. We present a typical privacy audit pipeline, which runs a privacy attack and then translates the adversary's errors into a guarantee on the privacy of the algorithm. We also revisit Membership Inference (MI) attacks, a privacy attack that tries to infer whether a target point was included in the input of an algorithm. We design optimal MI attacks, with an application to privacy auditing in the white-box federated learning setting.
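To make the audit pipeline concrete, the sketch below shows one standard way an MI adversary's errors can be translated into an empirical lower bound on ε, via the hypothesis-testing characterization of (ε, δ)-DP (FPR + e^ε · FNR ≥ 1 − δ, and symmetrically with FPR and FNR swapped). This is a minimal illustration, not the pipeline from the talk: the `mechanism` and `attack` callables are hypothetical placeholders, the Laplace counting query and threshold attack in the usage example are toy stand-ins, and a rigorous audit would replace the raw empirical error rates with high-confidence (e.g., Clopper-Pearson) upper bounds.

```python
import math
import random

def audit_epsilon(mechanism, attack, target, n_trials=5000, delta=0.0):
    """Translate an MI adversary's errors into an empirical lower bound on eps.

    Runs `mechanism` n_trials times with and without `target` in its input,
    lets `attack` guess membership from each output, and applies the
    hypothesis-testing bound eps >= log((1 - delta - FPR) / FNR),
    and symmetrically with FPR and FNR swapped.
    """
    fp = fn = 0
    for _ in range(n_trials):
        if not attack(mechanism(include_target=True, target=target), target):
            fn += 1  # target was in, attack said "out": false negative
        if attack(mechanism(include_target=False, target=target), target):
            fp += 1  # target was out, attack said "in": false positive
    fpr, fnr = fp / n_trials, fn / n_trials
    # Avoid log(1/0) when the attack makes no errors in finite samples;
    # a rigorous audit uses Clopper-Pearson upper bounds on FPR/FNR instead.
    fpr, fnr = max(fpr, 1 / n_trials), max(fnr, 1 / n_trials)

    def one_sided(err_a, err_b):
        num = 1.0 - delta - err_a
        return math.log(num / err_b) if num > 0 else 0.0

    return max(one_sided(fpr, fnr), one_sided(fnr, fpr), 0.0)

# Toy usage: audit a Laplace counting query (sensitivity 1, true eps = 1.0).
def laplace_count(include_target, target, eps=1.0):
    count = 100 + (1 if include_target else 0)
    # The difference of two Exp(eps) variables is Laplace with scale 1/eps.
    return count + random.expovariate(eps) - random.expovariate(eps)

# Threshold attack: guess "member" when the noisy count exceeds the midpoint.
mi_attack = lambda output, target: output > 100.5

print(audit_epsilon(laplace_count, mi_attack, target=None))
# Prints roughly 0.83, a valid lower bound on the true eps = 1.0.
```

Note that the bound is conservative by construction: a weak attack yields a small empirical ε, so a gap between the audited and claimed ε may reflect either a correct implementation or a suboptimal attack. This is why the design of optimal MI attacks matters for audit tightness.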

Dates

Abstract submission deadline: March 1st, 2025 → March 15th, 2025

Paper submission deadline: March 8th, 2025 → March 15th, 2025

Accept/Reject notification: April 14th, 2025

Netys Conference: May 21-23, 2025

Proceedings

Revised selected papers will be published as post-proceedings in Springer's Lecture Notes in Computer Science (LNCS) series.

Partners & Sponsors (TBA)