Exploring the Unfairness of DP-SGD Across Settings
Research output: Contribution to conference › Paper › Research
End users and regulators require private and fair artificial intelligence models, but previous work suggests these objectives may be at odds. We use the CivilComments dataset to evaluate the impact of applying the *de facto* standard approach to privacy, DP-SGD, across several fairness metrics. We evaluate three implementations of DP-SGD: for dimensionality reduction (PCA), linear classification (logistic regression), and robust deep learning (Group-DRO). We establish a negative, logarithmic correlation between privacy and fairness for linear classification and robust deep learning. DP-SGD had no significant impact on fairness for PCA, but upon inspection it also did not appear to yield private representations.
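The core mechanism the abstract refers to, DP-SGD, replaces an ordinary gradient step with per-example gradient clipping followed by calibrated Gaussian noise. A minimal sketch of one such step for logistic regression, using only numpy (the function name, toy data, and hyperparameter values here are illustrative, not taken from the paper):

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD step for logistic regression: clip each per-example
    gradient to L2 norm <= clip_norm, add Gaussian noise scaled by
    noise_mult * clip_norm, then average and take a gradient step."""
    rng = np.random.default_rng(0) if rng is None else rng
    preds = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid predictions
    per_example_grads = (preds - y)[:, None] * X   # one gradient row per example
    # Clip each per-example gradient (this bounds any one example's influence)
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
    # Sum, add noise calibrated to the clipping bound, average over the batch
    noisy_sum = clipped.sum(axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=w.shape)
    return w - lr * noisy_sum / len(X)

# Toy batch: 3 examples, 2 features
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 0.0, 1.0])
w_new = dp_sgd_step(np.zeros(2), X, y)
```

The clipping bound is what limits each individual's contribution to the update; the noise magnitude (relative to that bound) is what determines the privacy budget, and tightening it is the lever that the paper's fairness results trade against.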
Original language | English |
---|---|
Publication date | 2022 |
Number of pages | 6 |
Publication status | Published - 2022 |
Event | Third AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI-22), VIRTUAL, 28 Feb 2022 → … |