Yilin Yang MSc Thesis Presentation

Name: Yilin Yang
Date: Friday, March 22nd
Time: 4 pm – 5 pm
Location: ICCS 304

Zoom: https://ubc.zoom.us/j/2126372890?pwd=Q0wvQ0w0WkdGNWcvakVDTzRjRWF0QT09&omn=68969447753
Meeting ID: 212 637 2890
Passcode: 268852
Supervisors: Mijung Park and Xiaoxiao Li

Title: Differentially Private Neural Tangent Kernels for Privacy-Preserving Data Generation and Distillation

Abstract:
With the increasing interest in deep learning, data safety concerns have become more prevalent as we rely more on artificial intelligence. Adversaries can obtain sensitive information through various attacks, which dramatically discourages patients and clients from contributing invaluable data that could benefit research. This problem underscores the need for a rigorous privacy notion, and in recent years Differential Privacy (DP) has become recognized as the gold standard. Within DP data generation, Maximum Mean Discrepancy (MMD) is a particularly useful distance metric: when used with finite-dimensional features, it allows us to summarize and privatize the data distribution once, and then reuse that summary throughout generator training without further privacy loss. An important question in this framework is which features are useful for distinguishing between real and synthetic data distributions, and whether those features enable us to generate high-quality synthetic data. This work considers the features of neural tangent kernels (NTKs), more precisely empirical NTKs (e-NTKs). We find that, perhaps surprisingly, the expressiveness of untrained e-NTK features is comparable to that of perceptual features from models pre-trained on public data. As a result, our method improves the privacy-accuracy trade-off compared to other state-of-the-art methods, without relying on any public data, as demonstrated on several tabular and image benchmark datasets. In addition, we extend the NTK approach to data distillation (DD) in federated learning (FL) settings, where we condense sensitive information into a small set of images for deep learning training in a DP manner; we show that our method obtains meaningful results even under class imbalance and on spuriously correlated image datasets.
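
For attendees unfamiliar with the framework the abstract describes, the sketch below illustrates the "summarize and privatize once, then reuse" idea behind MMD with finite-dimensional features. It is a minimal illustration, not the thesis's implementation: the random feature map (standing in for e-NTK features), the noise multiplier, and all other names are hypothetical, and the Gaussian-mechanism calibration shown assumes per-record features normalized to unit norm.

# Minimal sketch (assumed, illustrative names throughout; not the thesis code).
import torch

torch.manual_seed(0)
d_data, d_feat, n = 10, 32, 1000

# Hypothetical finite-dimensional feature map, standing in for e-NTK features.
proj = torch.randn(d_data, d_feat)
def feature_map(x):
    # Unit-normalize so each record's contribution (L2 sensitivity) is bounded.
    f = torch.tanh(x @ proj)
    return f / f.norm(dim=1, keepdim=True)

# Step 1: summarize the sensitive data ONCE as a mean feature embedding.
real = torch.randn(n, d_data)          # stands in for the sensitive dataset
mu_real = feature_map(real).mean(0)

# Step 2: privatize the summary ONCE with the Gaussian mechanism.
# Replacing one record changes the mean of unit-norm features by at most 2/n.
sensitivity = 2.0 / n
sigma = 1.0                            # noise multiplier for the target (epsilon, delta)
mu_dp = mu_real + torch.randn(d_feat) * sigma * sensitivity

# Step 3: train a generator against the fixed private summary.
# By the post-processing property of DP, this loop incurs no further privacy loss.
gen = torch.nn.Sequential(torch.nn.Linear(16, 64), torch.nn.ReLU(),
                          torch.nn.Linear(64, d_data))
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for step in range(2000):
    fake = gen(torch.randn(256, 16))
    # Squared MMD with an explicit finite-dimensional feature map.
    loss = (feature_map(fake).mean(0) - mu_dp).pow(2).sum()
    opt.zero_grad(); loss.backward(); opt.step()

The key design point the talk builds on is visible in steps 1 and 2: the sensitive data is touched exactly once, so the generator can be trained for arbitrarily many iterations against the noisy summary without spending additional privacy budget.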