We study differentially private (DP) optimization algorithms for stochastic and empirical objectives that are neither smooth nor convex, and propose methods that return a Goldstein-stationary point with sample complexity bounds that improve on prior work.
We start by providing a single-pass $(\varepsilon,\delta)$-DP algorithm that returns a Goldstein-stationary point whenever the dataset is sufficiently large, with a required dataset size smaller, by a factor depending on the dimension $d$, than that of the algorithm of Zhang et al. [2024] for this task.
We then provide a multi-pass polynomial-time algorithm that further improves the sample complexity; this is achieved by designing a sample-efficient ERM algorithm and proving that Goldstein-stationary points generalize from the empirical loss to the population loss.
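For context, recall Goldstein's notion of stationarity for a nonsmooth nonconvex objective $f$. The parameter names below ($\alpha$ for the gradient-norm bound, $\beta$ for the ball radius) follow a common convention and are an assumption here, since the text above does not fix the notation:

```latex
% A point x is an (\alpha,\beta)-Goldstein stationary point of f if some
% convex combination of (sub)gradients of f taken at points within a
% \beta-ball around x has norm at most \alpha:
\min_{g \,\in\, \mathrm{conv}\!\left(\bigcup_{y \in B_\beta(x)} \partial f(y)\right)} \|g\| \;\le\; \alpha .
```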
† Work partially done during an Apple internship.