Differential privacy is a rigorous, worst-case notion of privacy-preserving computation. Informally, a probabilistic program is differentially private if the participation of a single individual in the input database has a limited effect on the program's distribution over outputs. More technically, differential privacy is a quantitative 2-safety property that bounds the distance between the output distributions of a probabilistic program on adjacent inputs. Like many 2-safety properties, differential privacy lies outside the scope of traditional verification techniques. Existing approaches for enforcing privacy rely on intricate, non-conventional type systems or on customized relational logics. These approaches are difficult to implement and often cumbersome to use.
We present an alternative approach that verifies differential privacy by standard, non-relational reasoning on non-probabilistic programs. Our approach transforms a probabilistic program into a non-probabilistic program which simulates two executions of the original program. We prove that if the target program is correct with respect to a Hoare specification, then the original probabilistic program is differentially private. We provide a variety of examples from the differential privacy literature to demonstrate the utility of our approach. Finally, we compare our approach with existing verification techniques for privacy.
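To make the notion concrete, the sketch below shows the Laplace mechanism for a counting query, a canonical example of an epsilon-differentially private program from the literature. This is illustrative only and not taken from the paper; the function names and the use of inverse-transform sampling are my own choices.

```python
import math
import random

def laplace_noise(scale):
    # Sample from Laplace(0, scale) via inverse-transform sampling:
    # for U uniform on (-1/2, 1/2), -scale * sgn(U) * ln(1 - 2|U|)
    # is Laplace-distributed.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    return -scale * sign * math.log(1 - 2 * abs(u))

def private_count(db, predicate, epsilon):
    # Laplace mechanism: release the true count plus Laplace(1/epsilon)
    # noise. A counting query has sensitivity 1 (adjacent databases
    # differ in one record), so the release is epsilon-differentially
    # private: changing one record shifts the output distribution by
    # a factor of at most exp(epsilon).
    true_count = sum(1 for row in db if predicate(row))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller values of `epsilon` give stronger privacy but noisier answers; the verification problem addressed in the paper is proving that programs built from such mechanisms meet the epsilon-differential-privacy bound.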
| Title of host publication | Proceedings of the 2014 IEEE 27th Computer Security Foundations Symposium, CSF 2014 |
| Publisher | IEEE Computer Society |
| Number of pages | 14 |
| Publication status | Published - 2014 |
| Event | 27th IEEE Computer Security Foundations Symposium - Vienna, Austria |
| Duration | 19 Jul 2014 → 22 Jul 2014 |
| Conference | 27th IEEE Computer Security Foundations Symposium |
| Abbreviated title | CSF 2014 |
| Period | 19/07/14 → 22/07/14 |
- Differential privacy
- Hoare logic
- Probabilistic Hoare logic
- Relational Hoare logic