
A Primer on Private Statistics – Part I

By Gautam Kamath and Jonathan Ullman

Differentially private statistics is a very lively research area, and has seen a lot of activity in the last couple of years. While the phrasing is a slight departure from previous work, which focused on estimation with worst-case datasets, it turns out that the differences are often superficial. In a short series of blog posts, we hope to educate readers on some of the recent advancements in this area, as well as shed light on some of the connections between the old and the new. We’ll describe the settings, cover a couple of technical examples, and give pointers to some other directions in the area. Thanks to Adam Smith for helping kick off this project, to Clément Canonne, Aaron Roth, and Thomas Steinke for helpful comments, and to Luca Trevisan for his LaTeX2WP script.

1. Introduction

Statistics and machine learning are now ubiquitous in data analysis. Given a dataset, one immediately wonders what it allows us to infer about the underlying population. However, modern datasets don’t exist in a vacuum: they often contain sensitive information about the individuals they represent. Without proper care, statistical procedures will result in gross violations of privacy. Motivated by the shortcomings of ad hoc methods for data anonymization, Dwork, McSherry, Nissim, and Smith introduced the celebrated notion of differential privacy [DMNS06].

From its inception, some of the driving motivations for differential privacy were applications in statistics and the social sciences, notably disclosure limitation for the US Census. And yet, the lion’s share of differential privacy research has taken place within the computer science community. As a result, the specific applications being studied are often not formulated using statistical terminology, or even as statistical problems. Perhaps most significantly, much of the early work in computer science (though definitely not all) focused on estimating some property of a dataset rather than estimating some property of an underlying population.

Although the earliest works exploring the interaction between differential privacy and classical statistics go back to at least 2009 [VS09, FRY10], the emphasis on differentially private statistical inference in the computer science literature is somewhat more recent. However, while earlier results on differential privacy did not always formulate problems in a statistical language, statistical inference was a key motivation for most of this work. As a result, many of the techniques that were developed have direct applications in statistics, for example, establishing minimax rates for estimation problems.

The purpose of this series of blog posts is to highlight some of those results in the computer science literature, and present them in a more statistical language. Specifically, we will discuss:

  • Tight minimax lower bounds for privately estimating the mean of a multivariate distribution over {{\mathbb R}^d}, using the technique of tracing attacks developed in [BUV14,DSSUV15, BSU17, SU17a, SU17b, KLSU19].

     

  • Upper bounds for estimating a distribution in Kolmogorov distance, using the ubiquitous binary-tree mechanism introduced in [DNPR10,CSS11].

In particular, we hope to encourage computer scientists working on differential privacy to pay more attention to the applications of their methods in statistics, and share with statisticians many of the powerful techniques that have been developed in the computer science literature.

 

1.1. Formulating Private Statistical Inference

Essentially every differentially private statistical estimation task can be phrased using the following setup. We are given a dataset {X = (X_1, \dots, X_n)} of size {n}, and we wish to design an algorithm {M \in \mathcal{M}}, where {\mathcal{M}} is the class of mechanisms that are both:

  1. differentially private, and
  2. accurate, either in expectation or with high probability, according to some task-specific measure.

A few comments about this framework are in order. First, although the accuracy requirement is stochastic in nature (i.e., an algorithm might not be accurate depending on the randomness of the algorithm and the data generation process), the privacy requirement is worst-case in nature. That is, the algorithm must protect privacy for every dataset {X}, even those we believe are very unlikely.

Second, the accuracy requirement is stated rather vaguely. This is because the notion of accuracy of an algorithm is slightly more nuanced, depending on whether we are concerned with empirical or population statistics. A particular emphasis of these blog posts is to explore the difference (or, as we will see, the lack of a difference) between these two notions of accuracy. The former estimates a quantity of the observed dataset, while the latter estimates a quantity of an unobserved distribution which is assumed to have generated the dataset.

More precisely, the former can be phrased in terms of empirical loss, of the form:

\displaystyle \min_{M \in \mathcal{M}}~\max_{X \in \mathcal{X}}~\mathop{\mathbb E}_M(\ell(M(X), f(X))),

where {\mathcal{M}} is some class of randomized estimators (e.g., differentially private estimators), {\mathcal{X}} is some class of datasets, {f} is some quantity of interest, and {\ell} is some loss function. That is, we’re looking to find an estimator that has small expected loss on any dataset in some class.

In contrast, statistical minimax theory looks at statements about population loss, of the form:

\displaystyle \min_{M \in \mathcal{M}}~\max_{P \in \mathcal{P}}~\mathop{\mathbb E}_{X \sim P, M}(\ell(M(X),f(P))),

where {\mathcal{P}} is some family of distributions over datasets (typically consisting of i.i.d. samples). That is, we’re looking to find an estimator that has small expected loss on random data from any distribution in some class. In particular, note that the randomness in this objective additionally includes the data generating procedure {X \sim P}.

These two formulations are formally very different in several ways. First, the empirical formulation requires an estimator to have small loss on worst-case datasets, whereas the statistical formulation only requires the estimator to have small loss on average over datasets drawn from certain distributions. Second, the statistical formulation requires that we estimate the unknown quantity {f(P)}, and thus necessitates a solution to the non-private estimation problem. On the other hand, the empirical formulation only asks us to estimate the known quantity {f(X)}, and thus if there were no privacy constraint it would always be possible to compute {f(X)} exactly. Third, typically in the statistical formulation, we require that the dataset is drawn i.i.d., which means that we are more constrained when proving lower bounds for estimation than we are in the empirical problem.

However, in practice (more precisely, in the practice of doing theoretical research), these two formulations are more alike than they are different, and results about one formulation often imply results about the other formulation. On the algorithmic side, classical statistical results will often tell us that {\ell(f(X),f(P))} is small, in which case algorithms that guarantee {\ell(M(X),f(X))} is small also guarantee {\ell(M(X),f(P))} is small.
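For concreteness, here is a small numerical illustration of this transfer (ours, not from the original post), previewing the mean-estimation example of Section 3 with {\ell_2^2} loss: since {\|M(X) - f(P)\|_2^2 \leq 2\|M(X) - f(X)\|_2^2 + 2\|f(X) - f(P)\|_2^2}, a mechanism with small empirical loss inherits a population guarantee up to the classical sampling error. All names and parameter choices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 1000, 20

# Population P: a product distribution on {-1,+1}^d with (unknown) mean mu.
mu = rng.uniform(-1, 1, size=d)
X = np.where(rng.random((n, d)) < (1 + mu) / 2, 1.0, -1.0)

f_of_X = X.mean(axis=0)                          # empirical quantity f(X)
M_of_X = f_of_X + rng.normal(0, 0.05, size=d)    # stand-in for a (private) estimate of f(X)

empirical_loss = np.sum((M_of_X - f_of_X) ** 2)  # loss against the empirical quantity
sampling_error = np.sum((f_of_X - mu) ** 2)      # classical estimation error, about d/n
population_loss = np.sum((M_of_X - mu) ** 2)     # loss against the population quantity

# population_loss <= 2 * (empirical_loss + sampling_error), so a guarantee on the
# empirical loss transfers to a guarantee on the population loss.
print(empirical_loss, sampling_error, population_loss)
```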

Moreover, typical lower bound arguments for empirical quantities are often statistical in nature. These typically involve constructing some simple “hard distribution” over datasets such that no private algorithm can estimate well on average over this distribution, and thus these lower bound arguments also apply to estimating population statistics for some simple family of distributions. We will proceed to give some examples of estimation problems that were originally studied by computer scientists with the empirical formulation in mind. These results either implicitly or explicitly provide solutions to the corresponding population versions of the same problems; our goal is to spell out and illustrate these connections.

2. DP Background

Let {X = (X_1,X_2,\dots,X_n) \in \mathcal{X}^n} be a collection of {n} samples, where each individual sample comes from the domain {\mathcal{X}}. We say that two datasets {X,X' \in \mathcal{X}^n} are adjacent, denoted {X \sim X'}, if they differ in at most one individual sample. Intuitively, a randomized algorithm {M}, which is often called a mechanism for historical reasons, is differentially private if the distributions of {M(X)} and {M(X')} are similar for every pair of adjacent datasets {X,X'}.

Definition 1 ([DMNS06]) A mechanism {M \colon \mathcal{X}^n \rightarrow \mathcal{R}} is {(\epsilon,\delta)}-differentially private if for every pair of adjacent datasets {X \sim X'}, and every (measurable) {R \subseteq \mathcal{R}},

\displaystyle \mathop{\mathbb P}(M(X) \in R) \leq e^{\epsilon} \cdot \mathop{\mathbb P}(M(X') \in R) + \delta.

 

We let {\mathcal{M}_{\epsilon,\delta}} denote the set of mechanisms that satisfy {(\epsilon,\delta)}-differential privacy.

Remark 1 To simplify notation, and to maintain consistency with the literature, we adopt the convention of defining the mechanism only for a fixed sample size {n}. What this means in practice is that the mechanisms we describe treat the sample size {n} as public information that need not be kept private. While one could define a more general model where {n} is not fixed, it wouldn’t add anything to this discussion other than additional complexity.

Remark 2 In these blog posts, we stick to the most general formulation of differential privacy, so-called approximate differential privacy, i.e., {(\epsilon,\delta)}-differential privacy for {\delta > 0}, essentially because this is the notion that captures the widest variety of private mechanisms. Almost all of what follows would apply equally well, with minor technical modifications, to the slightly stricter notions of concentrated differential privacy [DR16, BS16, BDRS18], Rényi differential privacy [Mir17], and Gaussian differential privacy [DRS19]. While so-called pure differential privacy, i.e., {(\epsilon,0)}-differential privacy, has also been studied extensively, this notion is artificially restrictive and excludes many differentially private mechanisms.

A key property of differential privacy that helps when designing efficient estimators is closure under post-processing:

Lemma 2 (Post-Processing [DMNS06])  If {M \colon \mathcal{X}^n \rightarrow \mathcal{R}} is {(\epsilon,\delta)}-differentially private and {M' \colon \mathcal{R} \rightarrow \mathcal{R}'} is any randomized algorithm, then {M' \circ M} is {(\epsilon,\delta)}-differentially private.

The estimators we present in this work will use only one tool for achieving differential privacy, the Gaussian Mechanism.

Lemma 3 (Gaussian Mechanism) Let {f \colon \mathcal{X}^n \rightarrow {\mathbb R}^d} be a function and let

\displaystyle \Delta_{f} = \sup_{X\sim X'} \| f(X) - f(X') \|_2

denote its {\ell_2}-sensitivity. The Gaussian mechanism

\displaystyle M(X) = f(X) + \mathcal{N}\left(0 , \frac{2 \log(2/\delta)}{\epsilon^2} \cdot \Delta_{f}^2 \cdot {\mathbb I}_{d \times d} \right)

satisfies {(\epsilon,\delta)}-differential privacy.
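In code, the Gaussian mechanism is essentially a one-liner. The sketch below is ours (the function name and interface are not from the post); it adds noise calibrated exactly as in Lemma 3 and assumes the caller supplies a correct bound on the {\ell_2}-sensitivity {\Delta_f}.

```python
import numpy as np

def gaussian_mechanism(f_of_X, l2_sensitivity, epsilon, delta, rng=None):
    """Release f(X) + N(0, sigma^2 I) with sigma^2 = 2*log(2/delta)*Delta_f^2 / epsilon^2,
    which is (epsilon, delta)-DP by Lemma 3, provided l2_sensitivity upper bounds Delta_f."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.sqrt(2.0 * np.log(2.0 / delta)) * l2_sensitivity / epsilon
    return np.asarray(f_of_X) + rng.normal(0.0, sigma, size=np.shape(f_of_X))
```

For example, a counting query {f(X) = \sum_i \mathbb{1}[X_i \in S]} changes by at most {1} when one sample is replaced, so gaussian_mechanism(count, 1.0, epsilon, delta) releases a private count.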

3. Mean Estimation in {{\mathbb R}^d}

Let’s take a dive into the problem of private mean estimation for some family {\mathcal{P}} of multivariate distributions over {{\mathbb R}^d}. This problem has been studied for various families {\mathcal{P}} and various choices of loss function. Here we focus on perhaps the simplest variant of the problem, in which {\mathcal{P}} contains distributions of bounded support {[\pm 1]^d} and the loss is the {\ell_2^2} error. We emphasize, however, that the methods we discuss here are quite versatile and can be used to derive minimax bounds for other variants of the mean-estimation problem.

Note that, by a simple argument, the non-private minimax rate for this class is achieved by the empirical mean {\overline{X} = \frac{1}{n}\sum_{i=1}^{n} X_i}: writing {\mu} for the mean of {P}, each coordinate of {X_i} has variance at most {1} (with equality when that coordinate is uniform on {\{\pm 1\}}), so

\displaystyle \max_{P \in \mathcal{P}} \mathop{\mathbb E}_{X_{1 \cdots n} \sim P}(\| \overline{X} - \mu\|_2^2) = \frac{d}{n}. \ \ \ \ \ (1)

The main goal of this section is to derive the minimax bound

\displaystyle \min_{M \in \mathcal{M}_{\epsilon,\frac{1}{n}}} \max_{P \in \mathcal{P}} \mathop{\mathbb E}_{X_{1 \cdots n} \sim P}(\| M(X_{1 \cdots n}) - \mu \|_2^2) = \frac{d}{n} + \tilde\Theta\left(\frac{d^2}{\epsilon^2 n^2}\right). \ \ \ \ \ (2)

Recall that {\tilde \Theta(f(n))} refers to a function which is both {O(f(n) \log^{c_1} f(n))} and {\Omega(f(n) / \log^{c_2} f(n))} for some constants {c_1, c_2 \geq 0}. The proof of this lower bound is based on robust tracing attacks, also called membership inference attacks, which were developed in a chain of papers [BUV14, DSSUV15, BSU17, SU17a, SU17b, KLSU19]. We remark that this lower bound is almost identical to the minimax bound for mean estimation proven in the much more recent work of Cai, Wang, and Zhang [CWZ19], but it lacks the tight dependence on the parameter {\delta}, which we discuss in the following remark.

Remark 3 The choice of {\delta = 1/n} in (2) may look strange at first. For the upper bound this choice is arbitrary: as we will see, we can upper bound the rate for any {\delta > 0} at the cost of a factor of {O(\log(1/\delta))}. The lower bound applies only when {\delta \leq 1/n}. Note that the rate is qualitatively different when {\delta \gg 1/n}. However, we emphasize that {(\epsilon,\delta)}-differential privacy is not a meaningful privacy notion unless {\delta \ll 1/n}. In particular, the mechanism that randomly outputs {\delta n} elements of the sample satisfies {(0,\delta)}-differential privacy, yet when {\delta \gg 1/n} it completely violates the privacy of many individuals in the dataset (see the sketch below). Moreover, taking the empirical mean of these {\delta n} samples gives rate {d/(\delta n)}, which would violate our lower bound when {\delta} is large enough. On the other hand, we would expect the minimax rate to become slower when {\delta \ll 1/n}. This expectation is, in fact, correct; however, the proof we present does not give the tight dependence on the parameter {\delta}. See [SU17a] for a refinement that can obtain the right dependence on {\delta}, and [CWZ19] for the details of how to apply this refinement in the i.i.d. setting.
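The mechanism from the remark is easy to write down explicitly. Here is a sketch (ours, not the authors’) of releasing {\delta n} random rows; it is {(0,\delta)}-differentially private, yet it publishes those rows verbatim, which is only tolerable if {\delta \ll 1/n}.

```python
import numpy as np

def release_random_rows(X, delta, rng=None):
    """Output roughly a delta fraction of the rows of X, chosen uniformly at random.
    This is (0, delta)-DP: changing one row only affects the output when that row is
    selected, which happens with probability about delta. If delta >> 1/n, many
    people's data is published in the clear, so the formal guarantee is meaningless."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(X)
    k = int(delta * n)
    idx = rng.choice(n, size=k, replace=False)
    return np.asarray(X)[idx]
```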

3.1. A Simple Upper Bound

Theorem 4 For every {n \in {\mathbb N}}, and every {\epsilon,\delta > 0}, there exists an {(\epsilon,\delta)}-differentially private mechanism {M} such that

\displaystyle \max_{P \in \mathcal{P}} \mathop{\mathbb E}_{X_{1 \cdots n} \sim P}(\| M(X_{1 \cdots n}) - \mu \|_2^2) \leq \frac{d}{n} + \frac{2 d^2 \log(2/\delta)}{\epsilon^2 n^2}. \ \ \ \ \ (3)

Proof: Define the mechanism

\displaystyle M(X_{1 \cdots n}) = \overline{X} + \mathcal{N}\left(0, \frac{2 d \log(2/\delta)}{\varepsilon^2 n^2} \cdot \mathbb{I}_{d \times d} \right). \ \ \ \ \ (4)

This mechanism satisfies {(\epsilon,\delta)}-differential privacy by Lemma 3, noting that for any pair of adjacent samples {X_{1 \cdots n}} and {X'_{1 \cdots n}}, {\| \overline{X} - \overline{X}'\|_2^2 \leq \frac{d}{n^2}}.

Let {\sigma^2 = \frac{2 d \log(2/\delta)}{\varepsilon^2 n^2}}. Note that since the Gaussian noise has mean {0} and is independent of {\overline{X} - \mu}, we have

\displaystyle \begin{array}{rll} \mathop{\mathbb E}(\| M(X_{1 \cdots n}) - \mu \|_2^2) ={} &\mathop{\mathbb E}(\| \overline{X} - \mu \|_2^2) + \mathop{\mathbb E}(\| M(X_{1 \cdots n}) - \overline{X} \|_2^2 ) \\ \leq{} &\frac{d}{n} + \mathop{\mathbb E}(\| M(X_{1 \cdots n}) - \overline{X} \|_2^2 ) \\ ={} &\frac{d}{n} + \mathop{\mathbb E}(\| \mathcal{N}(0, \sigma^2 \mathbb{I}_{d \times d}) \|_2^2 ) \\ ={} &\frac{d}{n} + \sigma^2 d \\ ={} &\frac{d}{n} + \frac{2 d^2 \log(2/\delta)}{\epsilon^2 n^2}. \end{array}

\Box
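Below is a direct numpy transcription of the mechanism in (4), together with a quick Monte Carlo comparison against the right-hand side of (3). The data-generating code and parameter choices are ours, for illustration only.

```python
import numpy as np

def private_mean(X, epsilon, delta, rng=None):
    """The mechanism in (4): the empirical mean plus Gaussian noise with
    per-coordinate variance 2*d*log(2/delta) / (epsilon^2 * n^2)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    sigma = np.sqrt(2.0 * d * np.log(2.0 / delta)) / (epsilon * n)
    return X.mean(axis=0) + rng.normal(0.0, sigma, size=d)

# Monte Carlo check of the bound (3) for a product distribution on {-1,+1}^d.
rng = np.random.default_rng(1)
n, d, eps, delta = 2000, 50, 1.0, 1e-6
mu = rng.uniform(-1, 1, size=d)
errors = []
for _ in range(200):
    X = np.where(rng.random((n, d)) < (1 + mu) / 2, 1.0, -1.0)
    errors.append(np.sum((private_mean(X, eps, delta, rng) - mu) ** 2))
bound = d / n + 2 * d**2 * np.log(2 / delta) / (eps**2 * n**2)
print(np.mean(errors), bound)   # the average loss should sit below the bound
```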

 

3.2. Minimax Lower Bounds via Tracing

Theorem 5 For every {n, d \in {\mathbb N}}, {\epsilon > 0}, and {\delta < \frac{1}{96n}}, if {\mathcal{P}} is the class of all product distributions on {\{\pm 1\}^{d}}, then for some constant {C > 0},

\displaystyle \min_{M \in \mathcal{M}_{\epsilon,\delta}} \max_{P \in \mathcal{P}} \mathop{\mathbb E}_{X_{1 \cdots n} \sim P,M}(\| M(X_{1 \cdots n}) - \mu \|_2^2) = \Omega\left(\min \left\{ \frac{d^2}{ \epsilon^2 n^2}, d \right\}\right).

 

Note that it is trivial to achieve error {d} for any distribution using the mechanism {M(X_{1 \cdots n}) \equiv 0}, so the result says that the error must be {\Omega(d^2/\epsilon^2 n^2)} whenever this error is significantly smaller than the trivial error of {d}.

Tracing Attacks.

Before giving the formal proof, we will try to give some intuition for the high-level proof strategy. The proof can be viewed as constructing a tracing attack [DSSU17] (sometimes called a membership inference attack) of the following form. There is an attacker who has the data of some individual {Y} chosen in one of two ways: either {Y} is a random element of the sample {X}, or {Y} is an independent random sample from the population {P}. The attacker is given access to the true distribution {P} and the outcome of the mechanism {M(X)}, and wants to determine which of the two is the case. If the attacker can succeed, then {M} cannot be differentially private. To see why, note that if {Y} is a member of the dataset, then the attacker will say that {Y} is in the dataset; but if we consider the adjacent dataset {X'} in which {Y} is replaced by an independent sample from {P}, then the attacker will say that {Y} is independent of the dataset. Thus, {M(X)} and {M(X')} cannot be close in the sense required by differential privacy.

Thus, the proof works by constructing a test statistic {Z = Z(M(X),Y,P)} that the attacker can use to distinguish the two possibilities for {Y}. In particular, we show that there is a distribution over populations {P} such that {\mathop{\mathbb E}(Z)} is small when {Y} is independent of {X}, but for every sufficiently accurate mechanism {M}, {\mathop{\mathbb E}(Z)} is large when {Y} is a random element of {X}.
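To see the attack in action, here is a small simulation (ours; the parameter choices are only for illustration). The released statistic is an accurate but essentially non-private estimate of the mean, and the score {Z = \langle M(X) - \mu, Y - \mu\rangle} is computed both for rows of {X} and for fresh samples from {P}. The in-sample scores are visibly larger, which is exactly the separation the lower bound exploits.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 2000

# A random product distribution on {-1,+1}^d, as in the proof of Theorem 5.
mu = rng.uniform(-1, 1, size=d)
X = np.where(rng.random((n, d)) < (1 + mu) / 2, 1.0, -1.0)       # the sample
fresh = np.where(rng.random((n, d)) < (1 + mu) / 2, 1.0, -1.0)   # independent draws from P

# An accurate but essentially non-private release: the empirical mean plus a little noise.
M_of_X = X.mean(axis=0) + rng.normal(0, 0.01, size=d)

score_in = (X - mu) @ (M_of_X - mu)       # Z_i for individuals in the sample
score_out = (fresh - mu) @ (M_of_X - mu)  # the analogous scores for individuals outside it

print(score_in.mean(), score_out.mean())
# The in-sample scores average roughly sum_j Var(X^j) / n > 0, while the out-of-sample
# scores average roughly 0, so thresholding Z identifies members of the dataset.
# Claims 2 and 3 formalize the tension: privacy caps the in-sample scores while
# accuracy forces them up.
```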

Proof of Theorem 5.

The proof that we present closely follows the one that appears in Thomas Steinke’s Ph.D. thesis [Ste16].

We start by constructing a “hard distribution” over the family of product distributions {\mathcal{P}}. Let {\mu = (\mu^1,\dots,\mu^d) \in [-1,1]^d} consist of {d} independent draws from the uniform distribution on {[-1,1]} and let {P_{\mu}} be the product distribution over {\{\pm 1\}^{d}} with mean {\mu}. Let {X_1,\dots,X_n \sim P_{\mu}} and {X = (X_1,\dots,X_n)}.

Let {M \colon \{\pm 1\}^{n \times d} \rightarrow [\pm 1]^d} be any {(\epsilon,\delta)}-differentially private mechanism and let

\displaystyle \alpha^2 = \mathop{\mathbb E}_{\mu,X,M}(\| M(X) - \mu\|_2^2 ) \ \ \ \ \ (5)

be its expected loss. We will prove the desired lower bound on {\alpha^2}.

For every element {i}, we define the random variables

\displaystyle \begin{array}{rl} Z_i ={} &Z_i(M(X),X_i,\mu) = \left\langle M(X) - \mu, X_i - \mu \right\rangle \\ Z'_{i} ={} &Z'_i(M(X_{\sim i}), X_i, \mu) = \left\langle M(X_{\sim i}) - \mu, X_i - \mu \right\rangle, \end{array}

where {X_{\sim i}} denotes {(X_1,\dots,X'_i,\dots,X_n)} where {X'_i} is an independent sample from {P_\mu}. Our goal will be to show that, privacy and accuracy imply both upper and lower bounds on {\mathop{\mathbb E}(\sum_i Z_i)} that depend on {\alpha}, and thereby obtain a bound on {\alpha^2}.

The first claim says that, when {X_i} is not in the sample given to the mechanism, the test statistic {Z'_i} has mean {0} and variance controlled by the expected {\ell_2^2} error of the mechanism.

Claim 1 For every {i}, {\mathop{\mathbb E}(Z'_i) = 0}, {\mathrm{Var}(Z'_i) \leq 4\alpha^2}, and {\|Z'_i\|_\infty \leq 4d}.

Proof: Conditioned on any value of {\mu}, {M(X_{\sim i})} is independent of {X_i}. Moreover, {\mathop{\mathbb E}(X_i - \mu) = 0}, so we have

\displaystyle \begin{array}{rll} &\mathop{\mathbb E}_{\mu,X,M}(\langle M(X_{\sim i}) - \mu, X_i - \mu \rangle) \\ = &\mathop{\mathbb E}_{\mu}(\mathop{\mathbb E}_{X,M}(\langle M(X_{\sim i}) - \mu, X_i - \mu \rangle)) \\ = &\mathop{\mathbb E}_{\mu}(\left\langle \mathop{\mathbb E}_{X,M}(M(X_{\sim i}) - \mu), \mathop{\mathbb E}_{X,M}(X_i - \mu) \right \rangle ) \\ = &\mathop{\mathbb E}_{\mu}(\left\langle \mathop{\mathbb E}_{X,M}(M(X_{\sim i}) - \mu), 0 \right \rangle ) \\ = &0. \end{array}

For the second part of the claim, since every coordinate of {X_i - \mu} is at most {2} in absolute value, i.e. {(X_i^j - \mu^j)^2 \leq 4}, we have {\mathrm{Var}(Z'_i) \leq 4 \cdot \mathop{\mathbb E}(\| M(X) - \mu \|_2^2) = 4\alpha^2}. The final part of the claim follows from the fact that every entry of {M(X_{\sim i}) - \mu} and of {X_i - \mu} is bounded by {2} in absolute value, and {Z'_i} is a sum of {d} products of such entries, each of magnitude at most {4}, so its absolute value is always at most {4d}. \Box

The next claim says that, because {M} is differentially private, {Z_i} has similar expectation to {Z'_i}, and thus its expectation is also small.

Claim 2 {\mathop{\mathbb E}(\sum_{i=1}^{n} Z_i) \leq 4n\alpha \epsilon + 8n \delta d.}

Proof: The proof is a direct calculation using the following inequality, whose proof is relatively simple using the definition of differential privacy:

\displaystyle \mathop{\mathbb E}(Z_i) \leq \mathop{\mathbb E}(Z'_i) + 2\epsilon \sqrt{\mathrm{Var}(Z'_i)} + 2\delta \| Z'_i \|_\infty.

Given the inequality and Claim 1, we have

\displaystyle \mathop{\mathbb E}(Z_i) \leq 0 + (2\epsilon)(2\alpha) + (2\delta)(4d) = 4\epsilon \alpha + 8 \delta d.

The claim now follows by summing over all {i}. \Box

The final claim says that, because {M} is accurate, the expected sum of the random variables {Z_i} is large.

Claim 3 {\mathop{\mathbb E}(\sum_{i=1}^{n} Z_i) \geq \frac{d}{3} - \alpha^2.}

The proof relies on the following key lemma, whose proof we omit.

Lemma 6 (Fingerprinting Lemma [BSU17])  If {\mu \in [\pm 1]} is sampled uniformly, {X_1,\dots,X_n \in \{\pm 1\}} are sampled independently with mean {\mu}, and {f \colon \{\pm 1\}^n \rightarrow [\pm 1]} is any function, then

\displaystyle \mathop{\mathbb E}_{\mu,X}((f(X) - \mu) \cdot \sum_{i=1}^{n} (X_i - \mu)) \geq \frac{1}{3} - \mathop{\mathbb E}_{\mu,X}((f(X) - \mu)^2).

 

The lemma is somewhat technical, but for intuition, consider the case where {f(X) = \frac{1}{n}\sum_{i} X_i} is the empirical mean. In this case we have

\displaystyle \mathop{\mathbb E}_{\mu,X}\left((f(X) - \mu) \cdot \sum_{i=1}^n (X_i - \mu)\right) = \mathop{\mathbb E}_{\mu}\left(\frac{1}{n} \sum_{i=1}^n \mathop{\mathbb E}_{X}( (X_i - \mu)^2) \right) = \mathop{\mathbb E}_{\mu}(\mathrm{Var}(X_i)) = \frac{2}{3},

which is comfortably larger than the lower bound of {\frac{1}{3} - \mathop{\mathbb E}_{\mu,X}((f(X) - \mu)^2) = \frac{1}{3} - \frac{2}{3n}} guaranteed by the lemma.

The lemma says that, when {\mu} is sampled this way, any modification of {f} that reduces the correlation between {f(X)} and {\sum_i X_i} must increase the mean-squared error of {f} proportionally.
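The {2/3} computation above is easy to confirm numerically. The following Monte Carlo sketch (ours) estimates both sides of Lemma 6 for the empirical mean in one dimension.

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 50, 100_000

mu = rng.uniform(-1, 1, size=trials)         # mu ~ Uniform[-1, 1], one value per trial
X = np.where(rng.random((trials, n)) < (1 + mu[:, None]) / 2, 1.0, -1.0)  # X_i in {-1,+1}
f = X.mean(axis=1)                           # f(X) = empirical mean

lhs = np.mean((f - mu) * (X - mu[:, None]).sum(axis=1))  # E[(f(X) - mu) * sum_i (X_i - mu)]
rhs = 1 / 3 - np.mean((f - mu) ** 2)                     # the lemma's lower bound
print(lhs, rhs)   # lhs is about 2/3, safely above rhs, which is about 1/3 - 2/(3n)
```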

We now prove Claim 3.

Proof: We can apply the lemma to each coordinate of the estimate {M(X)}.

\displaystyle \begin{array}{rll} \mathop{\mathbb E}(\sum_{i=1}^{n} Z_i) ={} &\mathop{\mathbb E}(\sum_{i=1}^{n} \left\langle M(X) - \mu, X_i - \mu \right\rangle) \\ ={} &\sum_{j=1}^{d} \mathop{\mathbb E}((M^j(X) - \mu^j)\cdot \sum_{i=1}^{n} (X_i^j - \mu^j)) \\ \geq{} &\sum_{j=1}^{d} \left( \frac{1}{3} - \mathop{\mathbb E}((M^j(X) - \mu^j)^2) \right) \\ ={} &\frac{d}{3} - \mathop{\mathbb E}(\| M(X) - \mu \|_2^2) ={} \frac{d}{3} - \alpha^2. \end{array}

The inequality is Lemma 6. \Box

Combining Claims 2 and 3 gives

\displaystyle \frac{d}{3} - \alpha^2 \leq 4n\alpha \epsilon + 8n \delta d. \ \ \ \ \ (6)

Now, if {\alpha^2 \geq \frac{d}{6}} then we’re done, so we’ll assume that {\alpha^2 \leq \frac{d}{6}}. Further, by our assumption on the value of {\delta}, {8n \delta d \leq \frac{d}{12}}. In this case we can rearrange terms and square both sides to obtain

\displaystyle \alpha^2 \geq{} \frac{1}{16 \epsilon^2 n^2} \left(\frac{d}{3} - \alpha^2 - 8 n\delta d\right)^2 \geq \frac{1}{16 \epsilon^2 n^2} \left(\frac{d}{12}\right)^2 = \frac{d^2}{2304 \epsilon^2 n^2}. \ \ \ \ \ (7)

Combining the two cases for {\alpha^2} gives {\alpha^2 \geq \min\{ \frac{d}{6}, \frac{d^2}{2304 \epsilon^2 n^2} \}}, as desired.

Bibliography

[BDRS18] Mark Bun, Cynthia Dwork, Guy N. Rothblum, and Thomas Steinke. Composable and versatile privacy via truncated CDP. STOC ’18.

[BS16] Mark Bun and Thomas Steinke. Concentrated differential privacy: Simplifications, extensions, and lower bounds. TCC ’16-B.

[BSU17] Mark Bun, Thomas Steinke, and Jonathan Ullman. Make up your mind: The price of online queries in differential privacy. SODA ’17.

[BUV14] Mark Bun, Jonathan Ullman, and Salil Vadhan. Fingerprinting codes and the price of approximate differential privacy. STOC ’14.

[CSS11] T-H Hubert Chan, Elaine Shi, and Dawn Song. Private and continual release of statistics. ACM Transactions on Information and System Security, 14(3):26, 2011.

[CWZ19] T. Tony Cai, Yichen Wang, and Linjun Zhang. The cost of privacy: Optimal rates of convergence for parameter estimation with differential privacy. arXiv, 1902.04495, 2019.

[DMNS06] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to sensitivity in private data analysis. TCC ’06.

[DNPR10] Cynthia Dwork, Moni Naor, Toniann Pitassi, and Guy N. Rothblum. Differential privacy under continual observation. STOC ’10.

[DR16] Cynthia Dwork and Guy N. Rothblum. Concentrated differential privacy. arXiv, 1603.01887, 2016.

[DRS19] Jinshuo Dong, Aaron Roth, and Weijie J. Su. Gaussian differential privacy. arXiv, 1905.02383, 2019.

[DSSU17] Cynthia Dwork, Adam Smith, Thomas Steinke, and Jonathan Ullman. Exposed! A survey of attacks on private data. Annual Review of Statistics and Its Application, 4:61–84, 2017.

[DSSUV15] Cynthia Dwork, Adam Smith, Thomas Steinke, Jonathan Ullman, and Salil Vadhan. Robust traceability from trace amounts. FOCS ’15.

[FRY10] Stephen E. Fienberg, Alessandro Rinaldo, and Xiaolin Yang. Differential privacy and the risk-utility tradeoff for multi-dimensional contingency tables. PSD ’10.

[KLSU19] Gautam Kamath, Jerry Li, Vikrant Singhal, and Jonathan Ullman. Privately learning high-dimensional distributions. COLT ’19.

[Mir17] Ilya Mironov. Rényi differential privacy. CSF ’17.

[Ste16] Thomas Alexander Steinke. Upper and Lower Bounds for Privacy and Adaptivity in Algorithmic Data Analysis. PhD thesis, 2016.

[SU17a] Thomas Steinke and Jonathan Ullman. Between pure and approximate differential privacy. Journal of Privacy and Confidentiality, 7(2), 2017.

[SU17b] Thomas Steinke and Jonathan Ullman. Tight lower bounds for differentially private selection. FOCS ’17.

[VS09] Duy Vu and Aleksandra Slavković. Differential privacy for clinical trial data: Preliminary evaluations. ICDMW ’09.
