Differential privacy has grown from a theoretical idea into a real-world privacy solution with critical applications across government, tech, and research. In a recent expert interview, Tabitha Ogilvie, a PhD student at Royal Holloway specializing in fully homomorphic encryption (FHE) and differential privacy, shared her insights on how these technologies interact, the current state of deployment, and where the field is heading.
This article distills key takeaways from the conversation, offering a nuanced look into the intersection of encryption and privacy, real-world deployments, and challenges in balancing data utility and protection.
How Differential Privacy and FHE Compare
Although FHE and differential privacy are both privacy-enhancing technologies, they address different parts of the data lifecycle:
- FHE encrypts data so computations can be performed without ever decrypting it—data remains hidden during processing.
- Differential privacy, by contrast, focuses on protecting the output of computations by adding calibrated noise, preventing any individual data point from being inferred.
As Tabitha noted:
“FHE ensures full secrecy during computation, whereas differential privacy allows data to be processed openly but protects the outcome.”
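To make “calibrated noise” concrete, here is a minimal sketch of the classic Laplace mechanism in Python. The toy dataset, query, and parameter values are illustrative assumptions, not anything discussed in the interview.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return an epsilon-DP estimate of true_value.

    Noise is drawn from Laplace(0, sensitivity / epsilon), the standard
    calibration for epsilon-differential privacy of a numeric query.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Illustrative counting query (sensitivity 1) over a toy dataset.
ages = [34, 29, 41, 57, 23]
true_count = sum(1 for a in ages if a > 30)  # exact answer: 3
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(f"exact={true_count}, private estimate={noisy_count:.2f}")
```

Anyone seeing only the noisy output cannot tell with confidence whether any single person's record was included, which is the guarantee the quote above refers to.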
Can These Technologies Work Together?
Tabitha shared her recent research presented at CT-RSA, where she explored whether the noise already present in FHE schemes might satisfy the conditions of differential privacy. In certain cases, yes—FHE noise can help fulfill differential privacy guarantees, creating a potential “two-for-one” privacy model.
“Under some conditions, the noise from homomorphic encryption can give you differential privacy too. That’s an exciting possibility for dual protection.”
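Her paper analyses the actual noise distributions of FHE schemes, which is considerably more involved than anything shown here. Purely as a toy illustration of the "two-for-one" idea, the sketch below assumes the residual noise on a decrypted result were Gaussian with a known standard deviation and asks what (ε, δ) guarantee that noise level would buy under the classical Gaussian-mechanism bound; all numbers are made up.

```python
import math

def gaussian_mechanism_epsilon(noise_std: float, sensitivity: float, delta: float) -> float:
    """Classical Gaussian-mechanism bound: noise_std >= sensitivity * sqrt(2 ln(1.25/delta)) / eps
    gives (eps, delta)-DP, so solve for the eps that a given noise level would provide.
    (The bound is only meaningful when the resulting eps is below 1.)"""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / noise_std

# Illustrative numbers only: pretend the scheme's inherent noise has std 50
# on a query with sensitivity 1, and we target delta = 1e-6.
eps = gaussian_mechanism_epsilon(noise_std=50.0, sensitivity=1.0, delta=1e-6)
print(f"inherent noise would already provide roughly ({eps:.3f}, 1e-6)-DP")
```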
Real-World Applications of Differential Privacy
Differential privacy is no longer just academic. It’s already deployed in key sectors:
Government:
- US Census Bureau uses differential privacy to publish population statistics while preventing re-identification of individuals.
- UK Office for National Statistics (ONS) has piloted DP in releasing public datasets, especially where detailed location or demographic data could become sensitive.
Tech Industry:
- Apple implements local differential privacy (LDP) to collect usage statistics (like most-used emojis) without accessing personal data.
- Google has deployed LDP in its Chrome browser via the RAPPOR system.
Tabitha explains:
“Apple doesn’t want to know which emoji you use—they want aggregate statistics. With LDP, users add their own noise before sending data.”
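A simple way to see how users can "add their own noise" before anything leaves the device is randomized response, sketched below. This is a generic local-DP illustration, not Apple's actual protocol or Google's RAPPOR, which layer Bloom filters and additional randomization on top of the same core idea; the population size and true proportion are made up.

```python
import math
import random

def randomized_response(true_bit: int, epsilon: float) -> int:
    """Each user perturbs their own bit before it ever leaves the device.

    Report the true bit with probability e^eps / (e^eps + 1) and the flipped
    bit otherwise; this satisfies epsilon-local differential privacy.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_bit if random.random() < p_truth else 1 - true_bit

def debias_estimate(reports: list[int], epsilon: float) -> float:
    """The aggregator recovers an unbiased estimate of the true proportion."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# Illustrative example: 10,000 users, 30% of whom "use the emoji".
random.seed(0)
truths = [1 if random.random() < 0.3 else 0 for _ in range(10_000)]
reports = [randomized_response(b, epsilon=1.0) for b in truths]
print(f"estimated share of users: {debias_estimate(reports, epsilon=1.0):.3f}")
```

Each individual report is noisy and deniable, yet the aggregate statistic comes out close to the true 30%.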
Key Challenges in Deploying Differential Privacy
1. Balancing Accuracy and Privacy
DP introduces a fundamental trade-off:
- More noise = stronger privacy, lower accuracy.
- Less noise = more useful insights, weaker privacy.
Finding this balance remains an ongoing research challenge:
“You can do things very well if you don’t care about privacy—or very privately if you don’t care about results. Finding the middle ground is hard.”
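The trade-off is visible directly in the Laplace mechanism's noise scale, which is sensitivity divided by ε. A quick illustrative calculation (the sensitivity of 1 assumes a simple counting query):

```python
import math

SENSITIVITY = 1.0  # e.g., a counting query changes by at most 1 per person

# Smaller epsilon -> larger noise scale -> stronger privacy, noisier answers.
for epsilon in (0.1, 0.5, 1.0, 5.0, 10.0):
    scale = SENSITIVITY / epsilon       # Laplace scale parameter b
    std_dev = math.sqrt(2) * scale      # standard deviation of Laplace(0, b)
    print(f"epsilon={epsilon:>4}: noise standard deviation = {std_dev:.2f}")
```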
2. Setting the Privacy Budget (ε)
The parameter ε (epsilon) defines the privacy guarantee, but there’s no standard for what value is acceptable. Tabitha points out:
“It’s a bit of a zoo out there. Everyone’s making their own choices.”
Some organizations set ε values so high (e.g., 10 or more) that privacy guarantees are mathematically weak, even if they can still be labeled “differentially private.”
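One way to read a large ε: ε-DP bounds the ratio between the probabilities of any output with and without a given individual's record by e^ε, so ε = 10 permits that ratio to be enormous. A quick check:

```python
import math

# epsilon-DP bounds Pr[output | record included] / Pr[output | record excluded] by e^epsilon.
for epsilon in (0.1, 1.0, 10.0):
    print(f"epsilon={epsilon:>4}: probability ratio bounded by e^{epsilon} "
          f"= {math.exp(epsilon):,.1f}")
# epsilon=10 allows a ratio of roughly 22,026 -- mathematically almost no constraint.
```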
Future Outlook: Legal and Technical Standardization
Looking ahead, Tabitha sees two main drivers shaping the future of differential privacy:
- Standardization of ε-values: Creating global or sector-specific norms will prevent misuse of the label “differential privacy.”
- Legal mandates: Regulations like the GDPR may evolve to require DP for certain data-sharing practices.
“I wouldn’t be surprised if we see laws requiring DP when publishing sensitive datasets.”
Differential Privacy and Machine Learning
One of the most promising frontiers for DP is in machine learning, especially for large language models (LLMs) like ChatGPT. These models are known to memorize training data, which can lead to privacy leaks or IP violations.
“If a model was trained with differential privacy, it wouldn’t memorize or leak data, like New York Times articles.”
However, integrating DP into LLM training is computationally expensive and technically challenging, making it a cutting-edge area of exploration.
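For context, the standard approach here is DP-SGD: clip each training example's gradient, average, and add Gaussian noise before updating the model. The sketch below shows that core update in NumPy with made-up parameters and random stand-ins for real gradients; it illustrates why handling gradients per example is expensive, and is not a reproduction of any production training pipeline.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1):
    """One DP-SGD update: clip each example's gradient, average, add Gaussian noise.

    Computing a separate gradient per example (instead of one batch gradient)
    is a major source of extra cost when training large models with DP.
    """
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))  # cap each norm at clip_norm
    mean_grad = np.mean(clipped, axis=0)
    noise = np.random.normal(
        0.0, noise_multiplier * clip_norm / len(per_example_grads), size=mean_grad.shape
    )
    return params - lr * (mean_grad + noise)

# Toy usage with random "gradients" standing in for a real model's backward pass.
rng = np.random.default_rng(0)
params = rng.normal(size=4)
per_example_grads = [rng.normal(size=4) for _ in range(8)]
params = dp_sgd_step(params, per_example_grads)
print(params)
```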
Conclusion
Tabitha Ogilvie’s interview offers valuable insights into both the promise and limitations of differential privacy today. As she emphasizes, real deployment demands more than math—it requires policy decisions, standardization, and public transparency.
Whether in government statistics, smartphone telemetry, or AI model training, differential privacy stands as a leading privacy-preserving technique. But its effectiveness relies heavily on how its parameters are chosen and how honestly it is implemented.