PRIVACY ENGINEERING PROGRAM

Case Study

Problem:

Deidentification (DEID) algorithms are susceptible to the same bias and privacy issues that affect other data analytics and machine learning applications. NIST is building a system that enables the evaluation and risk assessment of DEID algorithms.


Solution:

Research DEID vulnerabilities and bias to better understand how to mitigate them.

  • Create a benchmarking effort hosted by NIST to compare deidentification methods.
  • Develop a comprehensive open-source evaluation suite for diagnosing failure points of generative AI and other deidentification techniques on tabular data, with a focus on the artifacts and bias issues that arise for diverse populations (see the first sketch after this list).
  • Compare the performance of differential privacy against traditional privacy techniques on both privacy and utility (see the second sketch after this list).
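
The following sketch illustrates the kind of diagnostics such an evaluation suite can run on synthetic tabular data: a per-column utility score, a per-subgroup breakdown that surfaces bias against small populations, and a crude memorization check. It is a minimal illustration, not NIST's actual suite; the function names, column arguments, and metrics are assumptions chosen for clarity.

    # Minimal sketch of evaluation-suite diagnostics for synthetic tabular
    # data. Names and metrics are illustrative, not part of any NIST tool.
    import numpy as np
    import pandas as pd

    def marginal_utility(real: pd.DataFrame, synth: pd.DataFrame, col: str) -> float:
        """Total variation distance between the real and synthetic marginals
        of one categorical column; 0.0 = identical, 1.0 = disjoint."""
        p = real[col].value_counts(normalize=True)
        q = synth[col].value_counts(normalize=True)
        support = p.index.union(q.index)
        return 0.5 * float(np.abs(p.reindex(support, fill_value=0)
                                  - q.reindex(support, fill_value=0)).sum())

    def subgroup_utility(real, synth, col, group_col):
        """Report the marginal error per subgroup. This is how bias surfaces:
        a method can score well overall yet fail badly for small populations."""
        return {g: marginal_utility(real[real[group_col] == g],
                                    synth[synth[group_col] == g], col)
                for g in real[group_col].unique()}

    def exact_match_rate(real: pd.DataFrame, synth: pd.DataFrame) -> float:
        """Crude privacy signal: the fraction of synthetic rows that replicate
        a real record exactly, a memorization artifact of some generative models."""
        matches = synth.merge(real.drop_duplicates(), how="inner")
        return len(matches) / max(len(synth), 1)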
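
And a hedged sketch of the comparison named in the last bullet: the Laplace mechanism (differentially private) against small-cell suppression (a traditional disclosure-control rule) on a single count query. The epsilon value and suppression threshold are illustrative assumptions, not recommended settings.

    # Differential privacy vs. a traditional technique on one count query.
    from typing import Optional
    import numpy as np

    rng = np.random.default_rng(0)

    def laplace_count(true_count: int, epsilon: float) -> float:
        """Epsilon-DP release of a count; a count query has sensitivity 1,
        so Laplace noise with scale 1/epsilon suffices."""
        return true_count + rng.laplace(scale=1.0 / epsilon)

    def suppressed_count(true_count: int, threshold: int = 5) -> Optional[int]:
        """Traditional rule: publish the count only if it meets a threshold.
        No formal privacy guarantee, and small (often minority) groups vanish."""
        return true_count if true_count >= threshold else None

    true_count = 3  # e.g., members of a rare subgroup in one cell
    print(laplace_count(true_count, epsilon=1.0))  # noisy, but always published
    print(suppressed_count(true_count))            # None: utility lost entirely

The contrast mirrors the trade-off under study: differential privacy degrades every answer a little but answers everything, while suppression leaves large cells untouched and erases small ones, which is itself a bias concern for diverse populations.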

Outcome:

  • This effort has collected, evaluated, and publicly shared over 450 synthetic data samples from academia, industry, and government organizations across the world.
  • See our live work here.