My research is in data privacy and trustworthy machine learning. I am interested in designing methods to quantitatively measure the privacy risks of data processing algorithms, and in building scalable schemes for generalizable machine learning models that are also privacy-preserving, robust, interpretable, and fair. Our research analyzes the trade-offs between different pillars of trust in machine learning in practical scenarios, and resolves such conflicts with rigorous mathematical guarantees. We are currently working on many interesting problems in this domain, including trustworthy federated learning, differential privacy for machine learning, fairness versus privacy in machine learning, privacy-aware model explanations, privacy-preserving data synthesis, and quantifying the privacy risks of data analytics. Our research is supported by research awards and grants from Intel, Google, Facebook, VMware, NEC, Huawei, AI Singapore, NUS, the Singapore Ministry of Education (MoE), and the National Research Foundation (NRF).
I have open positions for PhD students and postdoctoral researchers. If you are interested, please send me your CV and a research statement.
Intel's 2023 Outstanding Researcher Award
Asian Young Scientist Fellowship 2023
Best Paper Award at ACM Conference on Fairness, Accountability, and Transparency (FAccT) 2023
NUS School of Computing Faculty Teaching Excellence Award 2023 ➙ Recent Student Feedback: [CS5562-Trustworthy Machine Learning]
Facebook Faculty Research Award 2021
IEEE Security and Privacy (S&P) Test-of-Time Award 2021
VMware Early Career Faculty Award 2021
Intel Research Award (Private AI Collaborative Research Institute) 2021
NUS Presidential Young Professorship, 2019-2023
NUS Early Career Research Award 2019
Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies 2018
Swiss National Science Foundation Fellowship 2013
Runner-up for PET Award for Outstanding Research in Privacy Enhancing Technologies 2012
Sajjad Zarifzadeh, Philippe Liu, and Reza Shokri
➙ Low-Cost High-Power Membership Inference Attacks
➙ [talk]
Oral International Conference on Machine Learning (ICML), 2024
Jiayuan Ye, Anastasia Borovykh, Soufiane Hayou, and Reza Shokri
➙ Leave-one-out Distinguishability in Machine Learning
International Conference on Learning Representations (ICLR), 2024
Niloofar Mireshghallah, Hyunwoo Kim, Xuhui Zhou, Yulia Tsvetkov, Maarten Sap, Reza Shokri, and Yejin Choi
➙ Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory
Spotlight International Conference on Learning Representations (ICLR), 2024
Jiayuan Ye, Zhenyu Zhu, Fanghui Liu, Reza Shokri, and Volkan Cevher
➙ Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks
Conference on Neural Information Processing Systems (NeurIPS), 2023
Also presented at the Theory and Practice of Differential Privacy (TPDP) workshop, 2023
Chendi Wang, Buxin Su, Jiayuan Ye, Reza Shokri, and Weijie J. Su
➙ Unified Enhancement of Privacy Bounds for Mixture Mechanisms via f-Differential Privacy
Conference on Neural Information Processing Systems (NeurIPS), 2023
Prakhar Ganesh, Hongyan Chang, Martin Strobel, and Reza Shokri
➙ On The Impact of Machine Learning Randomness on Group Fairness
➙ [talk by Prakhar Ganesh]
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2023
Best Paper Award
Hongyan Chang and Reza Shokri
➙ Bias Propagation in Federated Learning
International Conference on Learning Representations (ICLR), 2023
Zebang Shen, Jiayuan Ye, Anmin Kang, Hamed Hassani, and Reza Shokri
➙ Share Your Representation Only: Guaranteed Improvement of the Privacy-Utility Tradeoff in Federated Learning
International Conference on Learning Representations (ICLR), 2023
Jiayuan Ye and Reza Shokri
➙ Differentially Private Learning Needs Hidden State (Or Much Faster Convergence)
Conference on Neural Information Processing Systems (NeurIPS), 2022
Also presented at the Symposium on Foundations of Responsible Computing (FORC), 2022
Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Vincent Bindschaedler, and Reza Shokri
➙ Enhanced Membership Inference Attacks against Machine Learning Models
➙ [code]
ACM Conference on Computer and Communications Security (CCS), 2022
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, and Nicholas Carlini
➙ Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
ACM Conference on Computer and Communications Security (CCS), 2022
Media:
The Register,
TechXplore
Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, and Reza Shokri
➙ Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks
The Conference on Empirical Methods in Natural Language Processing (EMNLP), 2022
Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, and Florian Tramèr
➙ What Does it Mean for a Language Model to Preserve Privacy?
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
Media:
MIT Technology Review
Neel Patel, Reza Shokri, and Yair Zick
➙ Model Explanations with Differential Privacy
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
Rishav Chourasia*, Jiayuan Ye*, and Reza Shokri
➙ Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent
➙ [talk by Jiayuan Ye]
Spotlight Conference on Neural Information Processing Systems (NeurIPS), 2021
Hongyan Chang and Reza Shokri
➙ On the Privacy Risks of Algorithmic Fairness
IEEE European Symposium on Security and Privacy (EuroSP), 2021
Also presented at FTC PrivacyCon, 2021
Reza Shokri, Martin Strobel, and Yair Zick
➙ On the Privacy Risks of Model Explanations
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2021
Also presented at FTC PrivacyCon, 2021
Media:
Harvard Business Review
Sasi Kumar Murakonda, Reza Shokri, and George Theodorakopoulos
➙ Quantifying the Privacy Risks of Learning High-Dimensional Graphical Models
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021
Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, and Reza Shokri
➙ On Adversarial Bias and the Robustness of Fair Machine Learning
arXiv:2006.08669, 2020
Te Juin Lester Tan and Reza Shokri
➙ Bypassing Backdoor Detection Algorithms in Deep Learning
➙ [talk]
IEEE European Symposium on Security and Privacy (EuroSP), 2020
Congzheng Song and Reza Shokri
➙ Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning
ACM ASIA Conference on Computer and Communications Security (ASIACCS), 2020
Anshul Aggarwal, Trevor Carlson, Reza Shokri, and Shruti Tople
➙ SOTERIA: In Search of Efficient Neural Networks for Private Inference
arXiv:2007.12934, 2020
Liwei Song, Reza Shokri, and Prateek Mittal
➙ Privacy Risks of Securing Machine Learning Models against Adversarial Examples
➙ [talk by L. Song]
ACM Conference on Computer and Communications Security (CCS), 2019
Milad Nasr, Reza Shokri, and Amir Houmansadr
➙ Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
➙ [code]
➙ [talk by M. Nasr]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2019
Hongyan Chang, Virat Shejwalkar, Reza Shokri, and Amir Houmansadr
➙ Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer
arXiv:1912.11279, 2019
Milad Nasr, Reza Shokri, and Amir Houmansadr
➙ Machine Learning with Membership Privacy using Adversarial Regularization
➙ [code]
➙ [talk by A. Houmansadr]
ACM Conference on Computer and Communications Security (CCS), 2018.
Tyler Hunt, Congzheng Song, Reza Shokri, Vitaly Shmatikov, and Emmett Witchel
➙ Chiron: Privacy-preserving Machine Learning as a Service
arXiv:1803.05961, 2018
Media:
ZDNet
Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov
➙ Membership Inference Attacks against Machine Learning Models
➙ [code]
➙ [tool]
➙ [datasets]
➙ [talk]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2017.
The Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies 2018.
Vincent Bindschaedler, Reza Shokri, and Carl Gunter
➙ Plausible Deniability for Privacy-Preserving Data Synthesis
➙ [code]
VLDB Endowment International Conference on Very Large Data Bases (PVLDB), 2017.
Vincent Bindschaedler and Reza Shokri.
➙ Synthesizing Plausible Privacy-Preserving Location Traces
➙ [code]
➙ [talk by V. Bindschaedler]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2016.
Reza Shokri, George Theodorakopoulos, and Carmela Troncoso
➙ Privacy Games along Location Traces: A Game-Theoretic Framework for Optimizing Location Privacy
ACM Transactions on Privacy and Security (TOPS), 2016.
Richard McPherson, Reza Shokri, and Vitaly Shmatikov
➙ Defeating Image Obfuscation with Deep Learning
arXiv:1609.00408, 2016
Media:
The Register,
WIRED,
The Telegraph,
BBC,
and more
Reza Shokri and Vitaly Shmatikov.
➙ Privacy-Preserving Deep Learning
➙ [code]
ACM Conference on Computer and Communications Security (CCS), 2015.
(Invited to) Conference on Communication, Control, and Computing (Allerton), 2015
Federated Learning
➙ Media: MIT Technology Review
Reza Shokri.
➙ Privacy Games: Optimal User-Centric Data Obfuscation
Privacy Enhancing Technologies Symposium (PETS), 2015
Igor Bilogrevic, Kevin Huguenin, Stephan Mihaila, Reza Shokri, and Jean-Pierre Hubaux.
➙ Predicting Users' Motivations behind Location Check-Ins and Utility Implications of Privacy Protection Mechanisms
Network and Distributed System Security (NDSS) Symposium, 2015
Arthur Gervais, Reza Shokri, Adish Singla, Srdjan Capkun, and Vincent Lenders.
➙ Quantifying Web-Search Privacy
➙ [code]
ACM Conference on Computer and Communications Security (CCS), 2014
Reza Shokri, George Theodorakopoulos, Panos Papadimitratos, Ehsan Kazemi, and Jean-Pierre Hubaux.
➙ Hiding in the Mobile Crowd: Location Privacy through Collaboration
IEEE Transactions on Dependable and Secure Computing (TDSC), 2014
Reza Shokri.
➙ Quantifying and Protecting Location Privacy
PhD Thesis, EPFL, 2013
Reza Shokri, George Theodorakopoulos, Carmela Troncoso, Jean-Pierre Hubaux, and Jean-Yves Le Boudec.
➙ Protecting Location Privacy: Optimal Strategy against Localization Attacks
➙ [code]
ACM Conference on Computer and Communications Security (CCS), 2012
Reza Shokri, George Theodorakopoulos, Jean-Yves Le Boudec, and Jean-Pierre Hubaux.
➙ Quantifying Location Privacy
➙ [code]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2011
IEEE Security and Privacy (S&P) Test-of-Time Award 2021.
Runner-up for the Outstanding Research Award in Privacy Enhancing Technologies 2012.
➙ CS5562 (Sem 1: 2023-24): Trustworthy Machine Learning (robustness, privacy, and fairness in machine learning)
➙ CS3235 (Sem 1: 2023-24): Computer Security
CS5562 (Sem 1: 2022-23): Trustworthy Machine Learning (robustness, privacy, and fairness in machine learning)
CS3235 (Sem 1: 2022-23): Computer Security
CS5562 (Sem 1: 2021-22): Trustworthy Machine Learning
CS3235 (Sem 1: 2021-22): Computer Security
CS6283 (Sem 1: 2020-21): Topics in Computer Science: Trustworthy Machine Learning
CS3235 (Sem 1: 2020-21): Computer Security
CS3235 (Sem 2: 2019-20): Computer Security (secure channels, software security, OS security, privacy)
CS6231 (Sem 1: 2019-20): Topics in Computer Security: Adversarial Machine Learning (privacy, robustness)
CS4257 (Sem 2: 2018-19): Algorithmic Foundations of Privacy (anonymous communication, data privacy, private computation)
CS6231 (Sem 1: 2018-19): An Adversarial View of Privacy (inference attacks)
CS4257 (Sem 2: 2017-18): Algorithmic Foundations of Privacy (anonymous communication, data privacy, private computation)
CS6101 (Sem 1: 20**): Privacy and Security in Machine Learning (trustworthy machine learning)
Tutorial: Auditing Data Privacy in Machine Learning: A Comprehensive Introduction, ACM CCS, November 2022
Tutorial: Quantitative Reasoning About Data Privacy in Machine Learning (with Chuan Guo), ICML, July 2022
Auditing Data Privacy for Machine Learning, Usenix Enigma Conference, February 2022
Data Privacy and Trustworthy Machine Learning, CVPR Workshop on Responsible Computer Vision, June 2021
Data Privacy in Machine Learning, Google APAC Academic Research Talk Series, April 2021
Modeling Privacy Erosion: Differential Privacy Dynamics in Machine Learning, AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI), February 2021
Privacy at the Intersection of Trustworthy Machine Learning, NeurIPS Workshop on Privacy Preserving Machine Learning (PriML and PPML Joint Edition), December 2020
NeurIPS Workshop on ML Retrospectives, Surveys & Meta-Analyses (ML-RSA), December 2020
Data Privacy in Machine Learning, EMNLP Workshop on Privacy in Natural Language Processing (PrivateNLP), November 2020
In Search of Lost Performance in Privacy-Preserving Deep Learning, ECCV Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS), August 2020
Trustworthy Machine Learning, IPM Advanced School on Computing: Artificial Intelligence, August 2020
Trustworthy Machine Learning, AI Singapore Summer School, August 2020
Cronus: Robust Knowledge Transfer for Federated Learning, Google Workshop on Federated Learning and Analytics, July 2020
Data Privacy in Machine Learning, Future of Privacy Forum webinar on Privacy Preserving Machine Learning: New Research on Data and Model Privacy, June 2020
[Keynote] International Conference on Information Systems Security (ICISS), India, December 2019
IETF Privacy Enhancements and Assessments Research Group, Singapore, November 2019
AI Singapore Summer School, July 2019
INRIA Grenoble, France, July 2019
INRIA Saclay and LIX, France, July 2019
[Keynote] ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec), Paris, France, July 2019
EPFL Summer Research Institute, Switzerland, June 2019
ForMaL: DigiCosme Spring School on Formal Methods and Machine Learning, ENS Paris-Saclay, France, June 2019