Reza SHOKRI


NUS Presidential Young Professor

Assistant Professor
CS Department, School of Computing
National University of Singapore (NUS)
Data Privacy and Trustworthy Machine Learning Research Lab




Email: firstname@comp.nus.edu.sg
Twitter: @rzshokri
Phone: +65 6516 4464
Office: COM2-03-60
Mailing Address: Dept. of Computer Science,
NUS School of Computing, 13 Computing Drive,
Computing 1, #03-27, Singapore 117417.

My research is in data privacy and trustworthy machine learning. I am interested in designing methods to quantitatively measure the privacy risks of data processing algorithms, and in building scalable schemes for generalizable machine learning models that are also privacy-preserving, robust, interpretable, and fair. Our research analyzes the trade-offs between different pillars of trust in machine learning in practical scenarios, and resolves such conflicts with rigorous mathematical guarantees. We are currently working on many interesting problems in this domain, including trustworthy federated learning, differential privacy for machine learning, fairness versus privacy in machine learning, privacy-aware model explanations, privacy-preserving data synthesis, and quantifying the privacy risks of data analytics. Our research is supported by research awards and grants from Intel, Google, Facebook, VMware, NEC, Huawei, AI Singapore, NUS, the Singapore Ministry of Education (MoE), and the National Research Foundation (NRF).
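To illustrate what "quantitatively measuring privacy risks" can mean in its simplest form, here is a minimal, self-contained sketch of a loss-threshold membership inference test on a toy model. This is an illustrative example only, not our lab's actual tooling or any specific method from the papers below; all data, thresholds, and names in it are made up for the demonstration.

```python
# Toy membership inference sketch: members (training points) tend to have
# lower loss under an overfit model than non-members, so a simple
# loss-threshold test can expose membership leakage.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic binary-classification data.
X = rng.normal(size=(400, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)

# Members = training set; non-members = held-out set.
X_mem, y_mem = X[:200], y[:200]
X_non, y_non = X[200:], y[200:]

model = LogisticRegression().fit(X_mem, y_mem)

def per_example_loss(model, X, y):
    # Cross-entropy loss of the model on each individual example.
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

loss_mem = per_example_loss(model, X_mem, y_mem)
loss_non = per_example_loss(model, X_non, y_non)

# Attack: guess "member" iff the loss falls below a threshold (here, the
# pooled median). Balanced accuracy above 0.5 signals membership leakage.
tau = np.median(np.concatenate([loss_mem, loss_non]))
acc = 0.5 * ((loss_mem < tau).mean() + (loss_non >= tau).mean())
print(f"membership inference accuracy: {acc:.2f}")
```

Stronger attacks (e.g., shadow models, or the calibrated and white-box attacks in the papers below) replace the single global threshold with per-example or model-aware tests, but the leakage signal being measured is the same.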

I have open positions for PhD students and postdoctoral researchers. Please send me your CV and research statement.

Selected Research Papers (see also Google Scholar and arXiv)

Jiayuan Ye and Reza Shokri
Differentially Private Learning Needs Hidden State (Or Much Faster Convergence)
Conference on Neural Information Processing Systems (NeurIPS), 2022
Also presented at the Symposium on Foundations of Responsible Computing (FORC), 2022

Jiayuan Ye, Aadyaa Maddi, Sasi Kumar Murakonda, Vincent Bindschaedler, and Reza Shokri
Enhanced Membership Inference Attacks against Machine Learning Models [code]
ACM Conference on Computer and Communications Security (CCS), 2022

Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Le, Matthew Jagielski, Sanghyun Hong, and Nicholas Carlini
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
ACM Conference on Computer and Communications Security (CCS), 2022
Media: The Register, TechXplore

Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, and Florian Tramèr
What Does it Mean for a Language Model to Preserve Privacy?
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022
Media: MIT Technology Review

Neel Patel, Reza Shokri, and Yair Zick
Model Explanations with Differential Privacy
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 2022

Fatemehsadat Mireshghallah, Kartik Goyal, Archit Uniyal, Taylor Berg-Kirkpatrick, and Reza Shokri
Quantifying Privacy Risks of Masked Language Models Using Membership Inference Attacks [code]
arXiv:2203.03929, 2022


Rishav Chourasia*, Jiayuan Ye*, and Reza Shokri
Differential Privacy Dynamics of Langevin Diffusion and Noisy Gradient Descent [talk by Jiayuan Ye]
Conference on Neural Information Processing Systems (NeurIPS), 2021 (Spotlight)

Hongyan Chang and Reza Shokri
On the Privacy Risks of Algorithmic Fairness
IEEE European Symposium on Security and Privacy (EuroSP), 2021
Also presented at FTC PrivacyCon, 2021

Reza Shokri, Martin Strobel, and Yair Zick
On the Privacy Risks of Model Explanations
AAAI/ACM Conference on AI, Ethics, and Society (AIES), 2021
Also presented at FTC PrivacyCon, 2021
Media: Harvard Business Review

Sasi Kumar Murakonda, Reza Shokri, and George Theodorakopoulos
Quantifying the Privacy Risks of Learning High-Dimensional Graphical Models
International Conference on Artificial Intelligence and Statistics (AISTATS), 2021


Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, and Reza Shokri
On Adversarial Bias and the Robustness of Fair Machine Learning
arXiv:2006.08669, 2020

Te Juin Lester Tan and Reza Shokri
Bypassing Backdoor Detection Algorithms in Deep Learning
IEEE European Symposium on Security and Privacy (EuroSP), 2020

Congzheng Song and Reza Shokri
Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning
ACM ASIA Conference on Computer and Communications Security (ASIACCS), 2020

Anshul Aggarwal, Trevor Carlson, Reza Shokri, and Shruti Tople
SOTERIA: In Search of Efficient Neural Networks for Private Inference
arXiv:2007.12934, 2020


Liwei Song, Reza Shokri, and Prateek Mittal
Privacy Risks of Securing Machine Learning Models against Adversarial Examples [talk by L. Song]
ACM Conference on Computer and Communications Security (CCS), 2019

Milad Nasr, Reza Shokri, and Amir Houmansadr
Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning [code] [talk by M. Nasr]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2019

Hongyan Chang, Virat Shejwalkar, Reza Shokri, and Amir Houmansadr
Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer
arXiv:1912.11279, 2019


Milad Nasr, Reza Shokri, and Amir Houmansadr
Machine Learning with Membership Privacy using Adversarial Regularization [code] [talk by A. Houmansadr]
ACM Conference on Computer and Communications Security (CCS), 2018

Tyler Hunt, Congzheng Song, Reza Shokri, Vitaly Shmatikov, and Emmett Witchel
Chiron: Privacy-preserving Machine Learning as a Service
arXiv:1803.05961, 2018
Media: ZDNet


Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov
Membership Inference Attacks against Machine Learning Models [code] [tool] [datasets] [talk]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2017
The Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies 2018.

Vincent Bindschaedler, Reza Shokri, and Carl Gunter
Plausible Deniability for Privacy-Preserving Data Synthesis [code]
Proceedings of the VLDB Endowment (PVLDB), 2017


Vincent Bindschaedler and Reza Shokri
Synthesizing Plausible Privacy-Preserving Location Traces [code] [talk by V. Bindschaedler]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2016

Reza Shokri, George Theodorakopoulos, and Carmela Troncoso
Privacy Games along Location Traces: A Game-Theoretic Framework for Optimizing Location Privacy
ACM Transactions on Privacy and Security (TOPS), 2016

Richard McPherson, Reza Shokri, and Vitaly Shmatikov
Defeating Image Obfuscation with Deep Learning
arXiv:1609.00408, 2016
Media: The Register, WIRED, The Telegraph, BBC, and more


Reza Shokri and Vitaly Shmatikov
Privacy-Preserving Deep Learning [code]
ACM Conference on Computer and Communications Security (CCS), 2015
Invited to the Conference on Communication, Control, and Computing (Allerton), 2015
Media: MIT Technology Review

Reza Shokri
Privacy Games: Optimal User-Centric Data Obfuscation
Privacy Enhancing Technologies Symposium (PETS), 2015

Igor Bilogrevic, Kevin Huguenin, Stephan Mihaila, Reza Shokri, and Jean-Pierre Hubaux
Predicting Users' Motivations behind Location Check-Ins and Utility Implications of Privacy Protection Mechanisms
Network and Distributed System Security (NDSS) Symposium, 2015

Arthur Gervais, Reza Shokri, Adish Singla, Srdjan Capkun, and Vincent Lenders
Quantifying Web-Search Privacy [code]
ACM Conference on Computer and Communications Security (CCS), 2014

Reza Shokri, George Theodorakopoulos, Panos Papadimitratos, Ehsan Kazemi, and Jean-Pierre Hubaux
Hiding in the Mobile Crowd: Location Privacy through Collaboration
IEEE Transactions on Dependable and Secure Computing (TDSC), 2014

Reza Shokri
Quantifying and Protecting Location Privacy
PhD Thesis, EPFL, 2013

Reza Shokri, George Theodorakopoulos, Carmela Troncoso, Jean-Pierre Hubaux, and Jean-Yves Le Boudec
Protecting Location Privacy: Optimal Strategy against Localization Attacks [code]
ACM Conference on Computer and Communications Security (CCS), 2012

Reza Shokri, George Theodorakopoulos, Jean-Yves Le Boudec, and Jean-Pierre Hubaux
Quantifying Location Privacy [code]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2011
IEEE Security and Privacy (S&P) Test-of-Time Award 2021.
Runner-up for the Outstanding Research Award in Privacy Enhancing Technologies 2012.

Teaching

CS5562 (Sem 1: 2022-23): Trustworthy Machine Learning (robustness, privacy, and fairness in machine learning)

CS3235 (Sem 1: 2022-23): Computer Security

CS5562 (Sem 1: 2021-22): Trustworthy Machine Learning

CS3235 (Sem 1: 2021-22): Computer Security

CS6283 (Sem 1: 2020-21): Topics in Computer Science: Trustworthy Machine Learning

CS3235 (Sem 1: 2020-21): Computer Security

CS3235 (Sem 2: 2019-20): Computer Security (secure channels, software security, OS security, privacy)

CS6231 (Sem 1: 2019-20): Topics in Computer Security: Adversarial Machine Learning (privacy, robustness)

CS4257 (Sem 2: 2018-19): Algorithmic Foundations of Privacy (anonymous communication, data privacy, private computation)

CS6231 (Sem 1: 2018-19): An Adversarial View of Privacy (inference attacks)

CS4257 (Sem 2: 2017-18): Algorithmic Foundations of Privacy (anonymous communication, data privacy, private computation)

CS6101 (Sem 1: 20**): Privacy and Security in Machine Learning (trustworthy machine learning)

Professional Activities

Organizer: NUS Computer Science Research Week: 2019, 2020, 2021, 2022
Co-organizer: ICLR Workshop Distributed and Private Machine Learning (DPML): 2021

Award committee member
  • The Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies: 2015, 2016, 2019, 2021, 2022
  • CNIL-Inria Privacy Award: 2021
Program committee member
  • co-chair: Shadow PC of IEEE Symposium on Security and Privacy (S&P): 2021
  • co-chair: Hot Topics in Privacy Enhancing Technologies (HotPETs): 2013 and 2014
  • IEEE Symposium on Security and Privacy (Oakland): 2019, 2020, 2021, 2023
  • ACM Conference on Computer and Communications Security (CCS): 2017, 2019, 2020, 2021, 2022
  • ACM Conference on Fairness, Accountability, and Transparency (FAccT): 2022
  • ACM CCS Workshop on Privacy-Preserving Machine Learning (PPML): 2021
  • Deep Learning and Security workshop (DLS): 2020, 2021
  • AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI): 2020, 2021
  • Privacy-Enhancing Technologies Symposium (PETS): 2013, 2014, 2015, 2017, 2019, 2020
  • ACM ASIA Conference on Computer and Communications Security (ASIACCS): 2019, 2020
  • ACM CCS Workshop on Theory and Practice of Differential Privacy (TPDP): 2018, 2019
  • USENIX Security and AI Networking Conference: 2019
  • USENIX Security Symposium: 2015, 2016
  • Network and Distributed System Security Symposium (NDSS): 2016, 2017
  • IEEE European Symposium on Security and Privacy (Euro S&P): 2017
  • ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec): 2014, 2015, 2016, 2018
  • Conference on Decision and Game Theory for Security (GameSec): 2015, 2016, 2018
  • International World Wide Web Conference (WWW): 2016
  • ACM Workshop on Privacy in the Electronic Society (WPES): 2012, 2015
  • ASIACCS Workshop on IoT Privacy, Trust, and Security (IoTPTS): 2015, 2016
  • Workshop on Understanding and Enhancing Online Privacy (UEOP): 2016
  • International Workshop on Obfuscation: Science, Technology, and Theory: 2017
  • International Conference on Privacy, Security and Trust (PST): 2014

Invited Talks and Visits

Data Privacy and Trustworthy Machine Learning, CVPR Workshop on Responsible Computer Vision, June 2021

Data Privacy in Machine Learning, Google APAC Academic Research Talk Series, April 2021

Modeling Privacy Erosion: Differential Privacy Dynamics in Machine Learning, AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI), February 2021

Privacy at the Intersection of Trustworthy Machine Learning, NeurIPS Workshop on Privacy Preserving Machine Learning (PriML and PPML Joint Edition), December 2020

NeurIPS Workshop on ML Retrospectives, Surveys & Meta-Analyses (ML-RSA), December 2020

Data Privacy in Machine Learning, EMNLP Workshop on Privacy in Natural Language Processing (PrivateNLP), November 2020

In Search of Lost Performance in Privacy-Preserving Deep Learning, ECCV Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS), August 2020

Trustworthy Machine Learning, IPM Advanced School on Computing: Artificial Intelligence, August 2020

Trustworthy Machine Learning, AI Singapore Summer School, August 2020

Cronus: Robust Knowledge Transfer for Federated Learning, Google Workshop on Federated Learning and Analytics, July 2020

Data Privacy in Machine Learning, Future of Privacy Forum webinar on Privacy Preserving Machine Learning: New Research on Data and Model Privacy, June 2020

[Keynote] International Conference on Information Systems Security (ICISS), India, December 2019

IETF Privacy Enhancements and Assessments Research Group, Singapore, November 2019

AI Singapore Summer School, July 2019

INRIA Grenoble, France, July 2019

INRIA Saclay and LIX, France, July 2019

[Keynote] ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec), Paris, France, July 2019

EPFL Summer Research Institute, Switzerland, June 2019

ForMaL: DigiCosme Spring School on Formal Methods and Machine Learning, ENS Paris-Saclay, France, June 2019

Researchers

Hongyan Chang (PhD Student)
Rishav Chourasia (PhD Student)
Martin Strobel (PhD Student)
Jiashu Tao (PhD Student)
Jiayuan Ye (PhD Student)
Zitai Chen (PhD Student)
Haoxing Lin (PhD Student)
Hannah Brown (PhD Student)
Victor Masiak (Master's Student)
Philippe Liu (Master's Student)
Prakhar Ganesh (Master's Student)