My research is in data privacy and trustworthy machine learning. I am interested in designing algorithms to quantitatively measure the privacy risks of data processing algorithms, and in building scalable algorithms for generalizable machine learning models that are privacy-preserving, robust, interpretable, and fair. We work on understanding the trade-offs between different pillars of trust in machine learning in practical scenarios, and on resolving such conflicts with mathematical rigor. I have received the NUS Presidential Young Professorship for work on privacy in machine learning, the NUS Early Career Research Award for work on trustworthy machine learning for high-dimensional models, an AI Singapore research award (with Yair Zick) for work on privacy-aware transparency for machine learning, and an AI Singapore research award (with Li Shiuan Peh) for work on efficient and secure collaborative machine learning. I am a member of Intel's Private AI Collaborative Research Institute, working on the privacy and security of decentralized (federated) learning.
NEWS ➙ I am teaching a graduate course on Trustworthy Machine Learning. See trustworthy-ml.com for the list of research papers that we cover in the course (list curated jointly with Nicolas Papernot).
NEWS ➙ We have released our ML Privacy Meter tool, which enables quantifying the privacy risks of machine learning models. Here is our short article on Aiding Regulatory Compliance by Quantifying the Privacy Risks of Machine Learning, and its talk.
➙ I have open positions for PhD students and postdoctoral researchers. Please send me your CV and research statement.
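To illustrate what quantifying the privacy risk of a model means in practice, here is a minimal, self-contained Python sketch of a loss-based membership inference test. This is not the ML Privacy Meter API; all function and variable names are illustrative assumptions. The idea is that records a model has memorized tend to incur lower loss than unseen records, so the separability of the two loss distributions (summarized here by the ROC AUC) indicates how much membership information the model leaks.

# Minimal sketch (NOT the ML Privacy Meter API): quantify privacy risk as the
# ability to distinguish training members from non-members using only the
# per-sample loss. All names below are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

def membership_inference_auc(member_losses, nonmember_losses):
    """Score each record by -loss (lower loss => more likely a member),
    label members as 1 and non-members as 0, and return the ROC AUC.
    An AUC near 0.5 suggests little leakage from the loss alone; values
    approaching 1.0 indicate membership can be inferred reliably."""
    scores = np.concatenate([-np.asarray(member_losses),
                             -np.asarray(nonmember_losses)])
    labels = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))])
    return roc_auc_score(labels, scores)

# Toy example: members have somewhat lower loss on average than non-members.
rng = np.random.default_rng(0)
member_losses = rng.gamma(shape=2.0, scale=0.3, size=1000)     # training data
nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=1000)  # held-out data
print(f"membership inference AUC: "
      f"{membership_inference_auc(member_losses, nonmember_losses):.3f}")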
Data Privacy in Machine Learning, Future of Privacy Forum webinar on Privacy Preserving Machine Learning: New Research on Data and Model Privacy, June 2020
Cronus: Robust Knowledge Transfer for Federated Learning, Google Workshop on Federated Learning and Analytics, July 2020
Trustworthy Machine Learning, AI Singapore Summer School, August 2020
Trustworthy Machine Learning, IPM Advanced School on Computing: Artificial Intelligence, August 2020
In Search of Lost Performance in Privacy-Preserving Deep Learning, ECCV Workshop on The Bright and Dark Sides of Computer Vision: Challenges and Opportunities for Privacy and Security (CV-COPS), August 2020
Data Privacy in Machine Learning, EMNLP Workshop on Privacy in Natural Language Processing (PrivateNLP), November 2020
NeurIPS Workshop on ML Retrospectives, Surveys & Meta-Analyses (ML-RSA), December 2020
NeurIPS Workshop on Privacy Preserving Machine Learning (PriML and PPML Joint Edition), December 2020
AAAI Workshop on Privacy-Preserving Artificial Intelligence (PPAI), February 2021
Hongyan Chang, and Reza Shokri
➙ On the Privacy Risks of Algorithmic Fairness
arXiv:2011.03731, 2020.
Anshul Aggarwal, Trevor Carlson, Reza Shokri, and Shruti Tople
➙ SOTERIA: In Search of Efficient Neural Networks for Private Inference
arXiv:2007.12934, 2020.
Milad Nasr, Reza Shokri, and Amir Houmansadr
➙ Improving Deep Learning with Differential Privacy using Gradient Encoding and Denoising
arXiv:2007.11524, 2020.
Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, and Reza Shokri
➙ On Adversarial Bias and the Robustness of Fair Machine Learning
arXiv:2006.08669, 2020.
Neel Patel, Reza Shokri, and Yair Zick
➙ Model Explanations with Differential Privacy
arXiv:2006.09129, 2020.
Hongyan Chang, Virat Shejwalkar, Reza Shokri, and Amir Houmansadr
➙ Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer
arXiv:1912.11279, 2019.
Reza Shokri, Martin Strobel, and Yair Zick
➙ On the Privacy Risks of Model Explanations
arXiv:1907.00164, 2019.
Media:
Harvard Business Review
Ni Trieu, Kareem Shehata, Prateek Saxena, Reza Shokri, and Dawn Song
➙ Epione: Lightweight Contact Tracing with Strong Privacy
arXiv:2004.13293, 2020.
Te Juin Lester Tan, and Reza Shokri
➙ Bypassing Backdoor Detection Algorithms in Deep Learning
IEEE European Symposium on Security and Privacy (EuroS&P), 2020.
Congzheng Song, and Reza Shokri
➙ Robust Membership Encoding: Inference Attacks and Copyright Protection for Deep Learning
ACM ASIA Conference on Computer and Communications Security (ASIACCS), 2020.
Liwei Song, Reza Shokri, and Prateek Mittal
➙ Privacy Risks of Securing Machine Learning Models against Adversarial Examples
➙ [talk by L. Song]
ACM Conference on Computer and Communications Security (CCS), 2019.
Milad Nasr, Reza Shokri, and Amir Houmansadr
➙ Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning
➙ [code]
➙ [talk by M. Nasr]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2019.
Sasi Kumar Murakonda, Reza Shokri, and George Theodorakopoulos
➙ Ultimate Power of Inference Attacks: Privacy Risks of Learning High-Dimensional Graphical Models
arXiv:1905.12774, 2019.
Milad Nasr, Reza Shokri, and Amir Houmansadr
➙ Machine Learning with Membership Privacy using Adversarial Regularization
➙ [code]
➙ [talk by A. Houmansadr]
ACM Conference on Computer and Communications Security (CCS), 2018.
Tyler Hunt, Congzheng Song, Reza Shokri, Vitaly Shmatikov, and Emmett Witchel
➙ Chiron: Privacy-preserving Machine Learning as a Service
arXiv:1803.05961, 2018.
Media:
ZDNet
Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov
➙ Membership Inference Attacks against Machine Learning Models
➙ [code]
➙ [tool]
➙ [datasets]
➙ [talk]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2017.
The Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies 2018.
Vincent Bindschaedler, Reza Shokri, and Carl Gunter
➙ Plausible Deniability for Privacy-Preserving Data Synthesis
➙ [code]
Proceedings of the VLDB Endowment (PVLDB), International Conference on Very Large Data Bases, 2017.
Vincent Bindschaedler and Reza Shokri.
➙ Synthesizing Plausible Privacy-Preserving Location Traces
➙ [code]
➙ [talk by V. Bindschaedler]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2016.
Reza Shokri, George Theodorakopoulos, and Carmela Troncoso
➙ Privacy Games along Location Traces: A Game-Theoretic Framework for Optimizing Location Privacy
ACM Transactions on Privacy and Security (TOPS), 2016.
Richard McPherson, Reza Shokri, and Vitaly Shmatikov
➙ Defeating Image Obfuscation with Deep Learning
arXiv:1609.00408, 2016.
Media:
The Register,
WIRED,
The Telegraph,
BBC,
and more
Reza Shokri and Vitaly Shmatikov.
➙ Privacy-Preserving Deep Learning
➙ [code]
ACM Conference on Computer and Communications Security (CCS), 2015.
Invited to the Allerton Conference on Communication, Control, and Computing, 2015.
Federated Learning
Media: MIT Technology Review
Reza Shokri.
➙ Privacy Games: Optimal User-Centric Data Obfuscation
Privacy Enhancing Technologies Symposium (PETS), 2015.
Igor Bilogrevic, Kevin Huguenin, Stephan Mihaila, Reza Shokri, and Jean-Pierre Hubaux.
➙ Predicting Users' Motivations behind Location Check-Ins and Utility Implications of Privacy Protection Mechanisms
Network and Distributed System Security (NDSS) Symposium, 2015.
Arthur Gervais, Reza Shokri, Adish Singla, Srdjan Capkun, and Vincent Lenders.
➙ Quantifying Web-Search Privacy
➙ [code]
ACM Conference on Computer and Communications Security (CCS), 2014.
Reza Shokri, George Theodorakopoulos, Panos Papadimitratos, Ehsan Kazemi, and Jean-Pierre Hubaux.
➙ Hiding in the Mobile Crowd: Location Privacy through Collaboration
IEEE Transactions on Dependable and Secure Computing (TDSC), 2014.
Reza Shokri, George Theodorakopoulos, Carmela Troncoso, Jean-Pierre Hubaux, and Jean-Yves Le Boudec.
➙ Protecting Location Privacy: Optimal Strategy against Localization Attacks
➙ [code]
ACM Conference on Computer and Communications Security (CCS), 2012.
Reza Shokri, George Theodorakopoulos, Jean-Yves Le Boudec, and Jean-Pierre Hubaux.
➙ Quantifying Location Privacy
➙ [code]
IEEE Symposium on Security and Privacy (S&P) -- Oakland, 2011.
Runner-up for the Award for Outstanding Research in Privacy Enhancing Technologies, 2012.
➙ CS6283 (Sem 1: 2020-21): Topics in Computer Science: Trustworthy Machine Learning
➙ CS3235 (Sem 1: 2020-21): Computer Security
CS3235 (Sem 2: 2019-20): Computer Security (secure channels, software security, OS security, privacy)
CS6231 (Sem 1: 2019-20): Topics in Computer Security: Adversarial Machine Learning (privacy, robustness)
CS4257 (Sem 2: 2018-19): Algorithmic Foundations of Privacy (anonymous communication, data privacy, private computation)
CS6231 (Sem 1: 2018-19): An Adversarial View of Privacy (inference attacks)
CS4257 (Sem 2: 2017-18): Algorithmic Foundations of Privacy (anonymous communication, data privacy, private computation)
CS6101 (Sem 1: 20**): Privacy and Security in Machine Learning (trustworthy machine learning)
ForMaL: DigiCosme Spring School on Formal Methods and Machine Learning, ENS Paris-Saclay, France, June 2019
EPFL Summer Research Institute, Switzerland, June 2019
INRIA Saclay and LIX, France, July 2019
Keynote, ACM Workshop on Information Hiding and Multimedia Security (IH&MMSec), Paris, France, July 2019
INRIA Grenoble, France, July 2019
AI Singapore Summer School, July 2019
IETF Privacy Enhancements and Assessments Research Group, Singapore, November 2019
Keynote, International Conference on Information Systems Security (ICISS), India, December 2019