CS5562: Trustworthy Machine Learning



Course Information

Instructor: Prof. Reza Shokri
Semester: Fall 2023

Lectures: Fri. 16:00 -- 18:00
Location: Seminar Room 2


Course Description

Machine learning is increasingly used in critical decision-making systems, yet it is not reliable in the presence of noisy, biased, and adversarial data. Can we trust machine learning models? This course aims to answer that question by covering the fundamental aspects of reasoning about trust in machine learning: robustness to adversarial data and model manipulation, the privacy risks that machine learning algorithms pose for sensitive data, fairness measures for machine learning, and transparency in AI. It covers algorithms for analyzing machine learning vulnerabilities, as well as techniques for building reliable and trustworthy machine learning algorithms.

We will cover the fundamental concepts in:

  • Robustness in machine learning
  • Privacy in machine learning
  • Algorithmic fairness for machine learning
  • Trustworthy federated learning

Some of the references can be found at https://trustworthy-machine-learning.github.io

Schedule

[Week 01] Course overview; Introduction to trustworthy machine learning

  • Machine learning in the presence of an adversary
  • Responsible and trustworthy AI

[Week 02] Robustness: Inference in the adversarial setting

  • Machine learning in the presence of adversarial examples
  • Methods for generating adversarial examples
  • Definition of robustness
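
As a preview of this unit, here is a minimal sketch of one classic method for generating adversarial examples, the fast gradient sign method (FGSM), written for a simple logistic-regression model. The model, loss, and parameter names are illustrative assumptions, not course code:

```python
import numpy as np

def fgsm(x, y, w, b, eps):
    """One-step FGSM attack on a logistic-regression model (illustrative sketch).

    Perturbs input x by eps in the direction of the sign of the loss gradient,
    which (for small eps) maximally increases the cross-entropy loss.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction
    grad_x = (p - y) * w           # d(cross-entropy)/dx for logistic loss
    return x + eps * np.sign(grad_x)
```

For example, with `w = [1, -2]`, `b = 0`, a point `x = [0.5, 0.5]` with true label `y = 1`, and `eps = 0.1`, the attack moves each coordinate by 0.1 in the direction that increases the loss.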

[Week 03] Robustness: Robust inference in the adversarial setting

  • Various (failed) attempts to build robust models
  • Adversarial training
  • Certified robustness using randomized smoothing
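
To give a flavor of randomized smoothing, the sketch below implements its prediction step: the smoothed classifier returns the majority vote of a base classifier over Gaussian perturbations of the input (the base classifier `f` and the parameters here are illustrative assumptions):

```python
import numpy as np

def smoothed_predict(f, x, sigma, n=1000, rng=None):
    """Prediction step of randomized smoothing (illustrative sketch).

    Returns the most frequent class of base classifier f over n
    Gaussian perturbations of x with per-coordinate std sigma.
    """
    rng = rng or np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    votes = np.array([f(x + e) for e in noise])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]
```

The certification step (turning the vote margin into a certified radius) is covered in lecture; this sketch shows only the smoothed prediction.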

[Week 04] Robustness: Learning in the adversarial setting

  • Introduction to data poisoning attacks
  • Influence of training data on models
  • Methods for designing poisoning data
  • Backdoor attacks
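
To illustrate how training data influences a model, the sketch below shows a simple label-flipping poisoning attack against a nearest-class-mean classifier: injecting a few mislabeled points near a target input pulls a class mean toward it and flips the model's prediction. The classifier and data are toy assumptions for illustration only:

```python
import numpy as np

def fit_mean_classifier(X, y):
    """Nearest-class-mean classifier: just store the two class means."""
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def predict(means, x):
    """Assign x to the class whose mean is closer."""
    m0, m1 = means
    return int(np.linalg.norm(x - m1) < np.linalg.norm(x - m0))
```

A poisoning adversary who can insert a handful of class-1-labeled copies of a point near the target shifts the class-1 mean enough to change the target's predicted label, even though most of the training data is clean.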

[Week 05] Privacy: Introduction to anonymity and data privacy

  • Data anonymization
  • Data de-anonymization
  • Introduction to inference attacks
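
As a small preview of data anonymization, the sketch below checks the classic k-anonymity property: every combination of quasi-identifier values must be shared by at least k records. The record format (a list of dicts) is an illustrative assumption:

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Check k-anonymity: each quasi-identifier combination appears >= k times."""
    groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in groups.values())
```

A dataset failing this check contains a record whose quasi-identifiers single it out, which is exactly what de-anonymization attacks exploit.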

[Week 06] Privacy: Inference attacks

  • Membership inference attacks
  • Reconstruction attacks
  • Defining privacy under inference attacks

[Week 07] Privacy: Quantitative reasoning about data privacy in machine learning

  • Membership inference attacks in machine learning
  • How to design powerful inference attacks
  • Introduction to differential privacy
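
To preview membership inference in machine learning, the sketch below implements the simplest loss-based attack: predict that a point is a training member when the model's loss on it is below a threshold, exploiting the fact that models tend to fit their training data more closely than unseen data. The threshold and interface are illustrative assumptions:

```python
import numpy as np

def loss_based_mia(losses, threshold):
    """Loss-threshold membership inference (illustrative sketch).

    Predicts 1 ("training member") for each example whose loss is
    below the threshold, and 0 ("non-member") otherwise.
    """
    return (np.asarray(losses) < threshold).astype(int)
```

Stronger attacks covered in lecture calibrate this decision per example (e.g., using shadow or reference models) instead of a single global threshold.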

[Week 08] Privacy: Differentially private machine learning

  • Differential privacy
  • Methods and mechanisms for preserving differential privacy
  • Differentially private SGD
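
The core update of differentially private SGD combines two ingredients: clipping each per-example gradient to bound its sensitivity, then adding Gaussian noise calibrated to that bound. A minimal sketch, with illustrative parameter names (the privacy accounting that maps the noise multiplier to an epsilon is covered in lecture):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_mult, rng=None):
    """One DP-SGD gradient step (illustrative sketch).

    Clips each per-example gradient to L2 norm clip_norm, sums them,
    adds Gaussian noise with std noise_mult * clip_norm, and averages.
    """
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_mult * clip_norm, size=per_example_grads[0].shape
    )
    return noisy_sum / len(per_example_grads)
```

Clipping bounds how much any single example can change the update, which is what makes the added noise sufficient for a differential privacy guarantee.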

[Week 09] Fairness: Bias in machine learning

  • Sources and types of bias in machine learning
  • Formal definitions of fairness
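
As a preview of formal fairness definitions, the sketch below computes the demographic-parity gap: the absolute difference in positive-prediction rates between two groups (a gap of 0 means the statistical-parity criterion is satisfied). The binary-group encoding is an illustrative assumption:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """|P(yhat = 1 | A = 0) - P(yhat = 1 | A = 1)| for binary group A."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()
    rate_1 = y_pred[group == 1].mean()
    return abs(rate_0 - rate_1)
```

Other criteria covered in lecture (e.g., equalized odds) condition these rates on the true label as well.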

[Week 10] Fairness: Satisfying fairness criteria in machine learning

  • Methods for achieving group fairness
  • Limitations and tradeoffs
  • Open issues and advanced topics

[Week 11] Federated learning: Privacy

  • Learning without sharing data
  • Privacy risks of federated learning
  • Federated learning with differential privacy
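
The aggregation at the heart of federated learning (FedAvg-style) is a data-size-weighted average of client model parameters; clients share updates, never raw data. A minimal sketch, with illustrative names:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client parameter vectors, proportional to data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

The unit's privacy results show that even these aggregated updates can leak information about client data, motivating the combination with differential privacy.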

[Week 12] Federated learning: Robustness and Fairness

  • Poisoning attacks in federated learning
  • Robust federated learning
  • Bias and fairness in federated learning
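
One standard robust-aggregation idea from this unit can be sketched in a line: replace the mean of client updates with the coordinate-wise median, which a minority of poisoned updates cannot drag arbitrarily far (this is one illustrative defense, not the only one covered):

```python
import numpy as np

def median_aggregate(updates):
    """Coordinate-wise median of client updates: robust to a minority of outliers."""
    return np.median(np.stack(updates), axis=0)
```

With plain averaging, a single malicious client can shift the global model arbitrarily; the median bounds that influence as long as most clients are honest.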