Invited Speakers

We are honored to have the following invited speakers:

  • Prof. Matt Fredrikson, Carnegie Mellon University

    Bio:

    Matt Fredrikson's research is directed at understanding fundamental security and privacy issues that lead to failures in real systems. Some of the key outstanding challenges in this area lie in figuring out why promising theoretical approaches often do not translate into effective defenses. Much of his work is concerned with developing formal analysis techniques that provide insight into the problems that might exist in a system, building countermeasures that give provable guarantees, and measuring the effectiveness of these solutions in real settings. Most of his current research focuses on issues of privacy and data confidentiality. To an even greater extent than with other security issues, our scientific understanding of this area lags far behind the need for rigorous defensive strategies. He believes that in order to reason effectively about privacy in software systems, we need application-specific ways to characterize and limit adversarial uncertainty and inference.
  • Prof. Judy Goldsmith, University of Kentucky

    Bio:

    Dr. Judy Goldsmith received her degrees in Mathematics from Princeton University and the University of Wisconsin-Madison. She held postdoctoral positions at Dartmouth College and Boston University, held an assistant professorship at the University of Manitoba, and has been in the Computer Science Department of the University of Kentucky since 1993, where she is a full professor.

    Goldsmith has been active in the AI community since 1996, and has published heavily cited and award-winning papers, including the First Annual IJCAI-JAIR Best Paper Prize (Honorable Mention, 2003) for "The Computational Complexity of Probabilistic Plan Existence and Evaluation" (Journal of AI Research, 1998), and honors for student papers she coauthored at FLAIRS '12 and CGAMES '13. Her research areas include many aspects of decision making, including decision making under uncertainty; computational social choice; preference elicitation, representation, and aggregation; computational learning theory; and computational complexity. She has published numerous articles in the leading AI conferences (e.g., AAMAS, AAAI, AIPS, FLAIRS, IJCAI, ICML, ISAIM, NIPS, and UAI) and journals (e.g., AIJ, AIM, IJAR, JAIR, JACM, JMLR, and TIIS). She was recognized in 2014 as a Senior Member of AAAI, the Association for the Advancement of Artificial Intelligence.

    In 2015, Goldsmith received an Undergraduate Research Mentor award from the Computing Research Association. She has received teaching awards at the department, college, and university level at the University of Kentucky. In 1998, Goldsmith was recognized by the AAAS for her mentoring of members of underrepresented groups in the STEM disciplines. She has helped organize and/or participated in several conferences for women in computing, as well as multiple doctoral consortia at AI conferences.

    Goldsmith has taught classes in recent years on artificial intelligence, theory of computing, discrete math and logic, comparative decision making studies, and "science fiction and computer ethics", and is currently working with Emanuelle Burton, Nicholas Mattei, and Cory Siler on a textbook for that course.

    Goldsmith has been on the editorial board of JAIR since 2008, and on the editorial board of Artificial Intelligence since 2015. She has co-edited special issues of Annals of Mathematics and Artificial Intelligence ('14), International Journal on Approximate Reasoning, and AI Magazine ('08).

    She has been a senior program committee (PC) member for both IJCAI and AAAI. She has been on numerous AAAI, UAI, and ICAPS PCs; helped organize multiple MPREF (Multi-disciplinary Preference Handling) Workshops and UAI Workshops on Bayesian Applications; co-organized doctoral consortia (DCs) at ICAPS '08 and IJCAI '11; and has been on DC PCs for other AI conferences, including AAMAS. She has been involved in the interdisciplinary conference Algorithmic Decision Theory, and was conference chair for Algorithmic Decision Theory '15, which was held in Lexington, KY.

    She has served on multiple NSF panels, and has reviewed proposals for the national funding agencies of many other countries.

  • Ms. Christine Sim, National University of Singapore

    Bio:

    Christine Sim is a Research Associate at the Centre for International Law. Christine completed her LLM in International Dispute Settlement (MIDS) at the Graduate Institute and the University of Geneva in Switzerland after obtaining her LLB at the National University of Singapore. She specialised in dispute resolution, focussing on public international law and private international law, which included investment disputes, the New York Convention, the ICSID Convention, international commercial arbitration, ethics in international arbitration, WTO law and international commercial litigation. Christine's LLM thesis subject was security for costs in investor-state disputes. Christine is admitted to the Supreme Court of Singapore as an advocate and solicitor. Prior to joining CIL, Christine practised for two years as an associate in the dispute resolution department of an established firm in Singapore, and trained for six months in the international arbitration department of an established international firm in Paris. Christine also undertakes pro bono criminal work and community legal advice work. Christine's research at CIL focuses on issues relating to public international law and international dispute resolution. She assists Mr J Christopher Thomas QC on international investment arbitrations.

  • Prof. Toby Walsh, University of New South Wales, Data61, and currently a guest professor at TU Berlin

    Bio:

    Toby Walsh is a leading researcher in Artificial Intelligence. He was recently named in the inaugural Knowledge Nation 100, the one hundred "rock stars" of Australia's digital revolution. He is Guest Professor at TU Berlin, Scientia Professor of Artificial Intelligence at UNSW and leads the Algorithmic Decision Theory group at Data61, Australia's Centre of Excellence for ICT Research. He has been elected a fellow of the Australian Academy of Science, and has won the prestigious Humboldt research award as well as the 2016 NSW Premier's Prize for Excellence in Engineering and ICT. He has previously held research positions in England, Scotland, France, Germany, Italy, Ireland and Sweden. He regularly appears in the media talking about the impact of AI and robotics. In the last year, he has appeared on TV and radio on the ABC, BBC, Channel 7, Channel 9, Channel 10, CCTV, DW, NPR, RT, SBS, and VOA, as well as on numerous local radio stations. He also writes frequently for print and online media. His work has appeared in the New Scientist, American Scientist, Le Scienze, Cosmos and The Best Writing in Mathematics (Princeton University Press). His Twitter account has been voted one of the top ten to follow to keep abreast of developments in AI. He often gives talks at public and trade events like CeBIT, the World Knowledge Forum, TEDx, The Next Big Thing Summit, and PauseFest. He has played a key role at the UN and elsewhere in the campaign to ban lethal autonomous weapons (aka "killer robots").

Program Schedule

July 17th (Tutorial and Seminar)

  • 9:00 - 10:00
    Coffee
  • 10:00 - 12:30
    Judy Goldsmith. Tutorial: "How to Teach Computer Ethics with Science Fiction" (joint work with Emanuelle Burton and Nicholas Mattei). SLIDES

    A key front for ethical questions in and about computer science is teaching students how to engage with the questions they will face in their professional careers based on the tools and technologies we teach them. In past work (and current teaching) we have advocated for the use of science fiction as a tool that enables computer science researchers and teachers to engage students and the public on the current state and potential impacts of computer science. We present teaching suggestions for using Ken Liu's short story "The Here-and-Now" to teach topics in computer ethics. In particular, we use the story to examine ethical issues related to privacy and personhood. To facilitate our discussion, we give a high-level view of common ethical theories and indicate how they inform the questions raised by the story and afford a structure for thinking about how to address them.

  • 12:30 - 14:30
    Lunch Break
  • 14:30 - 15:30
    Judy Goldsmith. Meetup with Female Students and Faculty. SLIDES
    In many countries and cultures, women are a minority of computer scientists. This can affect how we are treated, and how we feel about ourselves as professionals. Dr. Judy Goldsmith will talk about some of the implications for her in North America, both positive and negative. She will talk about how she responded to some of her gender-related challenges, and we will discuss strategies for surviving challenges, celebrating ourselves and our work, and supporting other women.
  • 18:00 - 20:00
    FAT-SG Welcome Reception
    The reception will take place on the patio at the NUS School of Computing, COM1 Basement. Please be aware that only registered participants may enter the reception; please register ahead of time if you plan to attend.

July 18th

  • 9:00 - 9:30
    Opening Remarks
  • 9:30 - 10:30
    Matt Fredrikson (Invited Talk). "Proxy Use in Data-Driven Systems: Practical Accountability for Privacy and Fairness". SLIDES

    Increasingly, outcomes affecting people's lives are influenced by systems that are based on personal data. The potential for these data-driven systems to enable "intelligent" applications has generated excitement, but has also been accompanied by legitimate concerns about the threat that they pose to values such as privacy and fairness. Among the primary factors leading to such concerns is the fact that these systems are opaque, meaning that it is difficult to explain their behavior, and in particular why a certain decision was made. Opacity poses a challenge for developers, users, and auditors who wish to account for these systems' behavior as it relates to privacy and fairness.

    In this talk, we show that privacy and fairness harms in data-driven systems can often be addressed by accounting for the way in which they make use of individuals' personal information. Central to this approach is a notion of proxy use, which characterizes systems that employ strong predictors of a protected information type, rather than making direct, explicit use of the data in question. We describe analysis-based techniques for identifying and repairing inappropriate instances of proxy use, and show how they can be applied to a broad set of widely-used machine learning algorithms. Using several applications of machine learning on social datasets, we illustrate how these techniques provide a useful form of transparency by isolating and explaining behaviors that amount to violations of privacy and fairness, while giving developers sufficient information to remove these violations without unduly compromising the system's performance. (A toy code sketch of proxy detection follows this day's schedule.)

  • 10:30 - 11:00
    Coffee Break
  • 11:00 - 11:30
    Barnabé Monnot, Francisco Benita and Georgios Piliouras. "How Bad is Selfish Routing in Practice?". SLIDES, PAPER
    Routing games are one of the most successful domains of application of game theory. It is well understood that simple dynamics converge to equilibria whose performance is nearly optimal, regardless of the size of the network or the number of agents. These strong theoretical assertions prompt a natural question: how well do these pen-and-paper calculations agree with the reality of everyday traffic routing? We focus on a semantically rich dataset from Singapore's National Science Experiment that captures detailed information about the daily behavior of thousands of Singaporean students. Using this dataset, we can identify the routes as well as the modes of transportation used by the students, e.g. car (driving or being driven to school) versus bus or metro, estimate origins and destinations (home-school) and trip durations, as well as their mode-dependent available routes. We quantify both system and individual optimality. With additional data derived from collective real-time traffic information, we estimate an upper bound to the Price of Anarchy of about 1.22. Individually, the typical behavior is consistent from day to day and nearly optimal, with low regret for not deviating to alternative paths. (A textbook Price of Anarchy computation is sketched after this day's schedule.)
  • 11:30 - 12:30
    Panel Discussion. "AI and the Law"
    Participants:
    • Yong Jie Khoo, member of the NUS Law alt-law interest group
    • Christine Sim, Research Associate (Centre for International Law, National University of Singapore)
    • Gerald Tan, Senior Associate (Digital Business, OC Queen Street LLC - in association with Osborne Clarke)
  • 12:30 - 14:30
    Lunch Break
  • 14:30 - 15:30
    Christine Sim (Invited Talk). "Will AI take over international arbitration?"
    International arbitration is one of the most common ways for international commercial parties to resolve their disputes. International arbitrators are appointed, instead of national judges, for their neutrality and flexibility. Unlike cases in national courts, commercial parties can choose AI to act as their arbitrator. Compared to arbitrators who take months or years to render an award, AI promises to render awards more quickly and cheaply. These programmes are also capable of independently learning from past cases to produce better awards than human arbitrators. This talk sets out the current options for using AI in arbitration, and explains AI's limitations and risks.
  • 15:30 - 16:00
    Junzhe Zhang and Elias Bareinboim. "Algorithmic Fairness Criteria with Unobserved Confounders".
    A qualitative measure of the effect of a certain input X on an outcome variable Y is essential in social settings where fairness is one of the primary concerns, for instance, in a legal dispute over the existence of gender discrimination (X) in hiring (Y). In this paper, we consider a new fairness setting with black-box access to an algorithmic decision-making system while allowing parts of the input variables to be unobserved (called the dishonest defendant setting). We start by studying the necessity and sufficiency of state-of-the-art direct effect measures in classic settings where all inputs to the decision-making system are observed (called the honest defendant setting). We then show that none of the known criteria is preferable when unobserved confounders (UCs) are present. Finally, we define a new fairness criterion based on counterfactual effects and prove its sufficiency in both the honest and dishonest settings. (A toy counterfactual-effect computation is sketched after this day's schedule.)
  • 16:00 - 16:30
    Coffee Break
  • 16:30 - 17:00
    Gunasekeran Dinesh Visva. "Health Tech Innovation and Applied Artificial Intelligence". SLIDES
    I will discuss my practical experience with innovating solutions in the local health tech scene, with a brief overview of relevant existing regulations and future directions in medical ethics, as well as a doctor's perspective on artificial intelligence applications in healthcare.
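
A minimal sketch of the proxy-use idea from Fredrikson's talk: flag input features that are both strongly associated with a protected attribute z and influential on a model's decisions. This is a crude, dataset-level approximation, not the program-analysis technique described in the talk; it assumes a fitted scikit-learn-style classifier, and the thresholds are arbitrary.

    import numpy as np

    def proxy_candidates(model, X, z, assoc_thresh=0.4, infl_thresh=0.05, seed=0):
        # Flag features that look like proxies for protected attribute z:
        # strongly associated with z AND influential on the model's output.
        rng = np.random.default_rng(seed)
        base = model.predict(X)
        flagged = []
        for i in range(X.shape[1]):
            # association: absolute correlation between feature i and z
            assoc = abs(np.corrcoef(X[:, i], z)[0, 1])
            # influence: fraction of decisions that change when feature i is scrambled
            Xp = X.copy()
            Xp[:, i] = rng.permutation(Xp[:, i])
            infl = np.mean(model.predict(Xp) != base)
            if assoc >= assoc_thresh and infl >= infl_thresh:
                flagged.append((i, assoc, infl))
        return flagged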
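
For the selfish-routing talk: the Price of Anarchy compares the social cost at equilibrium with the social optimum (the paper estimates an upper bound of about 1.22 from Singapore commute data). A textbook illustration on Pigou's two-link network, not the paper's estimation procedure:

    import numpy as np

    # Pigou's network: one unit of traffic, two parallel links.
    # Link A has constant latency 1; link B has latency equal to its load.
    def social_cost(x_b):
        # average latency when a fraction x_b of the traffic uses link B
        return x_b * x_b + (1.0 - x_b) * 1.0

    # Nash equilibrium: traffic moves onto B until its latency reaches 1,
    # so x_b = 1 and no user can improve by switching.
    eq_cost = social_cost(1.0)

    # Social optimum: minimise average latency over all splits.
    opt_cost = min(social_cost(x) for x in np.linspace(0.0, 1.0, 10001))

    print("Price of Anarchy:", eq_cost / opt_cost)  # ~4/3 for this network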
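
For Zhang and Bareinboim's talk: the fairness criteria are built from counterfactual effects. A toy sketch, assuming a fully specified linear structural causal model (which the paper's black-box, unobserved-confounder setting deliberately does not assume), showing how a direct effect differs from a total effect; all coefficients are invented.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # X: protected attribute, W: mediator (e.g., a qualification score),
    # Y: the system's decision score. Shared exogenous noise makes these
    # genuine counterfactuals rather than independent interventions.
    u_w = rng.normal(size=n)
    u_y = rng.normal(scale=0.1, size=n)

    def W(x):
        return 0.8 * x + u_w              # mediator under do(X = x)

    def Y(x, w):
        return 0.5 * x + 1.0 * w + u_y    # decision under do(X = x, W = w)

    # Direct effect: change X but hold the mediator at its value under X = 0.
    direct = np.mean(Y(1, W(0)) - Y(0, W(0)))   # -> 0.5
    total = np.mean(Y(1, W(1)) - Y(0, W(0)))    # -> 1.3
    print(f"direct effect: {direct:.3f}, total effect: {total:.3f}")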

July 19th

  • 9:00 - 9:30
    Jakub Sliwinski, Martin Strobel and Yair Zick. "A Characterization of Monotone Influence Measures for Data Classification". SLIDES, PAPER
    In this work we focus on the following question: how important was the i-th feature in determining the outcome for a given datapoint? We identify a family of influence measures: functions that, given a datapoint x, assign a value phi_i(x) to every feature i, roughly corresponding to feature i's importance in determining the outcome for x. This family is uniquely derived from a set of axioms: desirable properties that any reasonable influence measure should satisfy. Departing from prior work on influence measures, we assume no knowledge of, or access to, the underlying classifier labelling the dataset. In other words, our influence measures are based on the dataset alone, and do not make any queries to the classifier. While this requirement naturally limits the scope of explanations we provide, we show that it is effective on real datasets. (A naive dataset-only influence measure is sketched after this day's schedule.)
  • 9:30 - 10:30
    Panel Discussion. "Security and Privacy Issues in AI and Machine-Learning"
    Participants
    • Matt Fredrikson
    • Prateek Saxena
    • Toby Walsh
  • 10:30 - 11:00
    Coffee Break
  • 11:00 - 11:30
    Reza Shokri. "Data Privacy in Machine Learning". SLIDES
    I will talk about what machine learning privacy is, and will discuss how and why machine learning models leak information about the individual data records on which they were trained. My quantitative analysis will be based on fundamental membership inference attacks: given a data record and (black-box) access to a model, determine if the record was in the model's training set. I will demonstrate how to build such inference attacks on different classification models, e.g., those trained by commercial "machine learning as a service" providers such as Google and Amazon. (A minimal membership-inference baseline is sketched after this day's schedule.)
  • 11:30 - 12:30
    Toby Walsh (Invited Talk). "Deceased Organ Matching"
    Thousands of people in Australia are waiting for a donated kidney. Matching donated organs to people on the waiting list is becoming increasingly challenging as road safety improves. In 1989, the mean age of deceased donors was 32 years; by 2014, this had increased to 46 years. We therefore need to match the age of the organ to the age of the patient more carefully. I describe an ongoing project to design mechanisms that do this, taking care at the same time to be fair to different patients, blood and tissue types, and the different states in Australia. (A toy age-matching assignment is sketched after this day's schedule.)
  • 12:30 - 14:00
    Lunch Break
  • 14:00 - 15:00
    Judy Goldsmith (Invited Talk). "What's Hot in AI and Ethics". SLIDES
  • 15:00 - 15:30
    Coffee Break
  • 15:30 - 16:30
    Panel Discussion. "Societal Issues in AI and Machine Learning"
    Participants:
    • Judy Goldsmith
    • Toby Walsh
    • Harold Soh
  • 16:30 - 17:00
    Concluding Remarks
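
For Sliwinski, Strobel, and Zick's talk: a naive dataset-only influence measure in the spirit of the talk's setting (no queries to the classifier). This is an illustration only, not the measure characterized by the paper's axioms; it assumes binary features and binary labels in {0, 1}.

    import numpy as np

    def influence(D_X, D_y, x):
        # phi_i(x): difference in the positive-label rate between dataset
        # points that agree with x on feature i and points that do not.
        # Uses the dataset (D_X, D_y) alone -- no classifier access.
        phi = np.zeros(D_X.shape[1])
        for i in range(D_X.shape[1]):
            agree = D_X[:, i] == x[i]
            if agree.any() and (~agree).any():
                phi[i] = D_y[agree].mean() - D_y[~agree].mean()
        return phi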
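
For Shokri's talk: a minimal membership-inference baseline, assuming a scikit-learn-style predict_proba. Models tend to be more confident on records they were trained on, so the model's confidence in a record's true label, compared against a threshold calibrated on known non-members, gives a crude membership signal; the shadow-model attacks discussed in the talk are considerably stronger.

    import numpy as np

    def membership_score(predict_proba, record, label):
        # Black-box signal: the model's confidence in the record's true label.
        # Scores above a threshold (calibrated on known non-members) are
        # predicted "member".
        probs = predict_proba(record.reshape(1, -1))[0]
        return probs[label]

    # Usage sketch, given a fitted classifier `clf` and a candidate (x, y):
    #   is_member = membership_score(clf.predict_proba, x, y) > threshold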
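
For Walsh's talk: the age-matching idea can be reduced, very crudely, to an assignment problem: give each kidney to a compatible patient so that total donor-patient age mismatch is minimised. A sketch using SciPy's Hungarian algorithm; all numbers are invented, and the fairness constraints from the talk (blood and tissue types, states) are collapsed into a single compatibility mask.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    donor_age = np.array([46, 32, 60, 25])
    patient_age = np.array([50, 28, 65, 40])
    compatible = np.array([[1, 1, 1, 0],     # which patients each donor's
                           [1, 1, 0, 1],     # kidney is compatible with
                           [0, 1, 1, 1],
                           [1, 0, 1, 1]], dtype=bool)

    # Cost = age mismatch; incompatible pairs get a prohibitive penalty.
    cost = np.abs(donor_age[:, None] - patient_age[None, :]).astype(float)
    cost[~compatible] = 1e6

    donors, patients = linear_sum_assignment(cost)
    for d, p in zip(donors, patients):
        print(f"donor {d} (age {donor_age[d]}) -> patient {p} (age {patient_age[p]})")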