We are honored to have the following invited speakers:
Goldsmith has been active in the AI community since 1996 and has published heavily cited, award-winning papers. Her honors include the First Annual IJCAI-JAIR Best Paper Prize (Honorable Mention, 2003) for "The Computational Complexity of Probabilistic Plan Existence and Evaluation" (Journal of AI Research, 1998), as well as awards for student papers she coauthored at FLAIRS '12 and CGAMES '13. Her research spans many aspects of decision making, including decision making under uncertainty; computational social choice; preference elicitation, representation, and aggregation; computational learning theory; and computational complexity. She has published numerous articles in the leading AI conferences (e.g., AAMAS, AAAI, AIPS, FLAIRS, IJCAI, ICML, ISAIM, NIPS, and UAI) and journals (e.g., AIJ, AIM, IJAR, JAIR, JACM, JMLR, and TIIS). In 2014 she was recognized as a Senior Member of AAAI, the Association for the Advancement of Artificial Intelligence.
In 2015, Goldsmith received an Undergraduate Research Mentor award from the Computing Research Association. She has received teaching awards at the department, college, and university levels at the University of Kentucky. In 1998, Goldsmith was recognized by the AAAS for her mentoring of members of underrepresented groups in the STEM disciplines. She has helped organize and participated in several conferences for women in computing, as well as multiple doctoral consortia at AI conferences.
Goldsmith has taught classes in recent years on artificial intelligence, theory of computing, discrete math and logic, comparative decision-making studies, and "science fiction and computer ethics," and is currently working with Emanuelle Burton, Nicholas Mattei, and Cory Siler on a textbook for that course.
Goldsmith has been on the editorial board of JAIR since 2008, and on the editorial board of Artificial Intelligence since 2015. She has co-edited special issues of Annals of Mathematics and Artificial Intelligence ('14), the International Journal of Approximate Reasoning, and AI Magazine ('08).
She has been a senior program committee (PC) member for both IJCAI and AAAI. She has served on numerous AAAI, UAI, and ICAPS PCs; helped organize multiple MPREF (Multi-disciplinary Preference Handling) Workshops and UAI Workshops on Bayesian Applications; co-organized doctoral consortia (DCs) at ICAPS '08 and IJCAI '11; and served on DC PCs for other AI conferences, including AAMAS. She has been involved in the interdisciplinary conference Algorithmic Decision Theory, and was conference chair for Algorithmic Decision Theory '15, held in Lexington, KY.
She has served on multiple NSF panels, and has reviewed proposals for the national funding agencies of many other countries.
A key front for ethical questions in and about computer science is teaching students how to engage with the questions they will face in their professional careers, given the tools and technologies we teach them. In past work (and current teaching) we have advocated the use of science fiction as a tool that enables computer science researchers and teachers to engage students and the public on the current state and potential impacts of computer science. We present teaching suggestions for using Ken Liu's short story "The Here-and-Now" to teach topics in computer ethics. In particular, we use the story to examine ethical issues related to privacy and personhood. To facilitate our discussion, we give a high-level view of common ethical theories and indicate how they inform the questions raised by the story and afford a structure for thinking about how to address them.
Increasingly, outcomes affecting people's lives are influenced by systems that are based on personal data. The potential for these data-driven systems to enable "intelligent" applications has generated excitement, but it has also been accompanied by legitimate concerns about the threat that they pose to values such as privacy and fairness. Among the primary factors leading to such concerns is that these systems are opaque: it is difficult to explain their behavior, and in particular why a certain decision was made. Opacity poses a challenge for developers, users, and auditors who wish to account for these systems' behavior as it relates to privacy and fairness.
In this talk, we show that privacy and fairness harms in data-driven systems can often be addressed by accounting for the way in which they make use of individuals' personal information. Central to this approach is a notion of proxy use, which characterizes systems that employ strong predictors of a protected information type, rather than making direct, explicit use of the data in question. We describe analysis-based techniques for identifying and repairing inappropriate instances of proxy use, and show how they can be applied to a broad set of widely-used machine learning algorithms. Using several applications of machine learning on social datasets, we illustrate how these techniques provide a useful form of transparency by isolating and explaining behaviors that amount to violations of privacy and fairness, while giving developers sufficient information to remove these violations without unduly compromising the system's performance.
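The idea of proxy use described above can be illustrated with a small, simplified sketch. The real techniques in the talk analyze the internal structure of models; the hypothetical function below only flags features that are both strongly associated with a protected attribute (a linear-correlation stand-in for "strong predictor") and weighted heavily by a linear model. All names and thresholds here are illustrative assumptions, not the authors' actual method.

```python
# Hypothetical, simplified sketch of flagging potential "proxy" features:
# features that strongly predict a protected attribute AND that the model
# relies on. This uses plain Pearson correlation as a stand-in for the
# richer association and influence measures a real analysis would use.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def flag_proxies(features, protected, weights,
                 assoc_thresh=0.8, use_thresh=0.1):
    """features: dict name -> list of values; protected: list of values;
    weights: dict name -> model coefficient (illustrative linear model).
    Returns (name, association, influence) for each suspected proxy."""
    flagged = []
    for name, values in features.items():
        assoc = abs(pearson(values, protected))      # "predicts protected type"
        influence = abs(weights.get(name, 0.0))      # "model actually uses it"
        if assoc >= assoc_thresh and influence >= use_thresh:
            flagged.append((name, assoc, influence))
    return flagged

# Example: "zip_code" perfectly tracks the protected attribute, so it is
# flagged even though the protected attribute itself is never used directly.
features = {"zip_code": [0, 0, 1, 1], "income": [1, 2, 1, 2]}
protected = [0, 0, 1, 1]
weights = {"zip_code": 0.5, "income": 0.9}
print(flag_proxies(features, protected, weights))
```

A repair step, in this simplified picture, would reduce or zero out the flagged coefficient and re-check performance, mirroring the talk's goal of removing violations without unduly compromising the system.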