Assistant Professor Yair Zick: Ethics in Artificial Intelligence

20 February 2018

20 February 2018 – With the proliferation of Artificial Intelligence (AI) technology and major tech companies facing ethical dilemmas, NUS Computing Assistant Professor Yair Zick shares his thoughts on computational fairness and ethics in AI.

A Straits Times article published on 18 February reported a growing trend of universities offering ethics courses to their Computer Science students. American universities such as Harvard University and the Massachusetts Institute of Technology (MIT) have started offering courses on the ethics and regulation of AI.

These courses aim to train a new generation of technologists and policymakers to consider the ramifications of technology. As technologies like machine learning, in which computer algorithms learn tasks autonomously by analysing data, become increasingly popular, these advances have the potential to both help and harm people.

The spotlight on ethics also emerges at a time when big tech firms have been struggling to handle the side effects of their technology, such as fake news on Facebook and fake followers on Twitter.

Is technology fair? How do you make sure data is not biased? Should machines be judging humans?

Singapore is rapidly moving towards a data-driven society, pushing forward national initiatives such as AI Singapore and the Smart Nation, in addition to providing considerable financial backing for AI research and development in both industry and academia. This major push will bring AI technologies to high-impact domains such as healthcare, transport and finance.

As we deploy such systems, it is easy to get caught up in the hype. Data-driven machine learning can offer extremely effective and flexible solutions to problems that seemed unsolvable only a short time ago.

As data-driven decision-making systems grow more complex, they become less interpretable. It is often very difficult to understand why a predictive algorithm arrived at the decision it did.

This is largely because these algorithms are often ‘black boxes’: their internal workings are hidden from us, and their internal processes are so complex that even their own designers would be hard-pressed to explain their behaviour on a given input.
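To make the ‘black box’ problem concrete, here is a toy, purely illustrative sketch of one way a model can be probed from the outside: randomly perturb each input feature and observe how often the prediction changes. The black_box function, the data, and the feature count below are invented stand-ins, not any real deployed system or a specific method from our group.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    """Stand-in for an opaque classifier: we may only query its outputs."""
    return (2.0 * X[:, 0] + 0.5 * X[:, 1] - 1.0 * X[:, 2] > 0).astype(int)

X = rng.normal(size=(1000, 3))   # a sample of inputs to query the model with
baseline = black_box(X)

for j in range(X.shape[1]):
    X_perturbed = X.copy()
    # Shuffle feature j across the sample, breaking its link to the output.
    X_perturbed[:, j] = rng.permutation(X_perturbed[:, j])
    flip_rate = np.mean(black_box(X_perturbed) != baseline)
    print(f"feature {j}: prediction changed on {flip_rate:.1%} of inputs")
```

A feature whose perturbation flips many predictions exerts a strong influence on the model's decisions; query-based influence measures in this spirit underlie much of the academic work on algorithmic transparency.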

Algorithmic opacity is problematic for several reasons.

First, it makes it very difficult to hold algorithms, or their designers, accountable to lawmakers, experts and the general public. Second, when they do get things wrong, it is very difficult to determine what exactly happened; indeed, bad algorithmic behaviour can go undetected for a significant amount of time. Finally, all of this makes it very difficult to decide whether these algorithms are treating users in a fair, unbiased manner.

Our group is currently studying methods for ensuring algorithmic transparency, as well as methods for ensuring the fair allocation of limited resources, across a variety of domains.
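As a flavour of what fair allocation can mean computationally, here is a minimal sketch of round-robin allocation, a classic textbook routine for dividing indivisible items; for additive valuations it guarantees ‘envy-freeness up to one good’ (EF1). The agents, items and valuations below are invented for illustration and do not come from any of our projects.

```python
# Round-robin allocation: agents take turns picking their most-valued
# remaining item. Valuations here are hypothetical examples.
valuations = {
    "alice": {"gpu": 8, "desk": 3, "lamp": 1},
    "bob":   {"gpu": 5, "desk": 6, "lamp": 2},
}
items = set(next(iter(valuations.values())))
allocation = {agent: [] for agent in valuations}

turn_order = list(valuations)
while items:
    for agent in turn_order:
        if not items:
            break
        # Each agent, in turn, takes their most-valued remaining item.
        best = max(items, key=lambda it: valuations[agent][it])
        allocation[agent].append(best)
        items.remove(best)

print(allocation)  # e.g. {'alice': ['gpu', 'lamp'], 'bob': ['desk']}
```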

I am not alone.

Several other researchers at NUS Computing are involved in creating AI for the Social Good: ensuring that we build secure and private systems, creating machines that interact well with humans, designing explainable AI and methods for interpreting data classification, and much, much more. We live in an exciting time!

Questions at the intersection of AI and ethics often turn out to be tricky. I personally do not believe that there is a single correct answer, especially in the fast-evolving Singapore AI landscape.

I do strongly believe that it is our duty as researchers to keep the public and its representatives informed, not only about the benefits, but also about the potential risks of using AI technologies. I do hope to be a part of this conversation in Singapore.

A/P Zick’s research interests include computational fair division, computational social choice, algorithmic game theory and algorithmic transparency.
