As large-scale decision-making algorithms grow in ubiquity, scale, and complexity, concerns have mounted about their potential for undesirable, even unethical, behavior. Recent studies have shown that big-data algorithms can inadvertently discriminate against protected minorities, or treat users unfairly through (often unconscious) design choices.
Amid calls for transparency, accountability, and fairness in machine learning, vested parties have responded in a number of ways. Governments are pushing new legislative initiatives; AI and computing consortia are proposing ethical codes of conduct; and industry members are joining forces with lawmakers and non-profits to ensure the ethical and beneficial use of AI technologies.
The research community has been quick to respond as well, producing a flurry of research papers and think tanks devoted to the transparent, ethical use of data technologies.
FAT-SG will discuss recent developments in this rapidly evolving landscape. The workshop is especially pertinent in the Singaporean context, given the Singapore government’s commitment to AI and data science as a means toward a better society.
Please register for FAT-SG here.