NUS team develops tool that can assess vulnerability of AI systems to attacks

National University of Singapore (NUS) researchers have developed a tool to safeguard against a new form of cyber attack that can recreate the data sets containing personal information used to train artificial intelligence (AI) systems.

The tool, called the Machine Learning (ML) Privacy Meter, has been incorporated into the developer toolkit that Google uses to test the privacy protection features of AI algorithms.

In recent years, hackers have figured out how to reverse-engineer and reconstruct the data sets used to train AI systems through an increasingly common kind of attack called a membership inference (MI) attack.

Assistant Professor Reza Shokri, who heads the research team behind ML Privacy Meter, said such attacks involve hackers repeatedly asking the AI system for information, analysing the data for a pattern, and then using the pattern to guess if a data record was used to train the AI system.
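To make the idea concrete, the sketch below shows a minimal membership inference attack of the kind described: query a trained model repeatedly, look at how confident it is on each record, and guess that high-confidence records were part of the training data. The dataset, model, and confidence threshold are illustrative assumptions; this is not the ML Privacy Meter's own implementation.

```python
# Minimal sketch of a confidence-based membership inference (MI) attack.
# Dataset, model and threshold are assumptions for illustration only.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Split records into "members" (used for training) and "non-members" (held out).
X, y = load_breast_cancer(return_X_y=True)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# The target model is deliberately overfitted, so membership leaks through its confidence.
target = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_mem, y_mem)

def confidence(model, X, y):
    # Confidence the model assigns to the true label of each record.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_mem = confidence(target, X_mem, y_mem)
conf_non = confidence(target, X_non, y_non)

# The attacker guesses "member" whenever the confidence exceeds a threshold.
threshold = 0.9
guesses = np.concatenate([conf_mem, conf_non]) > threshold
truth = np.concatenate([np.ones(len(conf_mem)), np.zeros(len(conf_non))])
attack_accuracy = (guesses == truth).mean()
print(f"MI attack accuracy: {attack_accuracy:.2f} (0.5 = no better than random guessing)")
```

The attack succeeds to the extent the model behaves differently on data it has memorised than on data it has never seen, which is the weak spot Prof Shokri's analogy points to.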

Prof Shokri likened MI attacks to thieves probing for weak spots in a house's walls and doors with a needle before breaking in. "But the thief is not going to break in with the needle. Now that he knows (where the weak spots are), he is going to come with a hammer and break the wall," he said.

The ML Privacy Meter gives AI developers a scorecard showing how accurately attackers could recreate the original data sets, and suggests techniques to guard against actual MI attacks. The tool is the result of three years of work to create an easy-to-use way for programmers to see where the weak spots in their algorithms are, as sketched below.
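One common way to express such a scorecard is a single leakage number, for example the ROC AUC of the attacker's signal: 0.5 means the attacker learns nothing about membership, values near 1.0 signal heavy leakage. The sketch below uses per-record loss as the attack signal; the real ML Privacy Meter report format, and the model and data used here, are assumptions for illustration.

```python
# Hedged sketch: summarising membership-inference risk as a single "scorecard" number
# (ROC AUC of the attack signal). Not the ML Privacy Meter's actual report format.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_mem, y_mem)

def per_record_loss(model, X, y):
    # Cross-entropy loss on the true label; training members tend to have lower loss.
    proba = np.clip(model.predict_proba(X), 1e-12, 1.0)
    return -np.log(proba[np.arange(len(y)), y])

# Higher score = attacker more confident the record is a member (negated loss).
scores = np.concatenate([-per_record_loss(model, X_mem, y_mem),
                         -per_record_loss(model, X_non, y_non)])
labels = np.concatenate([np.ones(len(X_mem)), np.zeros(len(X_non))])

print(f"Membership leakage score (attack AUC): {roc_auc_score(labels, scores):.2f}")
```

A developer seeing a high leakage score could then apply the kinds of defences the tool suggests, such as stronger regularisation or privacy-preserving training, and re-run the audit.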

Google started using the tool earlier this year. The tool is open source, meaning it can be used free of charge by other researchers and companies around the world.

"Our main focus was to build an easy-to-use interface for anybody who knows machine learning, but might not know anything about privacy and cyber attacks," said Prof Shokri, who is Iranian by birth and moved to Singapore in 2017.

The NUS research team that developed the Machine Learning Privacy Meter also includes master's student Mihir Khandekar, 24, doctoral student Chang Hongyan, 24, research assistant Aadyaa Maddi, 22, and doctoral student Rishav Chourasia, 24.

CDO Trends, 18 November 2020
The Straits Times, 10 November 2020
NUS News, 10 November 2020