Let’s maximise influence, but in a fair way

23 January 2020

A few years ago, Yair Zick was attending a conference in Stockholm when he struck up a conversation with two researchers from the University of Southern California (USC). Zick, a computer scientist from NUS Computing, was investigating how the concepts of fairness and diversity could be applied to allocating public housing flats in Singapore. The USC researchers, Bryan Wilder and Milind Tambe, were interested in Zick’s work because they were trying to solve a resource allocation problem of their own.

Then based at the USC Center for Artificial Intelligence in Society, Wilder and Tambe were looking into how to raise HIV/AIDS awareness among homeless youth in Los Angeles. It is a sector of society that is particularly vulnerable to infection — homeless youth aged 25 and under are 20 times more likely to be HIV positive compared with stably housed youth, with reported HIV prevalence rates as high as 11.5 percent.

Key to lowering transmission rates is reducing the high-risk behaviours that many homeless youth engage in, such as sharing drug needles and having unprotected sex. The Center’s researchers were aware of this, but they were also keenly aware that the homeless drop-in centres they were working with had limited resources to tackle the problem.

“Ideally, you would want to reach every single person by yourself and send them to a programme,” says Zick, who began collaborating with his USC counterparts on the issue. “But the problem is that you don’t have enough social workers to go around to everybody, so instead you try to identify key members of the community to talk to them.”

The idea is simple: select and train a group of homeless youth to become leaders to their peers and spread HIV awareness to influence behaviour change. The execution is less so, because identifying the most “influential” youths in a social network is a rather complex task.

In academic speak, it is known as the influence maximisation problem. “It’s viral marketing, the same stuff that drives product placement on Facebook advertisements,” explains research fellow Alan Tsang, who works with Zick to study social networks and social choice.

“The goal is to find people who are well-connected in these networks so you can place your product with them and hopefully they will influence large parts of the network through word of mouth,” he says. “But instead of having a product, we’re trying to spread public health messages like HIV awareness.”

Essentially, the influence maximisation problem is about picking out the best people in the network to spread your desired message in the quickest and most efficient manner, says Tsang. “It’s a mathematical formulation.”

Zick elaborates. Imagine individuals in a social network as nodes and their connections to others as lines emanating from these nodes, he says. “Each connection has a number which is the probability of one person influencing another.”
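To make that formulation concrete, here is a minimal sketch in Python of how influence spread is often estimated on such a network and how seed members might be picked greedily. The names, edge probabilities, and the choice of an independent cascade style diffusion are illustrative assumptions, not details taken from the study.

```python
import random

# A toy social network: each directed edge (u, v) carries the probability
# that u influences v. The names and numbers are purely illustrative.
influence_prob = {
    ("Ana", "Ben"): 0.4,
    ("Ana", "Cai"): 0.2,
    ("Ben", "Cai"): 0.5,
    ("Cai", "Dee"): 0.3,
}
people = {person for edge in influence_prob for person in edge}

def expected_spread(seeds, trials=5000):
    """Estimate the expected number of people reached from the seed set,
    assuming each edge gets one chance to fire with its probability."""
    total = 0
    for _ in range(trials):
        reached = set(seeds)
        frontier = list(seeds)
        while frontier:
            u = frontier.pop()
            for (src, dst), p in influence_prob.items():
                if src == u and dst not in reached and random.random() < p:
                    reached.add(dst)
                    frontier.append(dst)
        total += len(reached)
    return total / trials

# Influence maximisation: choose k seed members who maximise expected spread.
# A simple greedy heuristic, shown here for k = 2.
seeds = set()
for _ in range(2):
    best = max(people - seeds, key=lambda v: expected_spread(seeds | {v}))
    seeds.add(best)
print(seeds, round(expected_spread(seeds), 2))
```

Real instances are far larger and the edge probabilities have to be estimated from data, which is what makes choosing the best seed set computationally hard.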

A new twist to an old problem

Studying how to maximise influence within a social network isn’t new — it has been looked into for close to two decades now. But the problem could use some revising, says Zick.

“I mean great, you’ve managed to get a good spread throughout the network, but if you’re completely skipping members of a certain racial minority, gender group, sexual orientation, and so on, then the outreach isn’t fair,” he says.

It’s important to ensure that the limited resources society has are allocated in a manner that reflects and respects the diverse composition of communities. You want to spread HIV awareness to as many homeless youths as possible, but at the same time you want to make sure that those who receive the message aren’t just from one particular segment in society.

Traditional approaches to the influence maximisation problem have so far failed to address the issue of group fairness head-on. “It just wasn’t something people thought of,” says Zick.

“So the big question is how can we select key figures in a population to make sure that all groups in the network are treated fairly?” he says.

Together with their USC colleagues, Zick and Tsang began searching for a solution in the summer of 2018. The answer, they reasoned, lay in defining a diversity constraint for each group in the network: in other words, how much influence or outreach a particular segment of society, for example black homeless youth from the LGBTQ community, should receive.

To determine the diversity constraint, the researchers consider two factors: the size of a group and how well-connected it is. “You want each group to receive a proportional share of the influence that’s being given out to them,” says Tsang. The larger the group, the more influence it should technically receive. But if a group is sparsely connected, it may be guaranteed less influence than an equally large, well-connected group, because influence seeded within it would not spread effectively if the group were left to its own devices.

“That’s how we measure fairness in this context,” says Zick. His team also considered how individuals can belong to more than one group in society. For example, a homeless youth might identify as being female, Latino, and homosexual.
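As a rough illustration of this idea, the sketch below (again in Python, with made-up groups, overlapping memberships, and numbers) checks whether each group’s expected reach meets a share proportional to its size. The researchers’ actual constraint is more refined, since it also accounts for how well-connected each group is.

```python
# A rough sketch of the fairness idea: each group should receive at least a
# share of the total expected influence proportional to its size. The groups,
# numbers, and threshold rule here are assumptions for illustration only.

# Group memberships can overlap: one person may belong to several groups.
groups = {
    "female": {"Ana", "Cai"},
    "latino": {"Ben", "Cai", "Dee"},
    "lgbtq":  {"Ana", "Dee"},
}

def check_fairness(expected_reach, population_size, total_influence):
    """expected_reach maps each group to the expected number of its members
    reached by the campaign; a group is treated fairly here if that number is
    at least its population share of the total influence handed out."""
    shortfalls = {}
    for name, members in groups.items():
        fair_share = (len(members) / population_size) * total_influence
        if expected_reach[name] < fair_share:
            shortfalls[name] = fair_share - expected_reach[name]
    return shortfalls  # an empty dict means every group got its fair share

# Example: a 4-person network where 3.0 people are reached in expectation.
reach = {"female": 1.6, "latino": 2.3, "lgbtq": 1.4}
print(check_fairness(reach, population_size=4, total_influence=3.0))
# The LGBTQ group falls short of its proportional share by about 0.1.
```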

An algorithm that allocates fairly

Using the diversity constraint as a basis, Zick’s team designed an algorithm to allocate resources fairly within a social network. They tested their approach using data from four real-world social networks of homeless youth in Los Angeles, measuring the spread of HIV awareness. Each network comprised 60 to 70 individuals, classified into groups based on their birth sex, gender identity, race, and sexual orientation.

The results, reported at last year’s International Joint Conference on Artificial Intelligence, demonstrated that the team’s diversity-constraint-based algorithm reduced unfair allocations by up to 65 percent compared with the current best influence maximisation algorithm. Furthermore, it did so at a relatively small cost to overall utility, and it ran quickly, taking minutes to hours.

“Our algorithm can ensure these groups receive a fair allocation of resources at relatively low cost,” says Zick.

In addition to exploring issues related to fairness, Zick and his team study how to ensure algorithms make the “right” decisions, as well as how people behave in collaborative versus competitive group environments.

Paper:
Group-Fairness in Influence Maximisation
