I am deeply
interested in theories of economic
collaboration and resource allocation; I
have spent a lot of time thinking about
*cooperative games*, a mathematical
model of agent collaboration, and using
algorithmic means to find solutions for
them.

Some recent works along this line explore collaboration under uncertainty: how can we decide which coalitions should form, and how payoffs should be allocated, given that we only observe some coalitions and their values? In another line of work, we study the classic economic problem of rent division: a group of roommates needs to decide how to allocate rooms and divide the rent among themselves. How should they do this in a fair manner? In a 2016 EC paper we provide theoretical and empirical answers to this question, and confirm our findings with user studies on real data!
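To make the rent-division setting concrete, here is a minimal Python sketch of the standard two-step approach (not the specific algorithm from the paper; the names, valuations and rent figure are invented for illustration): first find a room assignment that maximizes total reported value, then compute envy-free prices by solving the resulting difference constraints with Bellman-Ford-style relaxation.

```python
from itertools import permutations

# Hypothetical inputs: each roommate reports a dollar value for each room;
# each row sums to the total rent.
valuations = [
    [600, 400, 200],  # Alice
    [500, 500, 200],  # Bob
    [450, 350, 400],  # Carol
]
total_rent = 1200
n = len(valuations)

# Step 1: a welfare-maximizing assignment (agent i gets room assignment[i]).
assignment = max(permutations(range(n)),
                 key=lambda a: sum(valuations[i][a[i]] for i in range(n)))

# Step 2: envy-free prices. Envy-freeness requires, for all agents i and rooms j:
#   v[i][a_i] - p[a_i] >= v[i][j] - p[j],
# i.e. p[a_i] - p[j] <= v[i][a_i] - v[i][j] -- a system of difference
# constraints, which is feasible whenever the assignment maximizes welfare.
edges = [(j, assignment[i], valuations[i][assignment[i]] - valuations[i][j])
         for i in range(n) for j in range(n)]
prices = [0.0] * n
for _ in range(n):                       # Bellman-Ford relaxation
    for u, v, w in edges:
        if prices[u] + w < prices[v]:
            prices[v] = prices[u] + w

# Shift all prices so they sum to the rent; a uniform shift leaves the
# price differences, and hence envy-freeness, intact.
shift = (total_rent - sum(prices)) / n
prices = [p + shift for p in prices]
```

With these (made-up) valuations, Alice gets the first room, Bob the second and Carol the third, and no roommate would rather have another room at its computed price.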

Big data and
machine learning techniques are being increasingly
used to make decisions about important, and often sensitive, aspects of our lives, such as healthcare and finance.
These algorithms often *learn from data*; that is, they try to predict, say, someone's income level based on various
features, such as their age, salary or marital status.
These algorithms are often remarkably good at their job (hence their popularity): they process huge amounts of data and offer accurate predictions
that would otherwise have to be made by human decision makers working from partial, biased data (and taking far more time).
It is often thought that algorithms are unbiased, in the sense that they do not hold any prior opinions that affect their decisions.
In particular, we would not like our algorithms to base their predictions on sensitive features - such as ethnicity or gender.

So, did a big data algorithm base its decisions on "protected" user features? The problem is that in many cases it is very hard to tell: big data algorithms are often extremely complex, so we cannot be sure whether an algorithm used a protected feature (say, gender), or based its prediction on a correlated input (say, weight lifting ability).

Our research aims at developing formal methods that offer some transparency into the way algorithms use their inputs. Using tools from game theory, formal causality analysis and statistics, we offer influence measures that indicate how important a feature was in making a decision about an individual or a protected group. Our IJCAI 2015 paper offers theoretical insights into how such influence measures should be designed, whereas our 2016 Oakland paper studies how they can be used to generate user-readable transparency reports.
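As a toy illustration of what an input-influence measure can look like (a simplified randomized-intervention measure in the spirit of this line of work, not the exact definition from either paper; the classifier and dataset below are invented), one can ask: if we resample a single feature from its empirical marginal distribution, how often does the classifier's decision flip?

```python
def influence(model, data, feature):
    """Fraction of (input, replacement) pairs for which resampling
    `feature` from its empirical marginal flips the model's decision."""
    flips = trials = 0
    for x in data:
        for x_other in data:                # empirical marginal of the feature
            y = list(x)
            y[feature] = x_other[feature]   # intervene on this feature only
            trials += 1
            flips += model(x) != model(tuple(y))
    return flips / trials

# Invented example: a classifier that secretly uses only feature 0.
model = lambda x: x[0] >= 1
data = [(0, 5), (1, 0), (2, 7), (0, 2)]
```

Here `influence(model, data, 0)` is positive, while `influence(model, data, 1)` is exactly zero: feature 1 provably plays no role in the decisions, regardless of any correlations in the data.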

Weighted voting games (WVGs) are a simple model of parliamentary voting. In this setting, each party has a weight, and a coalition (a group of parties that joins together to support legislation or form a government) is called winning if its total weight meets or exceeds a certain quota (for example, more than 50% of the total votes). In this framework, one can naturally ask: how important, or influential, is a certain party? The Shapley value is a canonical way of measuring voting power in WVGs (and in coalitional games in general).
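For small games, the Shapley value can be computed directly from its definition: a party's value is the probability that it is *pivotal* (turns a losing coalition into a winning one) when the parties arrive in a uniformly random order. A minimal sketch, with made-up weights:

```python
from itertools import permutations
from math import factorial

def shapley(weights, quota):
    """Shapley values of a weighted voting game, by enumerating all orders."""
    n = len(weights)
    pivotal = [0] * n
    for order in permutations(range(n)):
        running = 0
        for party in order:
            if running < quota <= running + weights[party]:
                pivotal[party] += 1          # this party tips the coalition
            running += weights[party]
    return [count / factorial(n) for count in pivotal]

# Example: four parties with 4, 2, 2 and 1 seats; 5 of the 9 votes are needed.
print(shapley([4, 2, 2, 1], quota=5))
```

Note the gap between seats and power in this example: the largest party holds 4/9 of the seats but fully half of the Shapley voting power, while the one-seat party is exactly as powerful as the two-seat parties.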

In our 2011 paper, we study the relation between voting power and changes to the threshold: in other words, what happens to parties' influence if we change the number of votes needed to pass a bill? In a subsequent 2013 paper we prove bounds on how much voting power can change as the quota changes. Our recent papers in IJCAI 2016 and SAGT 2016 study the likely behavior of voting power under certain weight distributions.
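The sensitivity of power to the quota is visible even in tiny examples. The sketch below uses the raw Banzhaf index, another standard power measure that counts how often a party is *critical* to a winning coalition, with invented weights; moving the quota from a simple majority to a supermajority changes the power distribution dramatically:

```python
from itertools import combinations

def banzhaf(weights, quota):
    """Banzhaf indices: for each party, the fraction of coalitions
    containing it whose total weight drops below the quota without it."""
    n = len(weights)
    critical = [0] * n
    for size in range(1, n + 1):
        for coalition in combinations(range(n), size):
            total = sum(weights[i] for i in coalition)
            if total >= quota:               # winning coalition
                for i in coalition:
                    if total - weights[i] < quota:
                        critical[i] += 1     # i's departure breaks the win
    return [c / 2 ** (n - 1) for c in critical]

weights = [4, 2, 2, 1]
print(banzhaf(weights, quota=5))   # simple majority (5 of 9)
print(banzhaf(weights, quota=8))   # supermajority (8 of 9)
```

Under the supermajority quota the one-seat party becomes a dummy: it is never critical to any winning coalition, so its measured power drops to zero even though its weight is unchanged.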

Cooperative game theory studies the interaction among collaborative agents. These agents form coalitions, generate revenue and divide profits. Traditionally, agents' participation in coalitions is assumed to be exclusive: each agent may belong to a single coalition, and may not divide its resources among several groups. Our work studies an alternative model, in which each agent controls a resource that may be allocated to several coalitions. In other words, a coalition is not simply a group of players, but rather a vector listing the amount of resources contributed by each player.

Coalitional stability is a fundamental concept in cooperative game theory. Briefly, once players have formed coalitions and generated profits, we want to find a division of payoffs that ensures that every subset of players receives at least as much as it could make on its own. Translating this concept to cooperative games with overlapping coalitions is challenging: when deviating, a coalition of players may still maintain some relations with non-deviators (e.g. breaking up one joint project while keeping another); the non-deviators then need to decide how to react to the deviation. In fact, their degree of lenience (or harshness) can determine whether deviating is profitable at all. Our work studies the formal mathematical and computational model underlying this setting. We study various solution concepts for these games, compute optimal coalition structures, find core allocations, and explore sufficient conditions for stability. See our JAIR paper and ArXiv paper for more details.
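In the classical (non-overlapping) setting, the stability requirement is easy to state in code: an allocation is in the *core* when no coalition can earn more on its own than its members are paid. A brute-force check for small games (the characteristic functions below are standard textbook examples, not games from our papers):

```python
from itertools import combinations

def blocking_coalitions(value, payoff):
    """Coalitions that could profitably deviate from `payoff`;
    the list is empty exactly when `payoff` is in the core."""
    n = len(payoff)
    return [S
            for size in range(1, n + 1)
            for S in combinations(range(n), size)
            if value(frozenset(S)) > sum(payoff[i] for i in S) + 1e-9]

# Three-player majority game: any two players can earn 1 together.
majority = lambda S: 1 if len(S) >= 2 else 0
# An additive game: a coalition earns one unit per member.
additive = lambda S: len(S)
```

The equal split (1/3, 1/3, 1/3) is blocked in the majority game, since any pair could deviate and split 1 between its two members; indeed, this game's core is empty. In the additive game, paying each player 1 admits no blocking coalition.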

- Algorithmic Game Theory
- Cooperative Games
- Algorithmic Transparency
- Fair Division
- Computational Social Choice