Toyota Technological Institute at Chicago
June 16, 2020
In this talk I will discuss two lines of work involving learning in the presence of biased data and strategic behavior.

In the first, we ask whether fairness constraints on learning algorithms can actually improve the accuracy of the classifier produced when training data is unrepresentative or corrupted due to bias. Typically, fairness constraints are analyzed as a tradeoff against classical objectives such as accuracy. Our results show that there are natural scenarios where they can instead be a win-win, helping to improve overall accuracy.

In the second line of work we consider strategic classification: settings where the entities being measured and classified wish to be classified as positive (e.g., college admissions) and will try to modify their observable features, if possible, to make that happen. We consider this in the online setting, where a particular challenge is that updates made by the learning algorithm also change how future inputs behave.
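The strategic-classification dynamic described in the abstract can be illustrated with a small toy sketch (my own illustration, not from the talk): an agent facing a one-dimensional linear threshold classifier will shift its observable feature just far enough to be labeled positive, but only when the cost of moving is worth the gain from a positive label. The function name, cost model, and parameters below are all assumptions for illustration.

```python
# Toy sketch of a strategic agent's best response to a threshold classifier.
# Not the talk's model: the linear cost, the `gain` parameter, and the
# one-dimensional feature are simplifying assumptions for illustration.

def best_response(x, threshold, cost_per_unit=1.0, gain=2.0):
    """Return the feature value the agent reports.

    The agent moves up to `threshold` only if the movement cost
    (distance * cost_per_unit) is at most the value `gain` of being
    classified positive; otherwise it reports its true feature x.
    """
    if x >= threshold:
        return x          # already classified positive; no reason to move
    needed = threshold - x
    if needed * cost_per_unit <= gain:
        return threshold  # move just enough to cross the decision boundary
    return x              # too costly to game; report truthfully

# With threshold 5.0: an agent at 4.0 games (cost 1.0 <= gain 2.0),
# while an agent at 2.0 stays put (cost 3.0 > gain 2.0).
print(best_response(4.0, 5.0))  # 5.0
print(best_response(2.0, 5.0))  # 2.0
```

Note that in the online setting the abstract describes, each time the learner moves the threshold, the set of agents who find gaming worthwhile changes, which is exactly the feedback loop that makes the problem challenging.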