Fairness Without Demographics in Repeated Loss Minimization

Revision as of 14:41, 19 October 2018 by Mpafla (talk | contribs)

This page contains a summary of the paper "Fairness Without Demographics in Repeated Loss Minimization" by Hashimoto, T. B., Srivastava, M., Namkoong, H., & Liang, P., which was published at the International Conference on Machine Learning (ICML) in 2018. In the following, an overview of the paper is given.

Overview of the Paper

Introduction

Fairness

Example and Problem Setup

Why Empirical Risk Minimization (ERM) does not work

Distributionally Robust Optimization (DRO)

Risk Bounding Over Unknown Groups

At this point, the goal is to minimize the worst-case group risk over a single time step, [math]\displaystyle{ \mathcal{R}_{max} (\theta^{(t)}) }[/math]. As previously mentioned, this is difficult because neither the population proportions [math]\displaystyle{ \{a_k\} }[/math] nor the group distributions [math]\displaystyle{ \{P_k\} }[/math] are known. Therefore, Hashimoto et al. developed an optimization technique that is robust "against all directions around the data generating distribution". This refers to the fact that distributionally robust optimization (DRO) hedges against every group distribution [math]\displaystyle{ P_k }[/math] whose population proportion [math]\displaystyle{ a_k }[/math] is at least some lower bound [math]\displaystyle{ a_{min} }[/math]: whenever [math]\displaystyle{ a_k \geq a_{min} }[/math], the DRO risk over a chi-squared divergence ball of radius [math]\displaystyle{ r_{max} = (1/a_{min} - 1)^2 }[/math] upper-bounds the risk of group [math]\displaystyle{ k }[/math], even though [math]\displaystyle{ k }[/math] itself is never identified.
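The chi-squared DRO risk has a convenient dual form (due to Duchi and Namkoong, and used in this paper): for a ball of radius [math]\displaystyle{ r }[/math], it equals [math]\displaystyle{ \min_\eta \sqrt{2r+1}\, \sqrt{E[(\ell - \eta)_+^2]} + \eta }[/math], which depends only on per-example losses. Below is a minimal NumPy/SciPy sketch of evaluating this bound from a sample of losses; the function name <code>dro_risk</code> and the use of a bounded scalar minimization over [math]\displaystyle{ \eta }[/math] are illustrative choices, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def dro_risk(losses, alpha_min):
    """Upper bound on the worst-case group risk via the chi-squared DRO dual.

    For a chi-squared ball of radius r = (1/alpha_min - 1)^2, the dual form is
        min_eta  sqrt(2r + 1) * sqrt(E[(loss - eta)_+^2]) + eta,
    which bounds the risk of any group whose proportion is >= alpha_min.
    """
    losses = np.asarray(losses, dtype=float)
    r = (1.0 / alpha_min - 1.0) ** 2
    c = np.sqrt(2.0 * r + 1.0)

    def dual(eta):
        # (loss - eta)_+ : only losses exceeding eta contribute.
        excess = np.maximum(losses - eta, 0.0)
        return c * np.sqrt(np.mean(excess ** 2)) + eta

    # The minimizing eta lies below the maximum loss; widen the lower
    # bracket a bit so the optimum is not clipped.
    res = minimize_scalar(dual,
                          bounds=(losses.min() - 1.0, losses.max()),
                          method="bounded")
    return res.fun
```

For intuition: if 90% of examples have loss 0.1 and a hidden 10% minority has loss 1.0, then `dro_risk(losses, alpha_min=0.1)` evaluates to roughly 1.0 — the minority group's risk — without ever knowing which examples belong to that group, whereas the plain average loss is only 0.19.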