Fairness Without Demographics in Repeated Loss Minimization

Revision as of 18:25, 23 October 2018 by S2ganapa (talk | contribs) (Added Critiques and related works section)

This page contains a summary of the paper "Fairness Without Demographics in Repeated Loss Minimization" by Hashimoto, T. B., Srivastava, M., Namkoong, H., & Liang, P., which was published at the International Conference on Machine Learning (ICML) in 2018.


Machine learning models are usually trained to minimize their average loss in order to achieve high overall accuracy. While this works well for the majority of users, minority groups that use the system suffer high error rates because they contribute less data to the model. This phenomenon is known as representation disparity and has been observed in many models, for instance those that recognize faces or identify languages. The disparity grows over time: minority users who suffer higher error rates become less likely to use the system in the future, so minority groups shrink further and even less of their data is available for the next optimization of the model. With less data, the disparity becomes worse still - a feedback loop referred to as disparity amplification.

In this paper, Hashimoto et al. provide a strategy for controlling the worst-case risk across all groups. They first show that standard empirical risk minimization (ERM) does not control the loss of minority groups and thus causes representation disparity and its amplification over time (even if the model is fair at the beginning). Second, they propose distributionally robust optimization (DRO) to mitigate this unfairness. Hashimoto et al. show that DRO can bound the loss of minority groups at every time step, and they demonstrate on an Amazon Mechanical Turk task that DRO remains fair in settings where ERM becomes unfair.

Note on Fairness

Hashimoto et al. follow the difference principle to achieve and measure fairness. It is defined as the maximization of the welfare of the worst-off group, rather than of overall welfare (cf. utilitarianism).

Related Works

The following summarizes recent advances in fairness in machine learning:

1. Rawls' difference principle - holds that maximizing the welfare of the worst-off group is fair and stable over time.

2. The use of race in recidivism prediction.

3. Guaranteeing fairness for a protected label through constraints such as equalized odds, disparate impact, and calibration.

4. Individual notions of fairness using a fixed similarity function or subgroups of a set of protected labels.

5. Online settings, such as fairness in bandit learning and the analysis of fairness over time under constraint-based fairness criteria.

Representation Disparity

If a user makes a query [math]Z \sim P[/math], the model [math]\theta \in \Theta[/math] makes a prediction, and the user experiences loss [math]\ell (\theta; Z)[/math]. The expected loss of a model [math]\theta[/math] is denoted as the risk [math]\mathcal{R}(\theta) = \mathbb{E}_{Z \sim P} [\ell (\theta; Z)] [/math]. While the queries (i.e. users using the system and providing data to the model) are made by users from [math]K[/math] latent groups such that [math]Z \sim P := \sum_{k \in [K]} \alpha_kP_k[/math], neither the actual population proportions [math]\alpha_k[/math] nor the group distributions [math]P_k[/math] are known. Therefore, Hashimoto et al.'s goal is to control the worst-case risk over all groups [math]K[/math] (and not just the risk of the worst-off group):

\begin{align} \mathcal{R}_{max}(\theta) := \underset{k \in [K]}{max} \mathcal{R}_k(\theta), \qquad \mathcal{R}_k(\theta) := \mathbb{E}_{P_k} [\ell (\theta; Z)] \end{align}

There is high representation disparity if the expected loss of the model [math]\mathcal{R}(\theta)[/math] is low, but the worst-case risk [math]\mathcal{R}_{max}(\theta)[/math] of one group is high. This means that a model with high representation disparity performs well on average (i.e. has low overall loss), but fails to represent some groups [math]k[/math] (i.e. the risk for the worst-off group is high). In order to make models with high representation disparity fairer, Hashimoto et al. aim to minimize this worst-case risk [math]\mathcal{R}_{max}(\theta)[/math] directly.
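The difference between average risk and worst-case risk can be illustrated with a minimal numerical sketch. All numbers below (two groups, a 90/10 split, and the per-group loss distributions) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K = 2 latent groups with unequal proportions.
# Group 0 is the majority (90%), group 1 the minority (10%).
alphas = np.array([0.9, 0.1])

# Per-example losses for some fixed model theta, one array per group
# (illustrative numbers: the minority suffers much higher loss on average).
losses = [rng.normal(0.2, 0.05, 900) ** 2, rng.normal(1.0, 0.1, 100) ** 2]

# Per-group risks R_k(theta) and the average risk R(theta) = sum_k alpha_k R_k
group_risks = np.array([l.mean() for l in losses])
avg_risk = float(alphas @ group_risks)

# Worst-case risk R_max(theta): the quantity Hashimoto et al. aim to control
worst_case_risk = float(group_risks.max())

print(f"average risk:    {avg_risk:.3f}")
print(f"worst-case risk: {worst_case_risk:.3f}")
```

Here the average risk looks small because the majority dominates the mixture, while the worst-case risk exposes the poorly served minority group.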

Disparity Amplification

Representation disparity can amplify as time passes and loss is minimized. Over [math]t = 1, 2, ..., T[/math] minimization rounds the group proportions [math]\alpha_k^{(t)}[/math] vary depending on past losses. At each round the expected number of users [math]\lambda_k^{(t+1)}[/math] from group [math]k[/math] is determined by

\begin{align} \lambda_k^{(t+1)} := \lambda_k^{(t)} \nu(\mathcal{R}_k(\theta^{(t)})) + b_k \end{align}

where [math]\lambda_k^{(t)} \nu(\mathcal{R}_k(\theta^{(t)}))[/math] describes the fraction of users retained from the previous optimization and [math]b_k[/math] is the number of new users of group [math]k[/math]. Put simply, the expected number of users of a group depends on the number of new users of that group and on the fraction of users who continue to use the system after the previous optimization step. If fewer users from minority groups return to the system (i.e. the model has a low retention rate for minority users), Hashimoto et al. argue, the representation disparity amplifies.
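The user dynamics above can be simulated directly. The retention function below ([math]\nu(r) = e^{-r}[/math]) and all counts are hypothetical choices for illustration; the paper only requires [math]\nu[/math] to be decreasing in the risk:

```python
import math

def nu(risk):
    # Hypothetical retention function: strictly decreasing in risk
    # (exp(-risk) chosen purely for illustration).
    return math.exp(-risk)

def update_users(lam, risks, b):
    # lambda_k^{(t+1)} = lambda_k^{(t)} * nu(R_k(theta^{(t)})) + b_k
    return [l * nu(r) + bk for l, r, bk in zip(lam, risks, b)]

# Illustrative numbers: group 1 (the minority) experiences higher risk,
# so it retains fewer of its users each round.
lam = [900.0, 100.0]   # current expected user counts lambda_k^{(t)}
b = [90.0, 10.0]       # new-user rates b_k
risks = [0.1, 1.5]     # per-group risks R_k of the current model

for t in range(5):
    lam = update_users(lam, risks, b)
    print(f"round {t}: minority fraction = {lam[1] / sum(lam):.3f}")
```

Even with the risks held fixed, the minority fraction shrinks round after round; in the paper the effect compounds because shrinking proportions also worsen the minority's risk.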

Empirical Risk Minimization (ERM)

In practice, however, it is hard to control the worst-case risk over all time periods [math]\mathcal{R}_{max}^T[/math] for all groups without knowledge of the population proportions [math]\alpha_k^{(t)}[/math], the new user rate [math]b_k[/math], and the retention function [math]\nu[/math]. The standard approach is therefore to fit a sequence of models [math]\theta^{(t)}[/math] by empirical approximation. Using ERM, for instance, the model at each step minimizes the empirical loss:

\begin{align} \theta^{(t)} = \underset{\theta \in \Theta}{\arg\min} \sum_i \ell(\theta; Z_i^{(t)}) \end{align}

However, ERM fails to prevent disparity amplification. Because minimizing the expected loss of the model favors the majority group, minority groups experience higher loss and do not return to use the system. As a result, the population proportions [math]\alpha_k^{(t)}[/math] shift, and minority groups contribute even less data to the system. This is mirrored in the expected user count [math]\lambda^{(t)}[/math] at each optimization step. In their paper, Hashimoto et al. show that under ERM [math]\lambda^{(t)}[/math] is unstable because it loses its fair fixed point (i.e. the population fraction at which risk minimization maintains the same population fraction over time). Therefore, ERM fails to control minority risk over time and is considered unfair.
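A toy example shows how ERM favors the majority. In the one-dimensional sketch below (all numbers are illustrative assumptions, not from the paper), each group's queries cluster around its own center, and the squared-loss ERM solution is the overall sample mean, which is pulled toward the majority:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D setting: each group wants the model theta to sit near its own
# centre; the loss is ell(theta; z) = (theta - z)^2.
majority = rng.normal(0.0, 0.1, 900)   # 90% of queries, centred at 0
minority = rng.normal(3.0, 0.1, 100)   # 10% of queries, centred at 3
data = np.concatenate([majority, minority])

# ERM minimizes the empirical average loss; for squared loss the
# minimizer is the sample mean, dominated by the majority group.
theta_erm = data.mean()

risk_majority = np.mean((theta_erm - majority) ** 2)
risk_minority = np.mean((theta_erm - minority) ** 2)
print(f"theta_ERM = {theta_erm:.2f}")
print(f"majority risk = {risk_majority:.2f}, minority risk = {risk_minority:.2f}")
```

The ERM model sits close to the majority center, so the minority's risk is orders of magnitude larger; in the repeated setting this drives minority users away and shrinks their proportion further.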

Distributionally Robust Optimization (DRO)

To overcome the unfairness of ERM, Hashimoto et al. propose distributionally robust optimization (DRO). The goal is still to minimize the worst-case group risk over a single time step [math]\mathcal{R}_{max} (\theta^{(t)}) [/math] (time steps are omitted in this section's formulas). As previously mentioned, this is difficult because neither the population proportions [math]\alpha_k [/math] nor the group distributions [math]P_k [/math] are known. Therefore, Hashimoto et al. develop an optimization technique that is robust "against all directions around the data generating distribution". That is, DRO is robust to any group distribution [math]P_k [/math] whose loss other optimization techniques such as ERM might fail to control. To achieve this distributional robustness, the optimization's risk function [math]\mathcal{R}_{dro} [/math] has to up-weight data [math]Z[/math] that cause high loss [math]\ell(\theta, Z)[/math]. In other words, the risk function has to over-represent mixture components (i.e. group distributions [math]P_k [/math]) that suffer high loss relative to their original mixture weights (i.e. the population proportions [math]\alpha_k [/math]).

To do this, Hashimoto et al. consider the worst-case loss (i.e. the highest risk) over all perturbations around [math]P[/math] within a certain limit (because obviously not every outlier should be up-weighted). This limit is described by the [math]\chi^2[/math]-divergence (roughly speaking, a distance) between probability distributions. For two distributions [math]P[/math] and [math]Q[/math], the divergence is defined as [math]D_{\chi^2} (P || Q):= \int (\frac{dP}{dQ} - 1)^2 dQ[/math]. With the help of the [math]\chi^2[/math]-divergence, Hashimoto et al. define the chi-squared ball [math]\mathcal{B}(P,r)[/math] around the probability distribution [math]P[/math] as [math]\mathcal{B}(P,r) := \{Q \ll P : D_{\chi^2} (Q || P) \leq r \}[/math]. This ball makes it possible to consider the worst-case loss over all distributions [math]Q[/math] that lie inside the ball (i.e. within a reasonable range) around [math]P[/math]. This loss is given by

\begin{align} \mathcal{R}_{dro}(\theta, r) := \underset{Q \in \mathcal{B}(P,r)}{sup} \mathbb{E}_Q [\ell(\theta;Z)] \end{align}

which, for [math]P:= \sum_{k \in [K]} \alpha_k P_k[/math] and all models [math]\theta \in \Theta[/math], bounds the risk of each group as [math]\mathcal{R}_k(\theta) \leq \mathcal{R}_{dro} (\theta; r_k)[/math] with [math]r_k := (1/\alpha_k -1)^2[/math]. Furthermore, if a lower bound on the group proportions [math]\alpha_{min} \leq min_{k \in [K]} \alpha_k[/math] is specified and the radius is set to [math]r_{max} := (1/\alpha_{min} -1)^2[/math], the worst-case risk [math]\mathcal{R}_{max} (\theta) [/math] is upper-bounded by [math]\mathcal{R}_{dro} (\theta; r_{max}) [/math], which can then be minimized.
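For discrete distributions the chi-squared divergence and ball membership can be computed directly. The sketch below uses a two-point example with [math]\alpha = (0.9, 0.1)[/math] (illustrative numbers) and checks that the minority component [math]P_1[/math] lies inside [math]\mathcal{B}(P, r_1)[/math] for [math]r_1 = (1/\alpha_1 - 1)^2[/math]:

```python
import numpy as np

def chi2_divergence(p, q):
    # D_chi2(P || Q) = sum_z q(z) * (p(z)/q(z) - 1)^2 for discrete P, Q.
    # Requires P absolutely continuous w.r.t. Q (q > 0 wherever p > 0).
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(q * (p / q - 1.0) ** 2))

def in_ball(q_dist, p_dist, r):
    # Membership in the ball B(P, r) = {Q << P : D_chi2(Q || P) <= r}
    return chi2_divergence(q_dist, p_dist) <= r

p = [0.9, 0.1]    # mixture P with proportions alpha = (0.9, 0.1)
p1 = [0.0, 1.0]   # the minority component P_1 as a point mass

# r_1 = (1/alpha_1 - 1)^2 = 81 for alpha_1 = 0.1
r1 = (1 / 0.1 - 1) ** 2

print(f"D_chi2(P_1 || P) = {chi2_divergence(p1, p):.1f}, r_1 = {r1:.1f}")
print(f"P_1 in B(P, r_1): {in_ball(p1, p, r1)}")
```

Here [math]D_{\chi^2}(P_1 \| P) = 9[/math], comfortably inside the radius [math]r_1 = 81[/math], so the supremum defining [math]\mathcal{R}_{dro}(\theta; r_1)[/math] ranges over [math]P_1[/math] in particular, which is why it upper-bounds the minority risk.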

Optimization of DRO

To minimize [math]\mathcal{R}_{dro}(\theta, r) := \underset{Q \in \mathcal{B}(P,r)}{sup} \mathbb{E}_Q [\ell(\theta;Z)][/math], Hashimoto et al. consider the dual of this maximization problem (by duality, the supremum over [math]Q[/math] can be rewritten as an equivalent minimization). This dual is the minimization problem

\begin{align} \mathcal{R}_{dro}(\theta, r) = \underset{\eta \in \mathbb{R}}{inf} \left\{ F(\theta; \eta):= C\left(\mathbb{E}_P \left[ [\ell(\theta;Z) - \eta]_+^2 \right] \right)^\frac{1}{2} + \eta \right\} \end{align}

with [math]C = (2(1/\alpha_{min} - 1)^2 + 1)^{1/2}[/math], where [math]\eta[/math] is the dual variable (i.e. the variable introduced when forming the dual). Since [math]F(\theta; \eta)[/math] involves an expectation [math]\mathbb{E}_P[/math] over the data generating distribution [math]P[/math], it can be minimized directly from samples. For convex losses [math]\ell(\theta;Z)[/math], [math]F(\theta; \eta)[/math] is convex and can be minimized by performing a binary search over [math]\eta[/math]. In their paper, Hashimoto et al. further show that optimizing [math]\mathcal{R}_{dro}(\theta, r_{max})[/math] at each time step controls the future worst-case risk [math]\mathcal{R}_{max} (\theta) [/math], and therefore the retention rates. That is, if the initial group proportions satisfy [math]\alpha_k^{(0)} \geq \alpha_{min}[/math] and [math]\mathcal{R}_{dro}(\theta, r_{max})[/math] is optimized at every time step (so that [math]\mathcal{R}_{max} (\theta) [/math] is minimized), then [math]\mathcal{R}_{max}^T (\theta) [/math] over all time steps is controlled. In other words, optimizing [math]\mathcal{R}_{dro}(\theta, r_{max})[/math] at every time step is enough to avoid disparity amplification.
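The dual objective [math]F(\theta; \eta)[/math] can be evaluated and minimized over [math]\eta[/math] on a sample of losses. The sketch below uses a ternary search (standing in for the binary search the paper describes, and valid because [math]F[/math] is convex in [math]\eta[/math]); the loss sample and [math]\alpha_{min} = 0.1[/math] are illustrative assumptions:

```python
import numpy as np

def dual_objective(losses, eta, C):
    # F(theta; eta) = C * ( E_P[ [ell - eta]_+^2 ] )^{1/2} + eta,
    # with the expectation replaced by a sample average.
    excess = np.maximum(losses - eta, 0.0)
    return C * np.sqrt(np.mean(excess ** 2)) + eta

def dro_risk(losses, alpha_min, tol=1e-6):
    # R_dro(theta; r_max) = inf_eta F(theta; eta),
    # with C = (2 * (1/alpha_min - 1)^2 + 1)^{1/2}.
    C = np.sqrt(2 * (1 / alpha_min - 1) ** 2 + 1)
    # F is convex in eta, so a simple ternary search locates the infimum.
    lo, hi = losses.min() - 1.0, losses.max() + 1.0
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if dual_objective(losses, m1, C) < dual_objective(losses, m2, C):
            hi = m2
        else:
            lo = m1
    return dual_objective(losses, (lo + hi) / 2, C)

rng = np.random.default_rng(0)
# Illustrative sample: 90% low-loss majority, 10% high-loss minority examples
losses = np.concatenate([rng.uniform(0.0, 0.2, 900), rng.uniform(0.9, 1.1, 100)])

r_dro = dro_risk(losses, alpha_min=0.1)
print(f"average risk: {losses.mean():.3f}, DRO risk bound: {r_dro:.3f}")
```

Unlike the average risk, the DRO value sits above the minority group's risk, illustrating the upper bound [math]\mathcal{R}_k(\theta) \leq \mathcal{R}_{dro}(\theta; r_{max})[/math] that makes minimizing [math]F[/math] a fair surrogate objective.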


Critiques

This paper works on representation disparity, which is a critical problem to contribute to. The methods are well developed and the paper reads coherently. However, the authors make several limiting assumptions that are not very intuitive or scientifically justified. The first assumption is that the retention function [math]\nu[/math], denoting the fraction of users retained, is differentiable and strictly decreasing; this does not seem practical. The second is that user counts follow a Poisson distribution; no explanation for this assumption is given, and the reasons hinted at are hand-wavy at best. Moreover, although the authors build a case against empirical risk minimization, ERM is exactly solvable when the data is linearly separable, whereas DRO is computationally more complex, and it is not entirely clear whether DRO will always have an advantage on different classes of problems.