

→ Income → Loan.


Similar to no unresolved discrimination, no proxy discrimination [28] also focuses on indirect discrimination. Given a causal graph, this criterion requires that the effect of the sensitive attribute S on the output Ŷ not be transmitted through any proxy variable P (also referred to as a redlining variable). A proxy variable is a descendant of the sensitive attribute S and an ancestor of the decision attribute Ŷ; it is labeled a proxy because it is exploited to capture the information of S. The outcome of an automated decision-making system Ŷ exhibits no proxy discrimination if the following equality holds for every potential proxy P:


P(Ŷ | do(P = p₀)) = P(Ŷ | do(P = p₁))   ∀ p₀, p₁ ∈ dom(P)        (6)



In other words, this notion implies that changing the value of the proxy P should not have any impact on the prediction. A simple example is shown in Figure 5. ZipCode is a redlining variable because it reflects the information of the sensitive attribute Race. There is no proxy discrimination in the causal graph shown in Figure 5(c), since the causal path Race → ZipCode → Loan has been blocked by intervening on ZipCode.
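
As a rough illustration of how the interventional condition in Equation (6) can be checked, the sketch below builds a toy linear structural causal model of the Race → ZipCode → Loan example and compares the prediction under two interventions on the proxy. All structural equations, coefficients, and function names are invented for illustration and are not taken from the survey.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def simulate(do_zipcode=None, predictor_uses_zipcode=True):
    # Toy linear SCM for Race -> ZipCode -> Loan (coefficients are made up).
    # do_zipcode is not None -> apply do(ZipCode = z), cutting the Race -> ZipCode edge.
    race = rng.binomial(1, 0.5, n)                        # sensitive attribute S
    if do_zipcode is None:
        zipcode = 2.0 * race + rng.normal(0.0, 1.0, n)    # proxy (redlining) variable P
    else:
        zipcode = np.full(n, float(do_zipcode))
    income = rng.normal(5.0, 1.0, n)
    # Predictor Y_hat: one variant uses the proxy, the other ignores it.
    y_hat = 1.2 * income + (0.8 * zipcode if predictor_uses_zipcode else 0.0)
    return y_hat

# Eq. (6): no proxy discrimination iff P(Y_hat | do(ZipCode = z0)) = P(Y_hat | do(ZipCode = z1)).
for uses_proxy in (True, False):
    m0 = simulate(do_zipcode=0.0, predictor_uses_zipcode=uses_proxy).mean()
    m1 = simulate(do_zipcode=2.0, predictor_uses_zipcode=uses_proxy).mean()
    print(f"predictor uses ZipCode = {uses_proxy}: "
          f"E[Y_hat | do(z=0)] = {m0:.2f}, E[Y_hat | do(z=2)] = {m1:.2f}")

The predictor that reads ZipCode yields different interventional means (proxy discrimination), while the one that ignores it satisfies the equality in Equation (6).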


No unresolved discrimination is a flawed definition of fairness. Specifically, the no unresolved discrimination criterion is unable to identify some counterfactually unfair scenarios in which certain attributes are treated as resolved attributes. On the other hand, policy makers and domain experts should carefully examine the relationships between the sensitive variables and the other endogenous variables so as to discover all resolving attributes and potential proxies through which discrimination may spread.

4.2. Individual causality-based fairness notions
Different from group fairness notions, which measure differences in the outcome of decision models between advantaged and disadvantaged groups, individual fairness notions examine whether the outcome of decision models is fair to each individual in the population. Some representative individual causality-based fairness notions are discussed here.

4.2.1. Counterfactual fairness
An outcome Ŷ achieves counterfactual fairness towards an individual (i.e., O = o) if the probability of the outcome Ŷ = y for that individual is the same as the probability of Ŷ = y for the same individual had the value of the sensitive attribute been changed to another one. Formally, counterfactual fairness can be expressed as follows for any context O = o:

|P(ŷ_{s⁺} | O = o, S = s⁻) − P(ŷ_{s⁻} | O = o, S = s⁻)| ≤ τ        (7)
where O ⊆ V \ {S, Y} is a subset of the endogenous variables excluding the sensitive variables and the decision variables. Any context O = o represents a certain sub-group of the population; in particular, when O = V \ {S, Y}, it represents a specific individual. According to Equation (7), the decision model achieves counterfactual fairness if, for every possible individual (O = o, S = s⁻) in the entire population, the probability distribution of the outcome Ŷ is the same in both the actual (S = s⁻) and counterfactual (S = s⁺) worlds.
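
For concreteness, the following sketch evaluates the check in Equation (7) for a single individual via the usual abduction-action-prediction steps, assuming a fully specified additive-noise structural causal model; the structural equations, the predictor, and the threshold τ are all invented for illustration.

# All equations and numbers below are assumed for illustration, not taken from the survey.

def f_income(s, u_income):
    return 1.5 * s + u_income            # structural equation O := f_O(S, U_O)

def predictor(s, income):
    return 0.5 * s + 0.7 * income        # decision model Y_hat(S, O)

def counterfactual_prediction(s_obs, income_obs, s_cf):
    # Abduction: recover the exogenous noise consistent with the observation.
    u_income = income_obs - 1.5 * s_obs
    # Action + prediction: recompute O and Y_hat with S set to its counterfactual value.
    return predictor(s_cf, f_income(s_cf, u_income))

s_minus, s_plus = 0, 1        # observed (s-) and counterfactual (s+) sensitive values
income_obs = 4.0              # observed context O = o
tau = 0.05                    # tolerated gap in Eq. (7)

y_actual = predictor(s_minus, income_obs)
y_counterfactual = counterfactual_prediction(s_minus, income_obs, s_plus)
print(abs(y_actual - y_counterfactual) <= tau)   # False here: S affects Y_hat directly and via O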
Counterfactual fairness was proposed by Kusner et al. [11]. They empirically tested whether automated decision-making systems are counterfactually fair by generating samples given the observed sensitive attribute value and its counterfactual value; then, they fitted decision models to both the original and the counterfactual sampled data and examined the differences between the prediction distributions obtained on the original and the counterfactual data. If an outcome Ŷ is fair, the predicted distributions for the actual and counterfactual data are expected to lie exactly on top of each other.
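
As a simplified, dataset-level sketch of this kind of test (not the experimental setup of Kusner et al.), one can generate a counterfactual copy of the data from an assumed structural causal model, fit a simple predictor on the original data, and compare its prediction distributions on the original and counterfactual inputs. Every structural equation and coefficient below is made up for illustration.

import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# "Observed" data drawn from an assumed additive-noise SCM.
s = rng.binomial(1, 0.5, n)                         # sensitive attribute
o = 1.5 * s + rng.normal(0.0, 1.0, n)               # mediator O depends on S
y = 0.5 * s + 0.7 * o + rng.normal(0.0, 0.2, n)     # observed outcome

# Counterfactual data: abduct U_O = O - 1.5 * S, then flip S and propagate.
s_cf = 1 - s
o_cf = 1.5 * s_cf + (o - 1.5 * s)

# A least-squares predictor fitted on the original data stands in for the decision model.
X = np.column_stack([np.ones(n), s, o])
w, *_ = np.linalg.lstsq(X, y, rcond=None)
pred_orig = X @ w
pred_cf = np.column_stack([np.ones(n), s_cf, o_cf]) @ w

# A counterfactually fair predictor would give coinciding distributions;
# here the mean shift reveals the dependence on the sensitive attribute.
print(f"mean prediction, original data:       {pred_orig.mean():.3f}")
print(f"mean prediction, counterfactual data: {pred_cf.mean():.3f}")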