Su et al. Intell Robot 2022;2(3):24474 | http://dx.doi.org/10.20517/ir.2022.17
from the counterfactual world that belongs to another sensitive group. Kim et al. [54] addressed the limita-
tion that some causality-based methods cannot distinguish between information caused by the intervention
(i.e., sensitive variables) and information related to the intervention by decomposing external uncertainty into
intervention-independent variables and intervention-related ones. They proposed a method called DCEVAE,
which can estimate the total effect and counterfactual effects in the absence of a full causal graph.
5.3. Post-processing causality-based methods
Post-processing methods modify the outcome of the decision model to make fairer decisions [55,56] . For exam-
ple, Wu et al. [57] adopted the c-component factorization to decompose the counterfactual quantity, identified
the sources of the unidentifiable terms, and derived lower and upper bounds on counterfactual fairness in
unidentifiable situations. In the post-processing stage, they reconstructed the trained decision model so as
to achieve counterfactual fairness. The counterfactual privilege algorithm [58] maximizes the overall benefit
while preventing any individual from obtaining, due to their sensitive attributes, a beneficial effect that exceeds
a given threshold, so that the classifier achieves counterfactual fairness. Mishler et al. [59] suggested using doubly
robust estimators to post-process a trained binary predictor in order to achieve approximate counterfactual
equalized odds.
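As a hedged illustration of the general idea of capping counterfactual benefit (a toy sketch, not the actual algorithm of [58]), a post-processing step could clip each individual's predicted score so that it never exceeds, by more than a threshold, the score the same individual would receive in the counterfactual world with the sensitive attribute flipped. The function name, the threshold `tau`, and the toy scores below are all illustrative assumptions:

```python
import numpy as np

def cap_counterfactual_benefit(p_factual, p_counterfactual, tau):
    """Clip each score so that its advantage over the counterfactual score
    is at most tau; scores at or below the counterfactual pass through."""
    excess = p_factual - p_counterfactual
    capped = np.where(excess > tau, p_counterfactual + tau, p_factual)
    return np.clip(capped, 0.0, 1.0)  # keep scores valid probabilities

p_f = np.array([0.9, 0.6, 0.4])     # scores with the observed sensitive value
p_cf = np.array([0.5, 0.55, 0.45])  # scores under the counterfactual value
print(cap_counterfactual_benefit(p_f, p_cf, tau=0.1))
# → [0.6 0.6 0.4]: only the first score's benefit exceeded the threshold
```

Because this adjustment operates purely on the model's outputs, it can be applied to any trained predictor, which is exactly the flexibility that characterizes post-processing mechanisms.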
5.4. Which mechanism to use
We discussed the various mechanisms for enhancing fairness above. Here, we further compare these mechanisms
and examine their respective advantages and disadvantages, providing insight into how to select a suitable
mechanism for a given scenario based on the characteristics of these mechanisms.
Every type of mechanism has its advantages and disadvantages.
The pre-processing mechanism can be flexibly adapted to the downstream tasks since it can be used with any
classification algorithm. However, because pre-processing is a general mechanism whose extracted features
must remain applicable to a wide range of algorithms, there is high uncertainty about the accuracy of the
resulting decision models.
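To make the pre-processing idea concrete, here is a hedged toy sketch (not a method from the survey): each feature column is residualized on the sensitive attribute via ordinary least squares, so the transformed features are linearly uncorrelated with it and can then be fed to any downstream classifier. The data and function names are illustrative assumptions:

```python
import numpy as np

def residualize(x, s):
    """Remove the linear effect of sensitive attribute s from every column
    of the feature matrix x by keeping the OLS residuals."""
    S = np.column_stack([np.ones_like(s, dtype=float), s.astype(float)])
    beta, *_ = np.linalg.lstsq(S, x, rcond=None)  # per-column regression on s
    return x - S @ beta

rng = np.random.default_rng(1)
s = rng.integers(0, 2, 500)                       # hypothetical sensitive attribute
x = rng.normal(size=(500, 3)) + 2.0 * s[:, None]  # features shifted by s
x_fair = residualize(x, s)

# after residualizing, each column's correlation with s is numerically zero
corr = [abs(np.corrcoef(x_fair[:, j], s)[0, 1]) for j in range(3)]
print(all(c < 1e-8 for c in corr))  # → True
```

The transformed features can be reused by any classifier, which illustrates both the flexibility of pre-processing and its drawback: the transformation is chosen without knowing which downstream model will consume it, so the accuracy impact is hard to predict.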
Similar to the pre-processing mechanism, the post-processing mechanism can also be used flexibly with any
decision model. Post-processing mechanisms can more easily eliminate discrimination from decision models
entirely, but the accuracy of the decision models depends on the performance obtained in the training
stage [60]. Furthermore, post-processing mechanisms require access to all of an individual's information at
test time, which may be unavailable for privacy reasons.
The in-processing mechanism makes it possible to balance the accuracy and fairness of the decision model by
explicitly modulating a trade-off parameter in the objective function. However, such mechanisms are tightly
coupled to the machine learning algorithm itself and can be difficult to optimize in practice.
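The trade-off parameter mentioned above can be sketched as follows (a minimal illustrative example, not any specific method from the survey): the objective adds a fairness penalty, here a demographic-parity gap, to a standard logistic loss, weighted by a parameter `lam`. All names and the toy data are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
s = rng.integers(0, 2, n)                 # hypothetical sensitive attribute
x = rng.normal(size=(n, 2)) + s[:, None]  # features correlated with s
y = (x[:, 0] + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def objective(w, lam):
    """Logistic loss plus lam times a group-fairness penalty."""
    p = sigmoid(x @ w)
    eps = 1e-9
    loss = -np.mean(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
    gap = abs(p[s == 1].mean() - p[s == 0].mean())  # demographic-parity gap
    return loss + lam * gap

w = np.array([1.0, 0.0])
# raising lam penalizes the group gap more heavily, trading accuracy for fairness
print(objective(w, 0.0) <= objective(w, 5.0))  # → True: the penalty is nonnegative
```

Because the penalty sits inside the training objective, the resulting model is specific to this loss and optimizer, which is precisely the tight coupling noted above.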
Based on the above discussion and the studies that attempt to comprehend which mechanism is best to use in
certain situations [61,62] , we can say that there is no single mechanism that outperforms the others in all cases,
and the choice of suitable mechanisms depends on the availability of sensitive variables during testing, the
characteristics of the dataset, and the desired fairness measure in the application. For example, when there
exists evident selection bias in a dataset, the pre-processing mechanism is a better choice than the in-processing
one. Therefore, more research is needed to develop robust fairness mechanisms or to design suitable
mechanisms for practical scenarios.