Balancing out Bias: Achieving Fairness Through Balanced Training

Xudong Han, Timothy Baldwin, Trevor Cohn


Abstract
Group bias in natural language processing tasks manifests as disparities in system error rates across texts authored by different demographic groups, typically disadvantaging minority groups. Dataset balancing has been shown to be effective at mitigating bias; however, existing approaches do not directly account for correlations between author demographics and linguistic variables, limiting their effectiveness. To achieve Equal Opportunity fairness, such as equal job opportunity without regard to demographics, this paper introduces a simple but highly effective objective for countering bias using balanced training. We extend the method in the form of a gated model, which incorporates protected attributes as input, and show that it is effective at reducing bias in predictions through demographic input perturbation, outperforming all other bias mitigation techniques when combined with balanced training.
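The abstract's core idea of balanced training can be sketched as instance reweighting: each training example is weighted inversely to the frequency of its (label, protected attribute) combination, so every demographic group contributes equally to the loss within each class. The snippet below is a minimal illustration under that assumption, not the paper's released implementation; the function name `balanced_weights` and the toy data are hypothetical.

```python
from collections import Counter

import torch
import torch.nn.functional as F


def balanced_weights(labels, groups):
    """Per-instance weights inversely proportional to the frequency of the
    instance's (label, protected-group) combination, so each demographic
    group contributes equally to the loss within every class.
    (Illustrative sketch; the paper's exact objective may differ.)"""
    joint = Counter(zip(labels, groups))
    n, k = len(labels), len(joint)
    return torch.tensor(
        [n / (k * joint[(y, g)]) for y, g in zip(labels, groups)],
        dtype=torch.float,
    )


# Toy example: binary task, two demographic groups, skewed joint distribution.
y = [1, 1, 1, 0, 0, 0, 0, 0]     # class labels
g = [0, 0, 1, 0, 1, 1, 1, 1]     # protected attribute (e.g. binary gender)
w = balanced_weights(y, g)

logits = torch.randn(8, 2, requires_grad=True)  # stand-in for model outputs
per_instance = F.cross_entropy(logits, torch.tensor(y), reduction="none")
loss = (w * per_instance).mean()  # balanced training objective
loss.backward()
```

The gated variant described in the abstract additionally feeds the protected attribute to the model as an input, which allows bias to be probed by perturbing that input at prediction time.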
Anthology ID:
2022.emnlp-main.779
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
11335–11350
URL:
https://aclanthology.org/2022.emnlp-main.779
DOI:
10.18653/v1/2022.emnlp-main.779
Cite (ACL):
Xudong Han, Timothy Baldwin, and Trevor Cohn. 2022. Balancing out Bias: Achieving Fairness Through Balanced Training. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11335–11350, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Balancing out Bias: Achieving Fairness Through Balanced Training (Han et al., EMNLP 2022)
PDF:
https://aclanthology.org/2022.emnlp-main.779.pdf
Software:
2022.emnlp-main.779.software.zip