
Eliminating Bias in Machine Learning for Real World Decision-Making Aids

Motivated by problems experienced in under-served communities and under-resourced settings, we are working to define and quantify fairness in machine learning. We want to assist in designing fair decision-making systems that rely on historical data, for example, a machine learning algorithm that helps decide whom to hire for a job or whom to screen for breast cancer. In such applications, we must recognize the potential for bias both in the data and in the algorithm.

Goals

Automated tools may result in discriminatory decision-making in the sense that they may treat individuals unfairly or unequally based on membership in a particular category or minority group, resulting in disparate treatment. This may happen when the training dataset is itself biased (e.g., if individuals belonging to a particular group have historically been discriminated against). However, it may also happen when the training dataset is unbiased, if the errors made by the system affect individuals belonging to a category or minority differently (e.g., if misclassification rates for Blacks are higher than for Whites).
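
To make this concrete, error-rate disparities of the kind described above can be audited by comparing misclassification rates across groups defined by a protected attribute. The following is a minimal sketch in Python; the function name and data are illustrative, not taken from our systems.

    import numpy as np

    def groupwise_error_rates(y_true, y_pred, group):
        """Misclassification rate for each value of the protected attribute."""
        y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
        return {
            str(g): float(np.mean(y_true[group == g] != y_pred[group == g]))
            for g in np.unique(group)
        }

    # Toy example: the classifier errs twice as often on group "B" as on group "A".
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
    group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(groupwise_error_rates(y_true, y_pred, group))  # {'A': 0.25, 'B': 0.5}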

Methods

Broadly speaking, one can distinguish between two types of discrimination: 1) disparate treatment (i.e., direct discrimination) and 2) disparate impact (i.e., indirect discrimination). Disparate treatment consists of rules that impose different treatment on individuals who are similarly situated and differ only in their protected characteristics (e.g., race, gender, sexual orientation). Disparate impact, on the other hand, arises when a rule does not explicitly use sensitive attributes to decide treatment but nonetheless results in systematically different handling of individuals from protected groups.
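
Disparate impact is commonly quantified by comparing the rates at which different groups receive the favorable outcome, for instance via the "four-fifths rule," which flags ratios below 0.8. The sketch below is purely illustrative; the function and data are our own example, not the project's code.

    import numpy as np

    def disparate_impact_ratio(decisions, group, protected, reference):
        """P(decision = 1 | protected group) / P(decision = 1 | reference group)."""
        decisions, group = np.asarray(decisions), np.asarray(group)
        rate_protected = np.mean(decisions[group == protected])
        rate_reference = np.mean(decisions[group == reference])
        return float(rate_protected / rate_reference)

    # Toy hiring example (1 = hired): the protected group is hired at one third
    # the rate of the reference group, well below the 0.8 threshold.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
    groups    = ["ref"] * 5 + ["prot"] * 5
    print(disparate_impact_ratio(decisions, groups, "prot", "ref"))  # ~0.33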

Our purpose is to design ML algorithms that do not suffer from disparate treatment or disparate impact.

There are three general approaches to designing fair ML algorithms. First, pre-processing approaches modify the data to eliminate or neutralize any preexisting bias and then apply standard ML techniques; however, they cannot eliminate bias arising from the algorithm itself. Second, post-processing approaches adjust, after training, the predictors learned with standard ML techniques to improve their fairness properties. The third type of approach, which most closely relates to our work, is in-processing: it adds a fairness regularizer to the loss function, which penalizes discrimination and thereby mitigates disparate treatment or disparate impact.
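
As a concrete illustration of the in-processing idea, one can augment a standard classification loss with a penalty on the gap between the average predicted scores of the two groups (a demographic-parity-style regularizer). The sketch below uses PyTorch and is a generic example under our own assumptions, not the specific formulation used in our work.

    import torch

    def fair_logistic_loss(scores, y, group, lambda_fair=1.0):
        """Binary cross-entropy plus a penalty on the group-wise mean-score gap."""
        bce = torch.nn.functional.binary_cross_entropy_with_logits(scores, y)
        probs = torch.sigmoid(scores)
        gap = probs[group == 1].mean() - probs[group == 0].mean()
        return bce + lambda_fair * gap.pow(2)

    # Toy training loop: a linear classifier on synthetic data in which group 1
    # historically received positive outcomes more often.
    torch.manual_seed(0)
    X = torch.randn(200, 5)
    group = (torch.rand(200) < 0.5).long()
    y = ((X[:, 0] + 0.8 * group + 0.3 * torch.randn(200)) > 0).float()

    w = torch.zeros(5, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([w, b], lr=0.1)
    for _ in range(500):
        opt.zero_grad()
        loss = fair_logistic_loss(X @ w + b, y, group, lambda_fair=5.0)
        loss.backward()
        opt.step()

Increasing lambda_fair shrinks the gap in predicted positive rates at some cost in accuracy, which is precisely the trade-off an in-processing method must manage.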

Project examples

We have proposed interpretable machine learning frameworks for learning optimal and fair decision trees for non-discriminatory decision making, enabling the transition of automated data-driven decision-making systems into socially sensitive settings (e.g., deciding whom to admit into a degree program or whom to prioritize for public housing).
