A fairness library in PyTorch.

Project description

fairret - a fairness library in PyTorch

The goal of fairret is to serve as an open-source library for measuring and mitigating statistical fairness in PyTorch models.

The library is designed to be

  1. flexible in how fairness is defined and pursued.
  2. easy to integrate into existing PyTorch pipelines.
  3. clear in what its tools can and cannot do.

Central to the library is the paradigm of fairness regularization terms (fairrets), which quantify unfairness as differentiable PyTorch loss functions.

These can be minimized jointly with other losses, such as the binary cross-entropy loss, simply by adding them together!

Quickstart

It suffices to simply choose a statistic that should be equalized across groups and a fairret that quantifies the gap.

The model can then be trained as follows:

import torch.nn.functional as F
from fairret.statistic import PositiveRate
from fairret.loss import NormLoss

statistic = PositiveRate()
norm_fairret = NormLoss(statistic)

def train(model, optimizer, train_loader):
    for feat, sens, target in train_loader:
        optimizer.zero_grad()

        logit = model(feat)
        # Standard supervised loss on the logits.
        bce_loss = F.binary_cross_entropy_with_logits(logit, target)
        # Fairness regularization term that penalizes the gap in the chosen
        # statistic across the sensitive groups encoded in sens.
        fairret_loss = norm_fairret(logit, sens)
        # Minimize both objectives jointly by simply adding them together.
        loss = bce_loss + fairret_loss
        loss.backward()

        optimizer.step()
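
For context, a minimal usage sketch of this loop is given below. The toy linear model, optimizer, synthetic data, and the single-batch list standing in for a DataLoader are illustrative assumptions, not part of the fairret API.

import torch

# Illustrative assumptions: a toy linear model and synthetic data.
N, num_features = 32, 10
model = torch.nn.Linear(num_features, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

feat = torch.randn(N, num_features)
# One binary sensitive attribute, one-hot encoded to shape (N, 2).
sens = torch.nn.functional.one_hot(torch.randint(0, 2, (N,)), num_classes=2).float()
target = torch.randint(0, 2, (N, 1)).float()

# A single-batch list stands in for a torch DataLoader here.
train_loader = [(feat, sens, target)]

for epoch in range(10):
    train(model, optimizer, train_loader)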

No special data structure is required for the sensitive features. If the training batch contains $N$ elements, then sens should be a float tensor of shape $(N, d_s)$, where $d_s$ is the number of sensitive features. As with any categorical feature, categorical sensitive features are expected to be one-hot encoded.
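
For example, a single categorical sensitive attribute stored as integer codes can be brought into this format with torch.nn.functional.one_hot (the group count and values below are illustrative assumptions):

import torch
import torch.nn.functional as F

# Hypothetical example: one categorical sensitive attribute with three groups,
# stored as integer codes for a batch of N = 5 samples.
sens_codes = torch.tensor([0, 2, 1, 0, 2])

# One-hot encode and cast to float, giving the expected shape (N, d_s) = (5, 3).
sens = F.one_hot(sens_codes, num_classes=3).float()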

A notebook with a full example pipeline is provided here: simple_pipeline.ipynb.

We also host documentation.

Installation

The fairret library can be installed via PyPI:

pip install fairret

A minimal list of dependencies is provided in pyproject.toml.

To install the library from a local copy of the repository, run pip install ., which also installs the required packages.

Warning: AI fairness != fairness

There are many ways in which technical approaches to AI fairness, such as this library, are simplistic and limited in actually achieving fairness in real-world decision processes.

More information on these limitations can be found here or here.

Future plans

For now, the library maintains a core focus on fairrets, yet we plan to add more fairness tools that align with these design principles in the future. Such additions may involve breaking changes. At the same time, we'll keep reviewing the role of this library within the wider ecosystem of fairness toolkits.

Want to help? Please don't hesitate to open an issue, draft a pull request, or shoot an email to maarten.buyl@ugent.be.

Citation

This framework will be presented as a paper at ICLR 2024. If you found this library useful in your work, please consider citing it as follows:

@inproceedings{buyl2024fairret,
    title={fairret: a Framework for Differentiable Fairness Regularization Terms},
    author={Buyl, Maarten and Defrance, Marybeth and De Bie, Tijl},
    booktitle={International Conference on Learning Representations},
    year={2024}
}

