Aequitas Machine Learning Bias Audit Toolkit

This open-source tool helps organizations detect and mitigate bias when developing machine learning-based risk assessment tools.

Resource Details

Article and Tool
Machine learning, AI, and data science based predictive tools are increasingly being used for problems that can have a drastic impact on people's lives, in policy areas such as criminal justice, education, public health, workforce development, and social services. Recent work has raised concerns about the risk of unintended bias in these models unfairly affecting individuals from certain groups. While many bias metrics and fairness definitions have been proposed, there is no consensus on which definitions and metrics should be used in practice to evaluate and audit these systems, and there has been very little empirical work on applying these measures to real-world problems, especially in public policy.

Aequitas is an open-source bias audit toolkit developed by the Center for Data Science and Public Policy at the University of Chicago. It can be used to audit the predictions of machine learning-based risk assessment tools, understand the different types of bias they exhibit, and make informed decisions about developing and deploying such systems.
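To make the idea of a bias audit concrete, the sketch below computes one common group-level metric, the false positive rate (FPR), for each demographic group and then the disparity ratio of each group's FPR relative to a chosen reference group. This is a minimal, self-contained illustration of the kind of per-group metric and disparity computation a toolkit like Aequitas performs; the function names, the toy data, and the choice of FPR as the metric are this example's assumptions, not the Aequitas API.

```python
from collections import defaultdict

def group_fpr(records):
    """Per-group false positive rate from (group, predicted_label, true_label) rows.

    Labels are binary 0/1; FPR = false positives / actual negatives.
    (Illustrative helper, not part of the Aequitas API.)
    """
    fp = defaultdict(int)    # false positives per group
    neg = defaultdict(int)   # actual negatives per group
    for group, pred, label in records:
        if label == 0:
            neg[group] += 1
            if pred == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

def fpr_disparity(metrics, reference_group):
    """Ratio of each group's FPR to the reference group's FPR; 1.0 means parity."""
    ref = metrics[reference_group]
    return {g: fpr / ref for g, fpr in metrics.items()}

# Toy audit: group "b" is falsely flagged at twice the rate of group "a".
data = [
    ("a", 1, 0), ("a", 0, 0), ("a", 0, 0), ("a", 0, 0),  # FPR for a = 1/4
    ("b", 1, 0), ("b", 1, 0), ("b", 0, 0), ("b", 0, 0),  # FPR for b = 2/4
]
metrics = group_fpr(data)
disparity = fpr_disparity(metrics, "a")
print(metrics)    # {'a': 0.25, 'b': 0.5}
print(disparity)  # {'a': 1.0, 'b': 2.0}
```

A disparity ratio far from 1.0 (for example, the 2.0 above) is the kind of signal an audit surfaces; deciding which metric matters (FPR, false discovery rate, predicted positive rate, etc.) and what threshold counts as unfair is a policy choice the toolkit leaves to the user.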

