expect_table_binary_label_model_bias
This Expectation is at the BETA maturity level.
Description
Expect fairness in a model by using Aequitas to calculate disparities among features, a score (binary or continuous), and a label (binary) in a table.
Using Aequitas, this Expectation compares predicted and true values to evaluate metrics that measure how a classifier model imposes bias on a given attribute group. The table must contain the columns score (binary or continuous) and label_value (binary). For more information, see https://dssg.github.io/aequitas/examples/compas_demo.html
expect_table_binary_label_model_bias is a Batch Expectation.
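For orientation, the sketch below shows the underlying Aequitas workflow this Expectation builds on, following the COMPAS demo linked above. The file name and the attribute columns (race, sex, age_cat) are illustrative assumptions, not part of this Expectation's API:

```python
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias
from aequitas.fairness import Fairness

# The audited table must contain 'score' (binary or continuous) and
# 'label_value' (binary), plus the attribute columns to audit.
df = pd.read_csv("compas_for_aequitas.csv")  # illustrative file name

# Cross-tabulate confusion-matrix-based group metrics per attribute value.
crosstab, _ = Group().get_crosstabs(df)

# Compute each group's disparities relative to the chosen reference groups.
bias_df = Bias().get_disparity_predefined_groups(
    crosstab,
    original_df=df,
    ref_groups_dict={"race": "Caucasian", "sex": "Male", "age_cat": "25 - 45"},
    alpha=0.05,
)

# Apply fairness thresholds to the disparities and summarize.
fairness = Fairness()
fairness_df = fairness.get_group_value_fairness(bias_df)
print(fairness.get_overall_fairness(fairness_df))
# e.g. {'Unsupervised Fairness': False, 'Supervised Fairness': False, 'Overall Fairness': False}
```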
Args:
- y_true (str): The column name of the actual y value. Must be binary.
- y_pred (str): The column name of the predicted y value. Must be binary or continuous.
Keyword Args:
- partial_success (boolean): If True, the Expectation will pass if Supervised or Unsupervised Fairness is observed, even if Overall Fairness is False.
- reference_group (dict): A JSON-serializable dictionary (nesting allowed) specifying, per attribute, the reference group against which disparities are computed. Ex: {'race': 'Caucasian', 'sex': 'Male', 'age_cat': '25 - 45'}.
- alpha (float): A float between 0 and 1 that sets the statistical significance level used when determining the significance of disparities. Default is 0.05.
Returns:
An ExpectationSuiteValidationResult
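A minimal usage sketch, assuming the contrib Expectation has been installed and registered; the great_expectations_experimental import path, the data-source setup, and the CSV file name are assumptions for illustration:

```python
import great_expectations as gx

# Assumed import path: importing the contrib module registers the Expectation.
from great_expectations_experimental.expectations.expect_table_binary_label_model_bias import (
    ExpectTableBinaryLabelModelBias,
)

context = gx.get_context()
# Illustrative data source; the CSV must contain 'score' and 'label_value'
# plus the attribute columns named in reference_group.
validator = context.sources.pandas_default.read_csv("compas_for_aequitas.csv")

result = validator.expect_table_binary_label_model_bias(
    y_true="label_value",   # binary ground-truth column
    y_pred="score",         # binary or continuous model-output column
    reference_group={"race": "Caucasian", "sex": "Male", "age_cat": "25 - 45"},
    partial_success=True,   # pass on Supervised or Unsupervised Fairness alone
    alpha=0.05,             # significance level for disparity tests
)
print(result.success)
```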
Want to make your own Expectation or an improvement to this one?
We've put together some great how-to guides (including videos) on creating your own Expectations in a flash!
You can see those resources here: Contributor Resources