rank
Methods to rank/order data by cleanlab’s label quality score.
Except for order_label_issues, which operates only on the subset of the data identified as potential label issues/errors, the methods in this module can be used on whichever subset of the dataset you choose (including the entire dataset) and provide a label quality score for every example. You can then call np.argsort(label_quality_scores) to obtain the ranked indices of individual examples.
CAUTION: These label quality scores are computed based on pred_probs from your model that must be out-of-sample! You should never provide predictions on the same examples used to train the model, as these will be overfit and unsuitable for finding label errors. To obtain out-of-sample predicted probabilities for every datapoint in your dataset, you can use cross-validation. Alternatively, it is okay if your model was trained on a separate dataset and you are only evaluating labels in data that was previously held-out.
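For instance, here is a minimal sketch of this workflow, using scikit-learn’s cross_val_predict to produce out-of-sample pred_probs (the classifier choice and the data X, labels are placeholder assumptions):

>>> import numpy as np
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import cross_val_predict
>>> from cleanlab.rank import get_label_quality_scores
>>> # X, labels = your features and (possibly noisy) integer class labels
>>> pred_probs = cross_val_predict(
...     LogisticRegression(), X, labels, cv=5, method="predict_proba"
... )  # out-of-sample predicted probabilities for every datapoint
>>> scores = get_label_quality_scores(labels, pred_probs)
>>> ranked_indices = np.argsort(scores)  # most suspicious labels first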
Functions:

get_confidence_weighted_entropy_for_each_label – Returns the “confidence weighted entropy” label-quality score for each datapoint.
get_label_quality_ensemble_scores – Returns label quality scores based on predictions from an ensemble of models.
get_label_quality_scores – Returns label quality scores for each datapoint.
get_normalized_margin_for_each_label – Returns the “normalized margin” label-quality score for each datapoint.
get_self_confidence_for_each_label – Returns the self-confidence label-quality score for each datapoint.
order_label_issues – Sorts label issues by label quality score.
 cleanlab.rank.get_confidence_weighted_entropy_for_each_label(labels: numpy.array, pred_probs: numpy.array) → numpy.array
Returns the “confidence weighted entropy” label-quality score for each datapoint.
This is a function to compute label-quality scores for classification datasets, where lower scores indicate labels less likely to be correct.
“Confidence weighted entropy” is the normalized entropy divided by “self-confidence”.
 Parameters
labels (np.array) – Labels in the same format expected by the get_label_quality_scores function.
pred_probs (np.array) – Predicted probabilities in the same format expected by the get_label_quality_scores function.
 Returns
label_quality_scores – An array of scores (between 0 and 1) for each example of its likelihood of being correctly labeled.
 Return type
np.array
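A minimal usage sketch with made-up toy values (lower scores flag the labels most likely to be wrong):

>>> import numpy as np
>>> from cleanlab.rank import get_confidence_weighted_entropy_for_each_label
>>> labels = np.array([0, 1, 1])  # toy noisy labels
>>> pred_probs = np.array(
...     [[0.9, 0.1],
...      [0.2, 0.8],
...      [0.6, 0.4]]
... )  # toy out-of-sample predicted probabilities
>>> scores = get_confidence_weighted_entropy_for_each_label(labels, pred_probs)
>>> # The third example (high entropy, low self-confidence) should score lowest.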
 cleanlab.rank.get_label_quality_ensemble_scores(labels: numpy.array, pred_probs_list: List[numpy.array], *, method: str = 'self_confidence', adjust_pred_probs: bool = False, weight_ensemble_members_by: str = 'accuracy', verbose: bool = True) → numpy.array
Returns label quality scores based on predictions from an ensemble of models.
This is a function to compute label-quality scores for classification datasets, where lower scores indicate labels less likely to be correct.
Ensemble scoring requires a list of pred_probs from each model in the ensemble.
For each pred_probs in the list, a label quality score is computed; the per-model scores are then averaged using the weighting scheme determined by weight_ensemble_members_by.
Score is between 0 and 1:
1 – clean label (given label is likely correct).
0 – dirty label (given label is likely incorrect).
 Parameters
labels (np.array) – Labels in the same format expected by the get_label_quality_scores function.
pred_probs_list (List[np.array]) – Each element in this list should be an array of pred_probs in the same format expected by the get_label_quality_scores function. Each element of pred_probs_list corresponds to the predictions from one model for all examples.
method ({"self_confidence", "normalized_margin", "confidence_weighted_entropy"}, default "self_confidence") – Label quality scoring method. See get_label_quality_scores for guidance on when to use each method.
adjust_pred_probs (bool, optional) – adjust_pred_probs in the same format expected by the get_label_quality_scores function.
weight_ensemble_members_by ({"uniform", "accuracy"}, default "accuracy") – Weighting scheme used to aggregate scores from each model:
"uniform": take the simple average of scores.
"accuracy": take a weighted average of scores, weighted by model accuracy.
verbose (bool, default True) – Set to False to suppress all print statements.
 Returns
label_quality_scores
 Return type
np.array
See also
get_label_quality_scores
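A sketch of typical usage, assuming two scikit-learn classifiers and placeholder data X, labels; each model’s pred_probs must be obtained out-of-sample, e.g. via cross-validation:

>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.linear_model import LogisticRegression
>>> from sklearn.model_selection import cross_val_predict
>>> from cleanlab.rank import get_label_quality_ensemble_scores
>>> models = [LogisticRegression(max_iter=1000), RandomForestClassifier()]
>>> pred_probs_list = [
...     cross_val_predict(m, X, labels, cv=5, method="predict_proba")
...     for m in models
... ]  # one out-of-sample pred_probs array per model
>>> scores = get_label_quality_ensemble_scores(
...     labels, pred_probs_list, weight_ensemble_members_by="accuracy"
... )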
 cleanlab.rank.get_label_quality_scores(labels: numpy.array, pred_probs: numpy.array, *, method: str = 'self_confidence', adjust_pred_probs: bool = False) → numpy.array
Returns label quality scores for each datapoint.
This is a function to compute label-quality scores for classification datasets, where lower scores indicate labels less likely to be correct.
Score is between 0 and 1:
1 – clean label (given label is likely correct).
0 – dirty label (given label is likely incorrect).
 Parameters
labels (np.array) – A discrete vector of noisy labels, i.e. some labels may be erroneous. Format requirements: for a dataset with K classes, labels must be in 0, 1, ..., K-1.
pred_probs (np.array, optional) – An array of shape (N, K) of model-predicted probabilities, P(label=k|x). Each row of this matrix corresponds to an example x and contains the model-predicted probabilities that x belongs to each possible class, for each of the K classes. The columns must be ordered such that these probabilities correspond to class 0, 1, ..., K-1.
Caution: pred_probs from your model must be out-of-sample! You should never provide predictions on the same examples used to train the model, as these will be overfit and unsuitable for finding label errors. To obtain out-of-sample predicted probabilities for every datapoint in your dataset, you can use cross-validation. Alternatively, it is okay if your model was trained on a separate dataset and you are only evaluating data that was previously held-out.
method ({"self_confidence", "normalized_margin", "confidence_weighted_entropy"}, default "self_confidence") – Label quality scoring method.
Letting k = labels[i] and P = pred_probs[i] denote the given label and predicted class-probabilities for datapoint i, its score can either be:
'normalized_margin': P[k] - max_{k' != k}[ P[k'] ]
'self_confidence': P[k]
'confidence_weighted_entropy': entropy(P) / self_confidence
Let C = {0, 1, ..., K-1} denote the classification task’s specified set of classes.
The normalized_margin score works better for identifying class conditional label errors, i.e. examples for which another label in C is appropriate but the given label is not.
The self_confidence score works better for identifying alternative label issues corresponding to bad examples that are: not from any of the classes in C, well-described by 2 or more labels in C, or generally just out-of-distribution (i.e. anomalous outliers).
adjust_pred_probs (bool, optional) – Account for class imbalance in the label-quality scoring by adjusting predicted probabilities via subtraction of class confident thresholds and renormalization. Set this to True if you prefer to account for class-imbalance. See Northcutt et al., 2021.
 Returns
label_quality_scores – Scores are between 0 and 1 where lower scores indicate labels less likely to be correct.
 Return type
np.array
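A toy example (the arrays are illustrative; pred_probs must be out-of-sample in real use):

>>> import numpy as np
>>> from cleanlab.rank import get_label_quality_scores
>>> labels = np.array([1, 0, 2])
>>> pred_probs = np.array(
...     [[0.1, 0.8, 0.1],
...      [0.5, 0.3, 0.2],
...      [0.3, 0.4, 0.3]]
... )
>>> scores = get_label_quality_scores(labels, pred_probs, method="normalized_margin")
>>> ranked = np.argsort(scores)  # indices from most to least suspect label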
 cleanlab.rank.get_normalized_margin_for_each_label(labels: numpy.array, pred_probs: numpy.array) → numpy.array
Returns the “normalized margin” label-quality score for each datapoint.
This is a function to compute label-quality scores for classification datasets, where lower scores indicate labels less likely to be correct.
Letting k denote the given label for a datapoint, the normalized margin is p(label = k) - max(p(label != k)), i.e. the probability of the given label minus the probability of the argmax label that is not the given label. This gives you an idea of how likely an example is BOTH its given label AND not another label, and therefore scores its likelihood of being a good label or a label error.
Normalized margin works better for finding class conditional label errors, where another label in the set of classes fits better than the given label.
 Parameters
labels (np.array) – Labels in the same format expected by the get_label_quality_scores function.
pred_probs (np.array) – Predicted probabilities in the same format expected by the get_label_quality_scores function.
 Returns
label_quality_scores – An array of scores (between 0 and 1) for each example of its likelihood of being correctly labeled.
normalized_margin = prob_label - max_prob_not_label
 Return type
np.array
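For instance, with toy values (the comment reflects the margin definition above):

>>> import numpy as np
>>> from cleanlab.rank import get_normalized_margin_for_each_label
>>> labels = np.array([0, 1])
>>> pred_probs = np.array(
...     [[0.7, 0.3],
...      [0.6, 0.4]]
... )
>>> scores = get_normalized_margin_for_each_label(labels, pred_probs)
>>> # The second example’s given label disagrees with the model’s argmax
>>> # (raw margin 0.4 - 0.6 < 0), so it should receive the lower score.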
 cleanlab.rank.get_self_confidence_for_each_label(labels: numpy.array, pred_probs: numpy.array) → numpy.array
Returns the self-confidence label-quality score for each datapoint.
This is a function to compute label-quality scores for classification datasets, where lower scores indicate labels less likely to be correct.
The self-confidence is the holdout probability that an example belongs to its given class label.
Self-confidence works better for finding out-of-distribution (OOD) examples, weird examples, bad examples, multi-label examples, and other types of label errors.
 Parameters
labels (np.array) – Labels in the same format expected by the get_label_quality_scores function.
pred_probs (np.array) – Predicted probabilities in the same format expected by the get_label_quality_scores function.
 Returns
label_quality_scores – An array of holdout probabilities that each example in pred_probs belongs to its label.
 Return type
np.array
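A toy sketch; conceptually, each score should correspond to the example’s row of pred_probs indexed at its given label:

>>> import numpy as np
>>> from cleanlab.rank import get_self_confidence_for_each_label
>>> labels = np.array([0, 2, 1])
>>> pred_probs = np.array(
...     [[0.8, 0.1, 0.1],
...      [0.3, 0.3, 0.4],
...      [0.2, 0.5, 0.3]]
... )
>>> scores = get_self_confidence_for_each_label(labels, pred_probs)
>>> manual = pred_probs[np.arange(len(labels)), labels]  # should match scores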
 cleanlab.rank.order_label_issues(label_issues_mask: numpy.array, labels: numpy.array, pred_probs: numpy.array, *, rank_by: str = 'self_confidence', rank_by_kwargs: dict = {}) → numpy.array
Sorts label issues by label quality score.
Default label quality score is “self_confidence”.
 Parameters
label_issues_mask (np.array) – A boolean mask for the entire dataset where True represents a label issue and False represents an example that is accurately labeled with high confidence.
labels (np.array) – Labels in the same format expected by the get_label_quality_scores function.
pred_probs (np.array of shape (N, K)) – Predicted probabilities in the same format expected by the get_label_quality_scores function.
rank_by (str, optional) – Score by which to order label error indices (in increasing order). See the method argument of get_label_quality_scores.
rank_by_kwargs (dict, optional) – Optional keyword arguments to pass into the get_label_quality_scores function. Accepted args include adjust_pred_probs.
 Returns
label_issues_idx – An array of the indices of the label issues, ordered by the label-quality scoring method passed to rank_by.
 Return type
np.array
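A sketch with a hand-made mask (in practice the mask would typically come from cleanlab’s label-issue detection; the values here are illustrative):

>>> import numpy as np
>>> from cleanlab.rank import order_label_issues
>>> label_issues_mask = np.array([False, True, True, False])
>>> labels = np.array([0, 1, 0, 1])
>>> pred_probs = np.array(
...     [[0.9, 0.1],
...      [0.8, 0.2],
...      [0.4, 0.6],
...      [0.1, 0.9]]
... )
>>> issue_indices = order_label_issues(
...     label_issues_mask, labels, pred_probs, rank_by="normalized_margin"
... )  # indices of the True entries, worst-quality label first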