Text Classification with Transformers and Datalab#

In this 5-minute quickstart tutorial, we use cleanlab to find potential label errors in an intent classification dataset composed of (text) customer service requests at an online bank. We consider a subset of the Banking77-OOS Dataset containing 1,000 customer service requests which can be classified into 10 categories corresponding to the intent of the request. Cleanlab automatically identifies bad examples in our dataset, including mislabeled data, out-of-scope examples (outliers), or otherwise ambiguous examples. Consider filtering or correcting such bad examples before you dive deep into modeling your data!

Overview of what we’ll do in this tutorial:

  • Use a pretrained transformer model to extract the text embeddings from the customer service requests

  • Train a simple Logistic Regression model on the text embeddings to compute out-of-sample predicted probabilities

  • Run cleanlab’s Datalab audit with these predictions and embeddings in order to identify problems like: label issues, outliers, and near duplicates in the dataset.

Quickstart

Already have (out-of-sample) pred_probs from a model trained on an existing set of labels? Maybe you have some numeric features as well? Run the code below to find any potential label errors in your dataset.

from cleanlab import Datalab

lab = Datalab(data=your_dataset, label_name="column_name_of_labels")
lab.find_issues(pred_probs=your_pred_probs, features=your_features)

lab.report()
lab.get_issues()

1. Install required dependencies#

You can use pip to install all packages required for this tutorial as follows:

!pip install scikit-learn sentence-transformers
!pip install "cleanlab[datalab]"
# Make sure to install the version corresponding to this tutorial
# E.g. if viewing master branch documentation:
#     !pip install git+https://github.com/cleanlab/cleanlab.git
[2]:
import re
import string
import pandas as pd
from sklearn.metrics import accuracy_score, log_loss
from sklearn.model_selection import cross_val_predict
from sklearn.linear_model import LogisticRegression
from sentence_transformers import SentenceTransformer

from cleanlab import Datalab

2. Load and format the text dataset#

[4]:
data = pd.read_csv("https://s.cleanlab.ai/banking-intent-classification.csv")
data.head()
[4]:
   text                                                                  label
0  i accidentally made a payment to a wrong account. what should i do?  cancel_transfer
1  i no longer want to transfer funds, can we cancel that transaction?  cancel_transfer
2  cancel my transfer, please.                                          cancel_transfer
3  i want to revert this mornings transaction.                          cancel_transfer
4  i just realised i made the wrong payment yesterday. can you please change it to the right account? it's my rent payment and really really needs to be in the right account by tomorrow  cancel_transfer
[5]:
raw_texts, labels = data["text"].values, data["label"].values
num_classes = len(set(labels))

print(f"This dataset has {num_classes} classes.")
print(f"Classes: {set(labels)}")
This dataset has 10 classes.
Classes: {'supported_cards_and_currencies', 'change_pin', 'card_payment_fee_charged', 'card_about_to_expire', 'apple_pay_or_google_pay', 'getting_spare_card', 'lost_or_stolen_phone', 'visa_or_mastercard', 'beneficiary_not_allowed', 'cancel_transfer'}

Let’s view the i-th example in the dataset:

[6]:
i = 1  # change this to view other examples from the dataset
print(f"Example Label: {labels[i]}")
print(f"Example Text: {raw_texts[i]}")
Example Label: cancel_transfer
Example Text: i no longer want to transfer funds, can we cancel that transaction?

The data is stored as two numpy arrays:

  1. raw_texts stores the customer service request utterances in text format

  2. labels stores the intent categories (labels) for each example

Bringing Your Own Data (BYOD)?

You can easily replace the above with your own text dataset, and continue with the rest of the tutorial.
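
For instance, here is a minimal sketch of loading your own CSV in place of the demo data (the file name and the "text"/"label" column names below are placeholders to adjust for your dataset):

your_data = pd.read_csv("your_dataset.csv")  # placeholder path: point this at your own file
raw_texts, labels = your_data["text"].values, your_data["label"].values  # adjust column names as needed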

Next we convert the text strings into vectors better suited as inputs for our ML models.

We will use numeric representations from a pretrained Transformer model as embeddings of our text. The Sentence Transformers library offers simple methods to compute these embeddings for text data. Here, we load the pretrained electra-small-discriminator model, and then run our data through the network to extract a vector embedding of each example.

[7]:
transformer = SentenceTransformer('google/electra-small-discriminator')
text_embeddings = transformer.encode(raw_texts)
No sentence-transformers model found with name /home/runner/.cache/torch/sentence_transformers/google_electra-small-discriminator. Creating a new one with MEAN pooling.
Some weights of the model checkpoint at /home/runner/.cache/torch/sentence_transformers/google_electra-small-discriminator were not used when initializing ElectraModel: ['discriminator_predictions.dense.bias', 'discriminator_predictions.dense.weight', 'discriminator_predictions.dense_prediction.weight', 'discriminator_predictions.dense_prediction.bias']
- This IS expected if you are initializing ElectraModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing ElectraModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

Our subsequent ML model will directly operate on elements of text_embeddings in order to classify the customer service requests.
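
As a quick sanity check, you can inspect the shape of the resulting embedding matrix; each row is the embedding vector for one request:

print(text_embeddings.shape)  # one row per example (1,000 here), one column per embedding dimension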

3. Define a classification model and compute out-of-sample predicted probabilities#

A typical way to leverage pretrained networks for a particular classification task is to add a linear output layer and fine-tune the network parameters on the new data. However, this can be computationally intensive. Alternatively, we can freeze the pretrained weights of the network and only train the output layer, without having to rely on GPU(s). Here we do this conveniently by fitting a scikit-learn linear model on top of the extracted embeddings.

To identify label issues, cleanlab requires a probabilistic prediction from your model for each datapoint. However, these predictions will be overfit (and thus unreliable) for datapoints the model was previously trained on. cleanlab is intended to only be used with out-of-sample predicted class probabilities, i.e. predictions for datapoints that were held out from the model during training.

Here we obtain out-of-sample predicted class probabilities for every example in our dataset using a Logistic Regression model with cross-validation.

[8]:
model = LogisticRegression(max_iter=400)

pred_probs = cross_val_predict(model, text_embeddings, labels, method="predict_proba")
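
Optionally, the accuracy_score and log_loss metrics imported earlier can be used to sanity-check these out-of-sample predictions before auditing. Here is a minimal sketch, assuming the columns of pred_probs follow the sorted order of the unique class labels (scikit-learn's convention):

import numpy as np

class_names = np.unique(labels)  # pred_probs columns follow this sorted class order
predicted_labels = class_names[pred_probs.argmax(axis=1)]

print(f"Cross-validated accuracy: {accuracy_score(labels, predicted_labels):.3f}")
print(f"Cross-validated log loss: {log_loss(labels, pred_probs, labels=class_names):.3f}")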

4. Use cleanlab to find issues in your dataset#

Given feature embeddings and the (out-of-sample) predicted class probabilities obtained from any model you have, cleanlab can quickly help you identify low-quality examples in your dataset.

Here, we use cleanlab’s Datalab to find issues in our data. Datalab offers several ways of loading the data; we’ll simply wrap the training features and noisy labels in a dictionary.

[9]:
data_dict = {"texts": raw_texts, "labels": labels}

All that is needed to audit your data is to call find_issues(). We pass in the predicted probabilities and the feature embeddings obtained above, though you do not necessarily need to provide all of this information, depending on which types of issues you are interested in. The more inputs you provide, the more types of issues Datalab can detect in your data. Using a better model to produce these inputs will ensure cleanlab more accurately estimates issues.

[10]:
lab = Datalab(data_dict, label_name="labels")
lab.find_issues(pred_probs=pred_probs, features=text_embeddings)
Finding label issues ...
Finding outlier issues ...
Fitting OOD estimator based on provided features ...
Finding near_duplicate issues ...
Audit complete. 87 issues found in the dataset.

After the audit is complete, review the findings using the report method:

[11]:
lab.report()
Here is a summary of the different kinds of issues found in the data:

    issue_type  num_issues
         label          44
       outlier          39
near_duplicate           4

Dataset Information: num_examples: 1000, num_classes: 10


----------------------- label issues -----------------------

About this issue:
        Examples whose given label is estimated to be potentially incorrect
    (e.g. due to annotation error) are flagged as having label issues.


Number of examples with this issue: 44
Overall dataset quality in terms of this issue: 0.9560

Examples representing most severe instances of this issue:
     is_label_issue  label_score              given_label           predicted_label
981            True     0.000005     card_about_to_expire  card_payment_fee_charged
974            True     0.000150  beneficiary_not_allowed                change_pin
982            True     0.000218  apple_pay_or_google_pay      card_about_to_expire
990            True     0.000331  apple_pay_or_google_pay   beneficiary_not_allowed
971            True     0.000512  beneficiary_not_allowed                change_pin


---------------------- outlier issues ----------------------

About this issue:
        Examples that are very different from the rest of the dataset
    (i.e. potentially out-of-distribution or rare/anomalous instances).


Number of examples with this issue: 39
Overall dataset quality in terms of this issue: 0.9120

Examples representing most severe instances of this issue:
     is_outlier_issue  outlier_score
994              True       0.676322
999              True       0.686193
989              True       0.711223
433              True       0.711974
990              True       0.713793


------------------ near_duplicate issues -------------------

About this issue:
        A (near) duplicate issue refers to two or more examples in
    a dataset that are extremely similar to each other, relative
    to the rest of the dataset.  The examples flagged with this issue
    may be exactly duplicated, or lie atypically close together when
    represented as vectors (i.e. feature embeddings).


Number of examples with this issue: 4
Overall dataset quality in terms of this issue: 0.0657

Examples representing most severe instances of this issue:
     is_near_duplicate_issue  near_duplicate_score                                 near_duplicate_sets  distance_to_nearest_neighbor
160                     True              0.006237  [148, 219, 234, 118, 201, 223, 125, 140, 978, 172]                      0.006237
148                     True              0.006237  [160, 219, 234, 223, 140, 118, 201, 125, 229, 978]                      0.006237
546                     True              0.006485  [514, 523, 570, 569, 458, 528, 757, 137, 539, 827]                      0.006485
514                     True              0.006485  [546, 523, 570, 757, 761, 569, 528, 527, 458, 137]                      0.006485
481                    False              0.008165  [475, 493, 486, 849, 466, 845, 434, 795, 468, 834]                      0.008165

Label issues#

The report indicates that cleanlab identified many label issues in our dataset. We can see which examples are flagged as likely mislabeled and the label quality score for each example using the get_issues method, specifying label as an argument to focus on label issues in the data.

[12]:
label_issues = lab.get_issues("label")
label_issues.head()
[12]:
   is_label_issue  label_score      given_label          predicted_label
0           False     0.806122  cancel_transfer          cancel_transfer
1           False     0.270942  cancel_transfer          cancel_transfer
2           False     0.695405  cancel_transfer          cancel_transfer
3           False     0.179731  cancel_transfer  apple_pay_or_google_pay
4           False     0.822717  cancel_transfer          cancel_transfer

This method returns a dataframe containing a label quality score for each example. These numeric scores lie between 0 and 1, where lower scores indicate examples more likely to be mislabeled. The dataframe also contains a boolean column specifying whether or not each example is identified to have a label issue (indicating it is likely mislabeled).

We can get the subset of examples flagged with label issues, and also sort by label quality score to find the indices of the 5 most likely mislabeled examples in our dataset.

[13]:
identified_label_issues = label_issues[label_issues["is_label_issue"] == True]
lowest_quality_labels = label_issues["label_score"].argsort()[:5].to_numpy()

print(
    f"cleanlab found {len(identified_label_issues)} potential label errors in the dataset.\n"
    f"Here are indices of the top 5 most likely errors: \n {lowest_quality_labels}"
)
cleanlab found 44 potential label errors in the dataset.
Here are indices of the top 5 most likely errors:
 [981 974 982 990 971]

Let’s review some of the most likely label errors.

Here we display the top 5 examples identified as the most likely label errors in the dataset, together with their given (original) label and a suggested alternative label from cleanlab.

[14]:
data_with_suggested_labels = pd.DataFrame(
    {"text": raw_texts, "given_label": labels, "suggested_label": label_issues["predicted_label"]}
)
data_with_suggested_labels.iloc[lowest_quality_labels]
[14]:
     text                                            given_label              suggested_label
981  i was charged for getting cash.                 card_about_to_expire     card_payment_fee_charged
974  can i change my pin on holiday?                 beneficiary_not_allowed  change_pin
982  will i be sent a new card before mine expires?  apple_pay_or_google_pay  card_about_to_expire
990  Connection Timed Out                            apple_pay_or_google_pay  beneficiary_not_allowed
971  please tell me how to change my pin.            beneficiary_not_allowed  change_pin

These are very clear label errors that cleanlab has identified in this data! Note that the given_label does not correctly reflect the intent of these requests; whoever produced this dataset made many mistakes that are important to address before modeling the data.
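
If you prefer not to manually relabel the flagged examples, one simple (if coarse) option is to drop them before retraining your model. The snippet below is just a sketch of that idea, not a required step:

# Sketch: drop examples cleanlab flagged as likely mislabeled (or relabel them by hand,
# e.g. starting from the suggested_label column shown above).
cleaned_data = data.drop(index=identified_label_issues.index)
print(f"{len(cleaned_data)} examples remain after dropping flagged label issues.")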

Outlier issues#

According to the report, our dataset contains some outliers. We can see which examples are outliers (and a numeric quality score quantifying how typical each example appears to be) via get_issues. We sort the resulting DataFrame by cleanlab’s outlier quality score to see the most severe outliers in our dataset.

[15]:
outlier_issues = lab.get_issues("outlier")
outlier_issues.sort_values("outlier_score").head()
[15]:
     is_outlier_issue  outlier_score
994              True       0.676322
999              True       0.686193
989              True       0.711223
433              True       0.711974
990              True       0.713793
[16]:
lowest_quality_outliers = outlier_issues["outlier_score"].argsort()[:5]

data.iloc[lowest_quality_outliers]
[16]:
     text                                                          label
994  (A AND NOT B) OR (C AND NOT D) OR (B AND NOT C AND D)        change_pin
999  636C65616E6C616220697320617765736F6D6521                     cancel_transfer
989  <p><samp>File not found.<br>Press F1 to continue</samp></p>  supported_cards_and_currencies
433  phone is gone                                                 lost_or_stolen_phone
990  Connection Timed Out                                          apple_pay_or_google_pay

We see that cleanlab has identified entries in this dataset that do not appear to be proper customer requests. Outliers in this dataset appear to be out-of-scope customer requests and other nonsensical text which does not make sense for intent classification. Carefully consider whether such outliers may detrimentally affect your data modeling, and consider removing them from the dataset if so.
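
For instance, here is a minimal sketch of dropping all flagged outliers (whether this is appropriate depends on your application):

# Sketch: remove examples flagged as outliers from the dataset.
outlier_indices = outlier_issues[outlier_issues["is_outlier_issue"]].index
data_without_outliers = data.drop(index=outlier_indices)
print(f"Dropped {len(outlier_indices)} outliers; {len(data_without_outliers)} examples remain.")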

Near-duplicate issues#

According to the report, our dataset contains some sets of nearly duplicated examples. We can see which examples are (nearly) duplicated (and a numeric quality score quantifying how dissimilar each example is from its nearest neighbor in the dataset) via get_issues. We sort the resulting DataFrame by cleanlab’s near-duplicate quality score to see the text examples in our dataset that are most nearly duplicated.

[17]:
duplicate_issues = lab.get_issues("near_duplicate")
duplicate_issues.sort_values("near_duplicate_score").head()
[17]:
     is_near_duplicate_issue  near_duplicate_score                                 near_duplicate_sets  distance_to_nearest_neighbor
160                     True              0.006237  [148, 219, 234, 118, 201, 223, 125, 140, 978, 172]                      0.006237
148                     True              0.006237  [160, 219, 234, 223, 140, 118, 201, 125, 229, 978]                      0.006237
546                     True              0.006485  [514, 523, 570, 569, 458, 528, 757, 137, 539, 827]                      0.006485
514                     True              0.006485  [546, 523, 570, 757, 761, 569, 528, 527, 458, 137]                      0.006485
481                    False              0.008165  [475, 493, 486, 849, 466, 845, 434, 795, 468, 834]                      0.008165

The results above show which examples cleanlab considers nearly duplicated (rows where is_near_duplicate_issue == True). Here, we see that examples 160 and 148 are nearly duplicated, as are examples 546 and 514.

Let’s view these examples to see how similar they are.

[18]:
data.iloc[[160, 148]]
[18]:
     text                                                         label
160  why was i charged an additional fee when paying with card?  card_payment_fee_charged
148  why was i charged an extra fee when paying with card?       card_payment_fee_charged
[19]:
data.iloc[[546, 514]]
[19]:
     text                                              label
546  do i have to go to the bank to change my pin?     change_pin
514  do i have to go into the bank to change my pin?   change_pin

We see that these two sets of requests are indeed very similar to one another! Including near duplicates in a dataset may have unintended effects on models; be wary about splitting them across training/test sets.
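
If you decide to deduplicate, one rough approach is to keep a single example from each flagged group. The sketch below assumes near_duplicate_sets lists each example's near-duplicate neighbors, as shown in the table above:

# Sketch: keep one example from each group of flagged near duplicates.
flagged = duplicate_issues[duplicate_issues["is_near_duplicate_issue"]]
to_drop = set()
for idx, row in flagged.iterrows():
    if idx in to_drop:
        continue  # already marked as a duplicate of an earlier example
    # drop this example's flagged neighbors, keeping the example itself
    to_drop.update(set(row["near_duplicate_sets"]) & set(flagged.index))

deduplicated_data = data.drop(index=list(to_drop))
print(f"Dropped {len(to_drop)} near-duplicate examples; {len(deduplicated_data)} remain.")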

As demonstrated above, cleanlab can automatically shortlist the most likely issues in your dataset to help you better curate your dataset for subsequent modeling. With this shortlist, you can decide whether to fix these label issues or remove nonsensical or duplicated examples from your dataset to obtain a higher-quality dataset for training your next ML model. cleanlab’s issue detection can be run with outputs from any type of model you initially trained.