Text Classification with TensorFlow, Keras, and Cleanlab#

In this 5-minute quickstart tutorial, we use cleanlab to find potential label errors in a text classification dataset of IMDB movie reviews. This dataset contains 50,000 text reviews, each labeled with a binary sentiment polarity label indicating whether the review is positive (1) or negative (0). cleanlab will shortlist hundreds of the examples that most confuse our ML model; many of these are potential label errors, edge cases, or otherwise ambiguous examples.

Overview of what we’ll do in this tutorial:

  • Build a simple TensorFlow & Keras neural network and wrap it with cleanlab’s KerasWrapperSequential. This wrapper class makes any Keras/TensorFlow model compatible with scikit-learn (and some advanced cleanlab functionality like CleanLearning is easier to run with scikit-learn-compatible models).

  • Use CleanLearning to automatically compute out-of-sample predicted probabilities and identify potential label errors with the find_label_issues method.

  • Train a more robust version of the same neural network after dropping the identified label errors using CleanLearning.

Quickstart

Already have an sklearn-compatible model, data, and given labels? Run the code below to train your model and get label issues using CleanLearning.

You can subsequently use the same CleanLearning object to train a more robust model (only trained on the clean data) by calling the .fit() method and passing in the label_issues found earlier.

from cleanlab.classification import CleanLearning

cl = CleanLearning(model)
label_issues = cl.find_label_issues(train_data, labels)  # identify mislabeled examples

cl.fit(train_data, labels, label_issues=label_issues)
preds = cl.predict(test_data)  # predictions from a version of your model
                               # trained on auto-cleaned data

Is your model/data not compatible with CleanLearning? You can instead run cross-validation on your model to get out-of-sample pred_probs. Then run the code below to get label issue indices ranked by their inferred severity.

from cleanlab.filter import find_label_issues

ranked_label_issues = find_label_issues(
    labels,
    pred_probs,
    return_indices_ranked_by="self_confidence",
)
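If your model follows the scikit-learn API, one way to obtain out-of-sample pred_probs is with scikit-learn’s cross_val_predict (a minimal sketch; model, train_data, and labels stand in for your own objects):

from sklearn.model_selection import cross_val_predict

# Each example's probabilities come from a fold whose model never saw that example
# during training, which is what makes them "out-of-sample".
pred_probs = cross_val_predict(model, train_data, labels, cv=3, method="predict_proba")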

1. Install required dependencies#

You can use pip to install all packages required for this tutorial as follows:

!pip install scikit-learn tensorflow tensorflow-datasets
!pip install cleanlab
# Make sure to install the version corresponding to this tutorial
# E.g. if viewing master branch documentation:
#     !pip install git+https://github.com/cleanlab/cleanlab.git
[2]:
import re
import string
import pandas as pd
from sklearn.metrics import accuracy_score, log_loss
from sklearn.model_selection import cross_val_predict
import tensorflow as tf
from tensorflow.keras import layers
import tensorflow_datasets as tfds

from cleanlab.classification import CleanLearning
from cleanlab.models.keras import KerasWrapperSequential

SEED = 123456  # for reproducibility

2. Load and preprocess the IMDb text dataset#

This dataset is provided in TensorFlow Datasets (tfds).

[4]:
%%capture
raw_train_ds = tfds.load(name="imdb_reviews", split="train", batch_size=-1, as_supervised=True)
raw_test_ds = tfds.load(name="imdb_reviews", split="test", batch_size=-1, as_supervised=True)

raw_train_texts, train_labels = tfds.as_numpy(raw_train_ds)
raw_test_texts, test_labels = tfds.as_numpy(raw_test_ds)
[5]:
num_classes = len(set(train_labels))
print(f"Classes: {set(train_labels)}")
Classes: {0, 1}

Let’s print the first example in the train set.

[6]:
i = 0
print(f"Example Label: {train_labels[i]}")
print(f"Example Text: {raw_train_texts[i]}")
Example Label: 0
Example Text: b"This was an absolutely terrible movie. Don't be lured in by Christopher Walken or Michael Ironside. Both are great actors, but this must simply be their worst role in history. Even their great acting could not redeem this movie's ridiculous storyline. This movie is an early nineties US propaganda piece. The most pathetic scenes were those when the Columbian rebels were making their cases for revolutions. Maria Conchita Alonso appeared phony, and her pseudo-love affair with Walken was nothing but a pathetic emotional plug in a movie that was devoid of any real meaning. I am disappointed that there are movies like this, ruining actor's like Christopher Walken's good name. I could barely sit through it."

The data is stored as two numpy arrays for each of the train and test sets:

  1. raw_train_texts and raw_test_texts for the movie reviews in text format,

  2. train_labels and test_labels for the labels.

Bringing Your Own Data (BYOD)?

You can easily replace the above with your own text dataset, and continue with the rest of the tutorial.

Your classes (and entries of train_labels / test_labels) should be represented as integer indices 0, 1, …, num_classes - 1. For example, if your dataset has 7 examples from 3 classes, train_labels might be: np.array([2,0,0,1,2,0,1])
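If your labels are instead stored as strings or other class names, here is a minimal sketch (using a hypothetical your_labels array) of one way to map them to such integer indices with numpy:

import numpy as np

your_labels = np.array(["neg", "pos", "pos", "neg"])  # hypothetical raw class names
classes, formatted_labels = np.unique(your_labels, return_inverse=True)
# classes -> array(['neg', 'pos']); formatted_labels -> array([0, 1, 1, 0])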

Next, we have to convert the text strings into vectors which are better suited as inputs for neural networks.

The first step is to define a function to preprocess the text data by:

  1. Converting it to lower case

  2. Removing the HTML break tags: <br />

  3. Removing any punctuation marks

[7]:
def preprocess_text(input_data):
    lowercase = tf.strings.lower(input_data)
    stripped_html = tf.strings.regex_replace(lowercase, "<br />", " ")
    return tf.strings.regex_replace(stripped_html, f"[{re.escape(string.punctuation)}]", "")
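As an illustrative sanity check (the sample string below is our own, not from the dataset), this function lowercases the text, replaces <br /> tags with spaces, and strips punctuation:

sample = tf.constant(["This was great!<br />Loved it."])
print(preprocess_text(sample).numpy())  # roughly: [b'this was great loved it']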

Then, we use a TextVectorization layer to preprocess, tokenize, and vectorize our text data into a format suitable for a neural network.

[8]:
max_features = 10000
sequence_length = 250

vectorize_layer = layers.TextVectorization(
    standardize=preprocess_text,
    max_tokens=max_features,
    output_mode="int",
    output_sequence_length=sequence_length,
)

Adapting vectorize_layer to the text data creates a mapping of each token (i.e. word) to an integer index. Note that we only adapt the vectorization on the train set, as is standard ML practice.

Subsequently, we can vectorize our text data in the train and test sets by using this mapping.

[9]:
vectorize_layer.reset_state()
vectorize_layer.adapt(raw_train_texts)

train_texts = vectorize_layer(raw_train_texts).numpy()
test_texts = vectorize_layer(raw_test_texts).numpy()

Our subsequent neural network models will directly operate on elements of train_texts and test_texts in order to classify reviews.
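As a quick illustrative check of what these arrays look like (the exact row counts assume the standard 25,000-review IMDB train and test splits; the 250 columns come from sequence_length above):

print(train_texts.shape)  # e.g. (25000, 250): one row of 250 integer token ids per review
print(test_texts.shape)   # e.g. (25000, 250)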

3. Define a classification model and use cleanlab to find potential label errors#

Here, we build a simple neural network for classification with TensorFlow and Keras. We will also wrap it with cleanlab’s KerasWrapperSequential to make it compatible with sklearn (and hence CleanLearning). Note: you can wrap any existing Keras model this way, by just replacing keras.Sequential with KerasWrapperSequential in your code.

[10]:
def get_nn_model():
    # simply replace `keras.Sequential(` with cleanlab's class in this line to make any keras model sklearn-compatible
    # the rest of your existing keras code does not need to change at all
    model = KerasWrapperSequential(
        [
            tf.keras.Input(shape=(None,), dtype="int64"),
            layers.Embedding(max_features + 1, 16),
            layers.Dropout(0.2),
            layers.GlobalAveragePooling1D(),
            layers.Dropout(0.2),
            layers.Dense(num_classes),
            layers.Softmax()
        ],  # outputs probability that text belongs to class 1
        compile_kwargs={
            "optimizer": "adam",
            "loss": tf.keras.losses.SparseCategoricalCrossentropy(),
            "metrics": tf.keras.metrics.CategoricalAccuracy(),
        },
    )

    return model

We can define the CleanLearning object with the neural network model and use find_label_issues to identify potential label errors.

CleanLearning is a wrapper class that can easily be applied to any scikit-learn compatible model; it can be used to find potential label issues and to train a more robust model when the original data contains noisy labels.

[11]:
cv_n_folds = 3  # for efficiency; values like 5 or 10 will generally work better
num_epochs = 15
[12]:
model = get_nn_model()
cl = CleanLearning(model, cv_n_folds=cv_n_folds)
[13]:
label_issues = cl.find_label_issues(X=train_texts, labels=train_labels, clf_kwargs={"epochs": num_epochs})
Epoch 1/15
521/521 [==============================] - 4s 5ms/step - loss: 0.6615 - categorical_accuracy: 0.5355
Epoch 2/15
521/521 [==============================] - 3s 5ms/step - loss: 0.5247 - categorical_accuracy: 0.4837
Epoch 3/15
521/521 [==============================] - 3s 5ms/step - loss: 0.4121 - categorical_accuracy: 0.4867
Epoch 4/15
521/521 [==============================] - 3s 5ms/step - loss: 0.3480 - categorical_accuracy: 0.4915
Epoch 5/15
521/521 [==============================] - 3s 5ms/step - loss: 0.3080 - categorical_accuracy: 0.4928
Epoch 6/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2767 - categorical_accuracy: 0.4940
Epoch 7/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2528 - categorical_accuracy: 0.4941
Epoch 8/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2339 - categorical_accuracy: 0.4936
Epoch 9/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2169 - categorical_accuracy: 0.4948
Epoch 10/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2014 - categorical_accuracy: 0.4944
Epoch 11/15
521/521 [==============================] - 3s 6ms/step - loss: 0.1879 - categorical_accuracy: 0.4967
Epoch 12/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1751 - categorical_accuracy: 0.4963
Epoch 13/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1647 - categorical_accuracy: 0.4961
Epoch 14/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1537 - categorical_accuracy: 0.4965
Epoch 15/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1440 - categorical_accuracy: 0.4973
261/261 [==============================] - 1s 2ms/step
Epoch 1/15
521/521 [==============================] - 3s 5ms/step - loss: 0.6578 - categorical_accuracy: 0.5150
Epoch 2/15
521/521 [==============================] - 3s 5ms/step - loss: 0.5194 - categorical_accuracy: 0.4768
Epoch 3/15
521/521 [==============================] - 3s 5ms/step - loss: 0.4102 - categorical_accuracy: 0.4920
Epoch 4/15
521/521 [==============================] - 3s 5ms/step - loss: 0.3457 - categorical_accuracy: 0.4948
Epoch 5/15
521/521 [==============================] - 3s 5ms/step - loss: 0.3059 - categorical_accuracy: 0.4942
Epoch 6/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2758 - categorical_accuracy: 0.4933
Epoch 7/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2530 - categorical_accuracy: 0.4958
Epoch 8/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2338 - categorical_accuracy: 0.4955
Epoch 9/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2167 - categorical_accuracy: 0.4953
Epoch 10/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2024 - categorical_accuracy: 0.4958
Epoch 11/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1897 - categorical_accuracy: 0.4960
Epoch 12/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1783 - categorical_accuracy: 0.4949
Epoch 13/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1659 - categorical_accuracy: 0.4970
Epoch 14/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1568 - categorical_accuracy: 0.4962
Epoch 15/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1468 - categorical_accuracy: 0.4980
261/261 [==============================] - 0s 2ms/step
Epoch 1/15
521/521 [==============================] - 3s 5ms/step - loss: 0.6590 - categorical_accuracy: 0.5271
Epoch 2/15
521/521 [==============================] - 3s 5ms/step - loss: 0.5226 - categorical_accuracy: 0.4923
Epoch 3/15
521/521 [==============================] - 3s 5ms/step - loss: 0.4104 - categorical_accuracy: 0.4893
Epoch 4/15
521/521 [==============================] - 3s 5ms/step - loss: 0.3463 - categorical_accuracy: 0.4903
Epoch 5/15
521/521 [==============================] - 3s 5ms/step - loss: 0.3052 - categorical_accuracy: 0.4946
Epoch 6/15
521/521 [==============================] - 3s 6ms/step - loss: 0.2746 - categorical_accuracy: 0.4962
Epoch 7/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2512 - categorical_accuracy: 0.4963
Epoch 8/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2325 - categorical_accuracy: 0.4974
Epoch 9/15
521/521 [==============================] - 3s 6ms/step - loss: 0.2151 - categorical_accuracy: 0.4974
Epoch 10/15
521/521 [==============================] - 3s 5ms/step - loss: 0.2000 - categorical_accuracy: 0.4972
Epoch 11/15
521/521 [==============================] - 3s 6ms/step - loss: 0.1875 - categorical_accuracy: 0.4967
Epoch 12/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1746 - categorical_accuracy: 0.4988
Epoch 13/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1633 - categorical_accuracy: 0.4977
Epoch 14/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1536 - categorical_accuracy: 0.4978
Epoch 15/15
521/521 [==============================] - 3s 5ms/step - loss: 0.1442 - categorical_accuracy: 0.4984
261/261 [==============================] - 0s 2ms/step

The find_label_issues method above performs cross-validation to compute out-of-sample predicted probabilities for each example, which are then used to identify label issues.

This method returns a dataframe containing a label quality score for each example. These numeric scores lie between 0 and 1, where lower scores indicate examples more likely to be mislabeled. The dataframe also contains a boolean column specifying whether or not each example is identified to have a label issue (indicating it is likely mislabeled).

[14]:
label_issues.head()
[14]:
is_label_issue label_quality given_label predicted_label
0 False 0.730809 0 0
1 False 0.717022 0 0
2 True 0.284340 0 1
3 False 0.727985 1 1
4 False 0.528301 1 1

We can get the subset of examples flagged with label issues, and also sort by label quality score to find the indices of the 10 most likely mislabeled examples in our dataset.

[15]:
identified_issues = label_issues[label_issues["is_label_issue"] == True]
lowest_quality_labels = label_issues["label_quality"].argsort()[:10].to_numpy()
[16]:
print(
    f"cleanlab found {len(identified_issues)} potential label errors in the dataset.\n"
    f"Here are indices of the top 10 most likely errors: \n {lowest_quality_labels}"
)
cleanlab found 1504 potential label errors in the dataset.
Here are indices of the top 10 most likely errors:
 [22294  5204 15079 21889 10676 11186 15174 10589 18928 21492]

Let’s review some of the most likely label errors:

To help us inspect these data points, we define a method to print any example from the dataset. We then display some of the top-ranked label issues identified by cleanlab:

[17]:
def print_as_df(index):
    return pd.DataFrame(
        {"texts": raw_train_texts[index], "labels": train_labels[index]},
        [index]
    )

Here’s a review labeled as positive (1), but it should be negative (0). Some noteworthy snippets extracted from the review text:

  • “…incredibly awful score…”

  • “…worst Foley work ever done.”

  • “…script is incomprehensible…”

  • “…editing is just bizarre.”

  • “…atrocious pan and scan…”

  • “…incoherent mess…”

  • “…amateur directing there.”

[18]:
print_as_df(22294)
[18]:
texts labels
22294 b'This movie is stuffed full of stock Horror movie goodies: chained lunatics, pre-meditated murder, a mad (vaguely lesbian) female scientist with an even madder father who wears a mask because of his horrible disfigurement, poisoning, spooky castles, werewolves (male and female), adultery, slain lovers, Tibetan mystics, the half-man/half-plant victim of some unnamed experiment, grave robbing, mind control, walled up bodies, a car crash on a lonely road, electrocution, knights in armour - the lot, all topped off with an incredibly awful score and some of the worst Foley work ever done.<br /><br />The script is incomprehensible (even by badly dubbed Spanish Horror movie standards) and some of the editing is just bizarre. In one scene where the lead female evil scientist goes to visit our heroine in her bedroom for one of the badly dubbed: "That is fantastical. I do not understand. Explain to me again how this is..." exposition scenes that litter this movie, there is a sudden hand held cutaway of the girl\'s thighs as she gets out of bed for no apparent reason at all other than to cover a cut in the bad scientist\'s "Mwahaha! All your werewolfs belong mine!" speech. Though why they went to the bother I don\'t know because there are plenty of other jarring jump cuts all over the place - even allowing for the atrocious pan and scan of the print I saw.<br /><br />The Director was, according to one interview with the star, drunk for most of the shoot and the film looks like it. It is an incoherent mess. It\'s made even more incoherent by the inclusion of werewolf rampage footage from a different film The Mark of the Wolf Man (made 4 years earlier, featuring the same actor but playing the part with more aggression and with a different shirt and make up - IS there a word in Spanish for "Continuity"?) and more padding of another actor in the wolfman get-up ambling about in long shot.<br /><br />The music is incredibly bad varying almost at random from full orchestral creepy house music, to bosannova, to the longest piano and gong duet ever recorded. (Thinking about it, it might not have been a duet. It might have been a solo. The piano part was so simple it could have been picked out with one hand while the player whacked away at the gong with the other.) <br /><br />This is one of the most bewilderedly trance-state inducing bad movies of the year so far for me. Enjoy.<br /><br />Favourite line: "Ilona! This madness and perversity will turn against you!" How true.<br /><br />Favourite shot: The lover, discovering his girlfriend slain, dropping the candle in a cartoon-like demonstration of surprise. Rank amateur directing there.' 1

Here’s a review labeled as positive (1), but it should be negative (0). Some noteworthy snippets extracted from the review text:

  • “…film seems cheap.”

  • “…unbelievably bad…”

  • “…cinematography is badly lit…”

  • “…everything looking grainy and ugly.”

  • “…sound is so terrible…”

[19]:
print_as_df(5204)
[19]:
texts labels
5204 b'This low-budget erotic thriller that has some good points, but a lot more bad one. The plot revolves around a female lawyer trying to clear her lover who is accused of murdering his wife. Being a soft-core film, that entails her going undercover at a strip club and having sex with possible suspects. As plots go for this type of genre, not to bad. The script is okay, and the story makes enough sense for someone up at 2 AM watching this not to notice too many plot holes. But everything else in the film seems cheap. The lead actors aren\'t that bad, but pretty much all the supporting ones are unbelievably bad (one girl seems like she is drunk and/or high). The cinematography is badly lit, with everything looking grainy and ugly. The sound is so terrible that you can barely hear what people are saying. The worst thing in this movie is the reason you\'re watching it-the sex. The reason people watch these things is for hot sex scenes featuring really hot girls in Red Shoe Diary situations. The sex scenes aren\'t hot they\'re sleazy, shot in that porno style where everything is just a master shot of two people going at it. The woman also look like they are refuges from a porn shoot. I\'m not trying to be rude or mean here, but they all have that breast implants and a burned out/weathered look. Even the title, "Deviant Obsession", sounds like a Hardcore flick. Not that I don\'t have anything against porn - in fact I love it. But I want my soft-core and my hard-core separate. What ever happened to actresses like Shannon Tweed, Jacqueline Lovell, Shannon Whirry and Kim Dawson? Women that could act and who would totally arouse you? And what happened to B erotic thrillers like Body Chemistry, Nighteyes and even Stripped to Kill. Sure, none of these where masterpieces, but at least they felt like movies. Plus, they were pushing the envelope, going beyond Hollywood\'s relatively prude stance on sex, sexual obsessions and perversions. Now they just make hard-core films without the hard-core sex.' 1

Here’s a review labeled as positive (1), but it should be negative (0). Some noteworthy snippets extracted from the review text:

  • “…hard to imagine a boring shark movie…”

  • “Poor focus in some scenes made the production seems amateurish.”

  • “…do nothing to take advantage of…”

  • “…far too few scenes of any depth or variety.”

  • “…just look flat…no contrast of depth…”

  • “…introspective and dull…constant disappointment.”

[20]:
print_as_df(15079)
[20]:
texts labels
15079 b'Like the gentle giants that make up the latter half of this film\'s title, Michael Oblowitz\'s latest production has grace, but it\'s also slow and ponderous. The producer\'s last outing, "Mosquitoman-3D" had the same problem. It\'s hard to imagine a boring shark movie, but they somehow managed it. The only draw for Hammerhead: Shark Frenzy was it\'s passable animatronix, which is always fun when dealing with wondrous worlds beneath the ocean\'s surface. But even that was only passable. Poor focus in some scenes made the production seems amateurish. With Dolphins and Whales, the technology is all but wasted. Cloudy scenes and too many close-ups of the film\'s giant subjects do nothing to take advantage of IMAX\'s stunning 3D capabilities. There are far too few scenes of any depth or variety. Close-ups of these awesome creatures just look flat and there is often only one creature in the cameras field, so there is no contrast of depth. Michael Oblowitz is trying to follow in his father\'s footsteps, but when you\'ve got Shark-Week on cable, his introspective and dull treatment of his subjects is a constant disappointment.' 1

cleanlab has shortlisted the most likely label errors to speed up your data cleaning process. With this list, you can decide whether to fix these label issues or remove ambiguous examples from the dataset.
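For instance, here is a minimal sketch of how you could manually drop the flagged examples to form a cleaned training set yourself (the CleanLearning workflow in the next section does this for you automatically):

issue_mask = label_issues["is_label_issue"].to_numpy()  # boolean mask of flagged examples
cleaned_texts = train_texts[~issue_mask]
cleaned_labels = train_labels[~issue_mask]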

4. Train a more robust model from noisy labels#

Fixing the label issues manually may be time-consuming, but cleanlab can filter these noisy examples and train a model on the remaining clean data for you automatically.

To establish a baseline, let’s first train and evaluate our original neural network model.

[21]:
baseline_model = get_nn_model()  # note we first re-instantiate the model
baseline_model.fit(X=train_texts, y=train_labels, epochs=num_epochs)
Epoch 1/15
782/782 [==============================] - 5s 6ms/step - loss: 0.6265 - categorical_accuracy: 0.4423
Epoch 2/15
782/782 [==============================] - 4s 6ms/step - loss: 0.4452 - categorical_accuracy: 0.4865
Epoch 3/15
782/782 [==============================] - 4s 5ms/step - loss: 0.3485 - categorical_accuracy: 0.4920
Epoch 4/15
782/782 [==============================] - 4s 5ms/step - loss: 0.3000 - categorical_accuracy: 0.4941
Epoch 5/15
782/782 [==============================] - 4s 5ms/step - loss: 0.2680 - categorical_accuracy: 0.4940
Epoch 6/15
782/782 [==============================] - 4s 5ms/step - loss: 0.2449 - categorical_accuracy: 0.4959
Epoch 7/15
782/782 [==============================] - 4s 5ms/step - loss: 0.2262 - categorical_accuracy: 0.4936
Epoch 8/15
782/782 [==============================] - 4s 5ms/step - loss: 0.2096 - categorical_accuracy: 0.4964
Epoch 9/15
782/782 [==============================] - 4s 5ms/step - loss: 0.1974 - categorical_accuracy: 0.4956
Epoch 10/15
782/782 [==============================] - 4s 5ms/step - loss: 0.1839 - categorical_accuracy: 0.4973
Epoch 11/15
782/782 [==============================] - 4s 5ms/step - loss: 0.1733 - categorical_accuracy: 0.4962
Epoch 12/15
782/782 [==============================] - 4s 5ms/step - loss: 0.1635 - categorical_accuracy: 0.4979
Epoch 13/15
782/782 [==============================] - 4s 5ms/step - loss: 0.1546 - categorical_accuracy: 0.4969
Epoch 14/15
782/782 [==============================] - 4s 5ms/step - loss: 0.1468 - categorical_accuracy: 0.4978
Epoch 15/15
782/782 [==============================] - 4s 5ms/step - loss: 0.1393 - categorical_accuracy: 0.4982
[22]:
preds = baseline_model.predict(test_texts)
acc_og = accuracy_score(test_labels, preds)
print(f"\n Test accuracy of original neural net: {acc_og}")
782/782 [==============================] - 1s 2ms/step

 Test accuracy of original neural net: 0.86436

Now that we have a baseline, let’s check if using CleanLearning improves our test accuracy.

CleanLearning provides a wrapper that can be applied to any scikit-learn compatible model. The resulting model object can be used in the same manner, but it will now train more robustly if the data has noisy labels.

We can use the same CleanLearning object defined above and pass the label issues we already computed into .fit() via the label_issues argument. This accelerates things: if we did not provide the label issues, they would be recomputed via cross-validation. After that, CleanLearning simply deletes the examples with label issues and retrains your model on the remaining data.

[23]:
cl.fit(X=train_texts, labels=train_labels, label_issues=cl.get_label_issues(), clf_kwargs={"epochs": num_epochs})
Epoch 1/15
735/735 [==============================] - 4s 5ms/step - loss: 0.6152 - categorical_accuracy: 0.4491
Epoch 2/15
735/735 [==============================] - 4s 5ms/step - loss: 0.3965 - categorical_accuracy: 0.4859
Epoch 3/15
735/735 [==============================] - 4s 5ms/step - loss: 0.2758 - categorical_accuracy: 0.4925
Epoch 4/15
735/735 [==============================] - 4s 5ms/step - loss: 0.2128 - categorical_accuracy: 0.4931
Epoch 5/15
735/735 [==============================] - 4s 5ms/step - loss: 0.1716 - categorical_accuracy: 0.4947
Epoch 6/15
735/735 [==============================] - 4s 5ms/step - loss: 0.1426 - categorical_accuracy: 0.4952
Epoch 7/15
735/735 [==============================] - 4s 5ms/step - loss: 0.1202 - categorical_accuracy: 0.4961
Epoch 8/15
735/735 [==============================] - 4s 5ms/step - loss: 0.1025 - categorical_accuracy: 0.4963
Epoch 9/15
735/735 [==============================] - 4s 5ms/step - loss: 0.0877 - categorical_accuracy: 0.4970
Epoch 10/15
735/735 [==============================] - 4s 5ms/step - loss: 0.0754 - categorical_accuracy: 0.4971
Epoch 11/15
735/735 [==============================] - 4s 5ms/step - loss: 0.0651 - categorical_accuracy: 0.4969
Epoch 12/15
735/735 [==============================] - 4s 5ms/step - loss: 0.0567 - categorical_accuracy: 0.4980
Epoch 13/15
735/735 [==============================] - 4s 5ms/step - loss: 0.0500 - categorical_accuracy: 0.4974
Epoch 14/15
735/735 [==============================] - 4s 5ms/step - loss: 0.0449 - categorical_accuracy: 0.4979
Epoch 15/15
735/735 [==============================] - 4s 5ms/step - loss: 0.0382 - categorical_accuracy: 0.4990
[23]:
CleanLearning(clf=<cleanlab.models.keras.KerasWrapperSequential object at 0x7f459ade7670>,
              cv_n_folds=3,
              find_label_issues_kwargs={'min_examples_per_class': 10})
[24]:
pred_labels = cl.predict(test_texts)
acc_cl = accuracy_score(test_labels, pred_labels)
print(f"Test accuracy of cleanlab's neural net: {acc_cl}")
782/782 [==============================] - 1s 2ms/step
Test accuracy of cleanlab's neural net: 0.87296

We can see that the test set accuracy slightly improved as a result of the data cleaning. Note that this will not always be the case, especially when we are evaluating on test data that are themselves noisy. The best practice is to run cleanlab to identify potential label issues and then manually review them, before blindly trusting any accuracy metrics. In particular, the most effort should be made to ensure high-quality test data, since it is what reflects the expected performance of our model in deployment.