Image Classification with PyTorch and Datalab#
This 5-minute quickstart tutorial demonstrates how to find potential label errors in image classification data. Here we use the MNIST dataset containing 70,000 images of handwritten digits from 0 to 9.
Overview of what we’ll do in this tutorial:

- Build a simple PyTorch neural net and wrap it with skorch to make it scikit-learn compatible.
- Use this model to compute out-of-sample predicted probabilities, pred_probs, via cross-validation.
- Use these predictions to estimate which images in the dataset are mislabeled via cleanlab’s Datalab class.
Quickstart

Already have a model? Run cross-validation to get out-of-sample pred_probs, and then run the code below to find any potential label errors in your dataset.
from cleanlab import Datalab
lab = Datalab(data=your_dataset, label_name="column_name_of_labels")
lab.find_issues(pred_probs=your_pred_probs, issue_types={"label":{}})
lab.get_issues("label")
1. Install and import required dependencies#
You can use pip to install all packages required for this tutorial as follows:
!pip install matplotlib torch torchvision skorch datasets
!pip install "cleanlab[datalab]"
# Make sure to install the version corresponding to this tutorial
# E.g. if viewing master branch documentation:
# !pip install git+https://github.com/cleanlab/cleanlab.git
[2]:
import torch
from torch import nn
from sklearn.datasets import fetch_openml
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score
from skorch import NeuralNetClassifier
2. Fetch and scale the MNIST dataset#
[4]:
mnist = fetch_openml("mnist_784") # Fetch the MNIST dataset
X = mnist.data.astype("float32").to_numpy() # 2D array (images are flattened into 1D)
X /= 255.0 # Scale the features to the [0, 1] range
X = X.reshape(len(X), 1, 28, 28) # reshape into [N, C, H, W] for PyTorch
labels = mnist.target.astype("int64").to_numpy() # 1D array of given labels
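As an optional sanity check, you can confirm that the arrays now have the shape and dtype the rest of the tutorial expects:

# Optional sanity check on the prepared arrays
print(X.shape, X.dtype)            # expected: (70000, 1, 28, 28) float32
print(labels.shape, labels.dtype)  # expected: (70000,) int64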
Bringing Your Own Data (BYOD)?

Assign your data’s features to variable X and its labels to variable labels instead, and continue with the rest of the tutorial.
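For example, if your own images are stored as files on disk, a rough sketch like the one below could construct X and labels in the format this tutorial expects. The folder layout, file names, and use of Pillow here are illustrative assumptions, not part of the original tutorial:

# Rough sketch for loading your own grayscale images into the format used in this tutorial.
# Assumes (hypothetically) a layout like my_images/<class_label>/<image>.png -- adjust to your data.
import os
import numpy as np
from PIL import Image  # assumes Pillow is installed

image_paths, labels_list = [], []
for class_dir in sorted(os.listdir("my_images")):  # each subfolder name is the class label
    for fname in os.listdir(os.path.join("my_images", class_dir)):
        image_paths.append(os.path.join("my_images", class_dir, fname))
        labels_list.append(int(class_dir))

# Load images as 28x28 grayscale, scale pixels to [0, 1], and stack into [N, C, H, W]
X = np.stack([
    np.asarray(Image.open(p).convert("L").resize((28, 28)), dtype="float32") / 255.0
    for p in image_paths
]).reshape(-1, 1, 28, 28)
labels = np.array(labels_list, dtype="int64")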
3. Define a classification model#
Here, we define a simple neural network with PyTorch.
[5]:
class ClassifierModule(nn.Module):
    def __init__(self):
        super().__init__()

        self.cnn = nn.Sequential(
            nn.Conv2d(1, 6, 3),
            nn.ReLU(),
            nn.BatchNorm2d(6),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.Conv2d(6, 16, 3),
            nn.ReLU(),
            nn.BatchNorm2d(16),
            nn.MaxPool2d(kernel_size=2, stride=2),
        )
        self.out = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128),
            nn.ReLU(),
            nn.Linear(128, 10),
            nn.Softmax(dim=-1),
        )

    def forward(self, X):
        X = self.cnn(X)
        X = self.out(X)
        return X
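To double-check that the architecture is wired up correctly, you can optionally run a dummy batch through the untrained network and confirm it outputs one probability per digit class:

# Optional: pass a dummy batch through the untrained network to check the output shape
model = ClassifierModule()
with torch.no_grad():
    dummy_out = model(torch.zeros(2, 1, 28, 28))  # a fake batch of two 28x28 grayscale images
print(dummy_out.shape)  # expected: torch.Size([2, 10])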
4. Ensure your classifier is scikit-learn compatible#
As some cleanlab features require scikit-learn compatibility, we adapt the above PyTorch neural net accordingly. skorch is a convenient package that helps with this. Alternatively, you can also easily wrap an arbitrary model to be scikit-learn compatible as demonstrated here.
[6]:
model_skorch = NeuralNetClassifier(ClassifierModule)
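By default, skorch trains this model with its own default settings (e.g. number of epochs and learning rate, on CPU). If you’d like to control these, you can pass standard NeuralNetClassifier arguments; the particular values below are illustrative assumptions rather than tuned choices:

# Illustrative only: explicitly setting some common skorch training options
model_skorch = NeuralNetClassifier(
    ClassifierModule,
    max_epochs=10,  # number of passes over the training data per fit
    lr=0.1,         # learning rate for skorch's default SGD optimizer
    device="cuda" if torch.cuda.is_available() else "cpu",  # use a GPU when one is available
)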
5. Compute out-of-sample predicted probabilities#
If we’d like cleanlab to identify potential label errors in the whole dataset and not just the training set, we can consider using the entire dataset when computing the out-of-sample predicted probabilities, pred_probs, via cross-validation.
[7]:
num_crossval_folds = 3 # for efficiency; values like 5 or 10 will generally work better
pred_probs = cross_val_predict(
model_skorch,
X,
labels,
cv=num_crossval_folds,
method="predict_proba",
)
epoch train_loss valid_acc valid_loss dur
------- ------------ ----------- ------------ ------
1 0.6908 0.9139 0.3099 3.3222
2 0.2112 0.9412 0.2002 3.2999
3 0.1521 0.9516 0.1574 3.2306
4 0.1240 0.9594 0.1332 3.2935
5 0.1066 0.9633 0.1178 3.1699
6 0.0948 0.9660 0.1072 3.2447
7 0.0860 0.9682 0.0994 3.1594
8 0.0792 0.9708 0.0934 3.1635
9 0.0737 0.9725 0.0886 3.2246
10 0.0690 0.9736 0.0847 3.3557
epoch train_loss valid_acc valid_loss dur
------- ------------ ----------- ------------ ------
1 0.7043 0.9247 0.2786 3.3219
2 0.1907 0.9465 0.1817 3.2504
3 0.1355 0.9556 0.1477 3.1654
4 0.1100 0.9616 0.1289 3.1749
5 0.0943 0.9648 0.1166 3.1338
6 0.0834 0.9684 0.1079 3.1347
7 0.0751 0.9702 0.1014 3.1150
8 0.0687 0.9713 0.0963 3.1314
9 0.0634 0.9724 0.0921 3.1532
10 0.0589 0.9732 0.0887 3.2025
epoch train_loss valid_acc valid_loss dur
------- ------------ ----------- ------------ ------
1 0.7931 0.9112 0.3372 3.2214
2 0.2282 0.9486 0.1948 3.2156
3 0.1533 0.9592 0.1501 3.2241
4 0.1217 0.9641 0.1277 3.2478
5 0.1032 0.9678 0.1135 3.1945
6 0.0903 0.9701 0.1037 3.2051
7 0.0809 0.9729 0.0964 3.2327
8 0.0736 0.9747 0.0903 3.2051
9 0.0677 0.9761 0.0861 3.2540
10 0.0630 0.9766 0.0825 3.1857
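If your model cannot easily be made scikit-learn compatible, you can still obtain out-of-sample pred_probs with a manual cross-validation loop. The sketch below shows one way to do this, reusing the skorch model purely for illustration:

# Rough sketch of a manual cross-validation loop (an alternative to cross_val_predict)
import numpy as np
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=num_crossval_folds, shuffle=True, random_state=0)
manual_pred_probs = np.zeros((len(labels), 10), dtype="float32")
for train_idx, holdout_idx in skf.split(X, labels):
    fold_model = NeuralNetClassifier(ClassifierModule)  # train a fresh model on each fold
    fold_model.fit(X[train_idx], labels[train_idx])
    # Predictions for the held-out examples are out-of-sample, as cleanlab requires
    manual_pred_probs[holdout_idx] = fold_model.predict_proba(X[holdout_idx])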
An additional benefit of cross-validation is that it facilitates more reliable evaluation of our model than a single training/validation split.
[8]:
predicted_labels = pred_probs.argmax(axis=1)
acc = accuracy_score(labels, predicted_labels)
print(f"Cross-validated estimate of accuracy on held-out data: {acc}")
Cross-validated estimate of accuracy on held-out data: 0.9752428571428572
6. Use cleanlab to find label issues#
Based on the given labels and out-of-sample predicted probabilities, cleanlab can quickly help us identify label issues in our dataset.
Here, we use cleanlab’s Datalab to find potential label errors in our data. Datalab offers several ways of loading the data; in this case, we’ll simply wrap the training features and noisy labels in a dictionary. We then instantiate our Datalab object with this dictionary, pass in the model’s predicted probabilities obtained above, and specify that we want to look for label errors via the issue_types argument.
[9]:
from cleanlab import Datalab
data = {"X": X, "y": labels}
lab = Datalab(data, label_name="y")
lab.find_issues(pred_probs=pred_probs, issue_types={"label":{}})
Finding label issues ...
Audit complete. 143 issues found in the dataset.
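If you’d like an overall summary of what was found, Datalab can also print one via its report method:

# Optional: print a summary report of the issues Datalab has found so far
lab.report()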
After the audit is complete, we can view the results and information regarding the labels using Datalab’s get_issues method.
[10]:
issue_results = lab.get_issues("label")
issue_results.head()
[10]:
|   | is_label_issue | label_score | given_label | predicted_label |
|---|----------------|-------------|-------------|-----------------|
| 0 | False          | 0.828540    | 5           | 5               |
| 1 | False          | 0.999814    | 0           | 0               |
| 2 | False          | 0.992302    | 4           | 4               |
| 3 | False          | 0.997823    | 1           | 1               |
| 4 | False          | 0.990165    | 9           | 9               |
The dataframe above contains a label quality score for each example. These numeric scores lie between 0 and 1, where lower scores indicate examples that are more likely to be mislabeled. It also contains a boolean is_label_issue column specifying whether or not each example is identified as having a label issue (indicating it is likely mislabeled).
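For instance, to restrict the dataframe to just the examples flagged as likely mislabeled, you can filter on that boolean column:

# Keep only the rows flagged as likely label issues
flagged = issue_results[issue_results["is_label_issue"]]
print(f"{len(flagged)} examples flagged as likely mislabeled")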
We can sort the results obtained by label score to find the indices of the 15 most likely mislabeled examples in our dataset.
[11]:
ranked_label_issues = issue_results.sort_values("label_score").index
print(f"Top 15 most likely label errors: \n {ranked_label_issues.values[:15]}")
Top 15 most likely label errors:
[59915 24798 19124 53216 2720 59701 8200 50340 21601 32776 56014 2676
35480 46726 7010]
ranked_label_issues is a list of indices ranked by each example’s label score, where the top indices correspond to the examples most worth inspecting closely. To help visualize specific examples, we define a plot_examples function (you can skip these details).
See the implementation of plot_examples (click to expand)

# Note: This pulldown content is for docs.cleanlab.ai, if running on local Jupyter or Colab, please ignore it.
import matplotlib.pyplot as plt

def plot_examples(id_iter, nrows=1, ncols=1):
    for count, id in enumerate(id_iter):
        plt.subplot(nrows, ncols, count + 1)
        plt.imshow(X[id].reshape(28, 28), cmap="gray")
        plt.title(f"id: {id} \n label: {labels[id]}")  # show the given label for this example
        plt.axis("off")
    plt.tight_layout(h_pad=2.0)
Let’s look at the top 15 examples cleanlab thinks are most likely to be incorrectly labeled. We can see a few label errors and odd edge cases. Feel free to change the values below to display more/fewer examples.
[13]:
plot_examples(ranked_label_issues[range(15)], 3, 5)
Let’s zoom into some specific examples from the above set:
Given label is 4 but looks more like a 7:
[14]:
plot_examples([59915])
Given label is 4 but also looks like 9:
[15]:
plot_examples([24798])
A very odd looking 5:
[16]:
plot_examples([59701])
Given label is 3 but could be a 7:
[17]:
plot_examples([50340])
cleanlab has shortlisted the most likely label errors to speed up your data cleaning process. With this list, you can decide whether to fix label issues or prune some of these examples from the dataset.
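For example, if you decide to simply drop the flagged examples before retraining a model, one possible way to do so is sketched below (whether to relabel or prune is ultimately your call):

# One possible remedy: prune the flagged examples before retraining
keep_mask = ~issue_results["is_label_issue"].to_numpy()
X_clean, labels_clean = X[keep_mask], labels[keep_mask]
print(f"Kept {len(labels_clean)} of {len(labels)} examples after pruning flagged label issues")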
You can see that even widely-used datasets like MNIST contain problematic labels. Never blindly trust your data! You should always check it for potential issues, many of which can be easily identified by cleanlab.