Find Label Errors in Token Classification (Text) Datasets#
This 5-minute quickstart tutorial shows how you can use cleanlab to find potential label errors in text datasets for token classification. In token classification, our data consists of many sentences (aka documents) in which every token (aka word) is labeled with one of K classes, and we train models to predict the class of each token in a new sentence. Example applications in NLP include part-of-speech tagging and entity recognition, which is the focus of this tutorial. Here we use the CoNLL-2003 named entity recognition dataset, which contains around 20,000 sentences with 300,000 individual tokens. Each token is labeled with one of the following classes:
LOC (location entity)
PER (person entity)
ORG (organization entity)
MISC (miscellaneous other type of entity)
O (other type of word that does not correspond to an entity)
Overview of what we’ll do in this tutorial:
Find tokens with label issues using cleanlab.token_classification.filter.find_label_issues.
Rank sentences based on their overall label quality using cleanlab.token_classification.rank.get_label_quality_scores.
Quickstart
cleanlab uses three inputs to handle token classification data:
tokens: List whose i-th element is a list of strings/words corresponding to a tokenized version of the i-th sentence in the dataset. Example: [..., ["I", "love", "cleanlab"], ...]
labels: List whose i-th element is a list of integers corresponding to the class label of each token in the i-th sentence. Example: [..., [0, 0, 1], ...]
pred_probs: List whose i-th element is a np.ndarray of shape (N_i, K) corresponding to predicted class probabilities for each token in the i-th sentence (assuming this sentence contains N_i tokens and the dataset has K possible classes). These should be out-of-sample pred_probs obtained from a token classification model via cross-validation. Example: [..., np.array([[0.8, 0.2], [0.9, 0.1], [0.3, 0.7]]), ...]
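For concreteness, here is a minimal toy instance of these three inputs (one sentence and K = 2 hypothetical classes, mirroring the examples above):

import numpy as np

tokens = [["I", "love", "cleanlab"]]   # 1 sentence with 3 tokens
labels = [[0, 0, 1]]                   # one integer class label per token
pred_probs = [np.array([[0.8, 0.2],    # one (N_i, K) array per sentence;
                        [0.9, 0.1],    # each row sums to 1
                        [0.3, 0.7]])]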
Using these, you can find/display label issues with this code:
from cleanlab.token_classification.filter import find_label_issues
from cleanlab.token_classification.summary import display_issues
issues = find_label_issues(labels, pred_probs)
display_issues(issues, tokens, pred_probs=pred_probs, labels=labels,
               class_names=OPTIONAL_LIST_OF_ORDERED_CLASS_NAMES)
1. Install required dependencies and download data#
You can use pip to install all packages required for this tutorial as follows:
!pip install cleanlab
[1]:
!wget -nc https://data.deepai.org/conll2003.zip && mkdir data
!unzip conll2003.zip -d data/ && rm conll2003.zip
!wget -nc 'https://cleanlab-public.s3.amazonaws.com/TokenClassification/pred_probs.npz'
--2024-03-08 16:34:54-- https://data.deepai.org/conll2003.zip
Resolving data.deepai.org (data.deepai.org)... 143.244.49.178, 2400:52e0:1a01::899:1
Connecting to data.deepai.org (data.deepai.org)|143.244.49.178|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 982975 (960K) [application/zip]
Saving to: ‘conll2003.zip’
conll2003.zip 100%[===================>] 959.94K --.-KB/s in 0.05s
2024-03-08 16:34:55 (17.6 MB/s) - ‘conll2003.zip’ saved [982975/982975]
mkdir: cannot create directory ‘data’: File exists
Archive: conll2003.zip
inflating: data/metadata
inflating: data/test.txt
inflating: data/train.txt
inflating: data/valid.txt
--2024-03-08 16:34:55-- https://cleanlab-public.s3.amazonaws.com/TokenClassification/pred_probs.npz
Resolving cleanlab-public.s3.amazonaws.com (cleanlab-public.s3.amazonaws.com)... 3.5.29.253, 52.216.219.185, 52.217.191.25, ...
Connecting to cleanlab-public.s3.amazonaws.com (cleanlab-public.s3.amazonaws.com)|3.5.29.253|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 17045998 (16M) [binary/octet-stream]
Saving to: ‘pred_probs.npz’
pred_probs.npz 100%[===================>] 16.26M 5.63MB/s in 2.9s
2024-03-08 16:34:58 (5.63 MB/s) - ‘pred_probs.npz’ saved [17045998/17045998]
[3]:
import numpy as np
from cleanlab.token_classification.filter import find_label_issues
from cleanlab.token_classification.rank import get_label_quality_scores, issues_from_scores
from cleanlab.internal.token_classification_utils import get_sentence, filter_sentence, mapping
from cleanlab.token_classification.summary import display_issues, common_label_issues, filter_by_token
np.set_printoptions(suppress=True)
2. Get data, labels, and pred_probs#
In token classification tasks, each token in the dataset is labeled with one of K possible classes. To find label issues, cleanlab requires predicted class probabilities from a trained classifier. These pred_probs
contain a length-K vector for each token in the dataset (which sums to 1 for each token). Here we use pred_probs
which are out-of-sample predicted class probabilities for the full CoNLL-2003 dataset (merging training, development, and testing splits), obtained from a
BERT Transformer fit via cross-validation. Our example notebook “Training Entity Recognition Model for Token Classification” contains the code to produce such pred_probs
and save them in a .npz
file, which we simply load here via a read_npz
function (can skip these details).
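As an aside, if you want to see the general cross-validation pattern that produces such out-of-sample predictions, here is a toy sketch where a simple scikit-learn model on character n-grams stands in for the tutorial's BERT model (toy data and model, for illustration only; this is not the tutorial's actual training code):

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

# Toy data; a real pipeline would also split folds by sentence so tokens
# from the same sentence never leak across folds.
toy_tokens = [["I", "love", "cleanlab"], ["Berlin", "is", "nice"]]
toy_labels = [[0, 0, 1], [1, 0, 0]]  # K = 2 toy classes

flat_tokens = [tok for sent in toy_tokens for tok in sent]
flat_labels = [lab for sent in toy_labels for lab in sent]

X = CountVectorizer(analyzer='char', ngram_range=(1, 2)).fit_transform(flat_tokens)
flat_probs = cross_val_predict(LogisticRegression(), X, flat_labels,
                               cv=2, method='predict_proba')

# Regroup the flat (num_tokens, K) array into one (N_i, K) array per sentence:
toy_pred_probs, start = [], 0
for sent in toy_tokens:
    toy_pred_probs.append(flat_probs[start:start + len(sent)])
    start += len(sent)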
See the code for reading the .npz file (click to expand)
# Note: This pulldown content is for docs.cleanlab.ai, if running on local Jupyter or Colab, please ignore it.
def read_npz(filepath):
    data = dict(np.load(filepath))
    data = [data[str(i)] for i in range(len(data))]
    return data
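For reference, such a file could have been produced with np.savez using string keys '0', '1', ... (one array per sentence), which is exactly the layout read_npz inverts. A minimal sketch, where my_pred_probs is a hypothetical list of your own per-sentence arrays:

# Hypothetical file/variable names, shown only to illustrate the layout:
np.savez('my_pred_probs.npz', **{str(i): p for i, p in enumerate(my_pred_probs)})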
[5]:
pred_probs = read_npz('pred_probs.npz')
pred_probs is a list of numpy arrays, which we’ll describe later. Let’s first also load the dataset and its labels. We collect sentences from the original text files, defining:
tokens as a nested list where tokens[i] is a list of strings corresponding to a (word-level) tokenized version of the i-th sentence
given_labels as a nested list of the given labels in the dataset where given_labels[i] is a list of labels for each token in the i-th sentence.
This version of CoNLL-2003 uses IOB2 formatting for tagging, where B- and I- prefixes in the class labels indicate whether the tokens are at the start of an entity or in the middle. We ignore these distinctions in this tutorial (as label errors that confuse B- and I- are less interesting), and thus have two sets of entities:
given_entities = ['O', 'B-MISC', 'I-MISC', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
entities = ['O', 'MISC', 'PER', 'ORG', 'LOC']. These are our classes of interest for the token classification task.
We use some helper methods to load the CoNLL data (can skip these details).
See the code for reading the CoNLL data files (click to expand)
# Note: This pulldown content is for docs.cleanlab.ai, if running on local Jupyter or Colab, please ignore it.
given_entities = ['O', 'B-MISC', 'I-MISC', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']
entities = ['O', 'MISC', 'PER', 'ORG', 'LOC']
entity_map = {entity: i for i, entity in enumerate(given_entities)}
def readfile(filepath, sep=' '):
    lines = open(filepath)
    data, sentence, label = [], [], []
    for line in lines:
        if len(line) == 0 or line.startswith('-DOCSTART') or line[0] == '\n':
            if len(sentence) > 0:
                data.append((sentence, label))
                sentence, label = [], []
            continue
        splits = line.split(sep)
        word = splits[0]
        if len(word) > 0 and word[0].isalpha() and word.isupper():
            word = word[0] + word[1:].lower()
        sentence.append(word)
        label.append(entity_map[splits[-1][:-1]])

    if len(sentence) > 0:
        data.append((sentence, label))

    tokens = [d[0] for d in data]
    given_labels = [d[1] for d in data]
    return tokens, given_labels
[7]:
filepaths = ['data/train.txt', 'data/valid.txt', 'data/test.txt']
tokens, given_labels = [], []

for filepath in filepaths:
    words, label = readfile(filepath)
    tokens.extend(words)
    given_labels.extend(label)

sentences = list(map(get_sentence, tokens))

sentences, mask = filter_sentence(sentences)
tokens = [words for m, words in zip(mask, tokens) if m]
given_labels = [labels for m, labels in zip(mask, given_labels) if m]

maps = [0, 1, 1, 2, 2, 3, 3, 4, 4]
labels = [mapping(labels, maps) for labels in given_labels]
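The maps list collapses the 9 given classes into our 5 entities: maps[g] is the new class index for given label g, and mapping applies this lookup elementwise to one sentence's labels. A small illustration of its effect:

# B-PER (3) and I-PER (4) both collapse to PER (2); B-LOC (7) becomes LOC (4):
example_given = [0, 3, 4, 7]             # O, B-PER, I-PER, B-LOC
print([maps[g] for g in example_given])  # -> [0, 2, 2, 4]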
To find label issues in token classification data, cleanlab requires labels
and pred_probs
, which should look as follows:
[8]:
indices_to_preview = 3  # increase this to view more examples
for i in range(indices_to_preview):
    print('\nsentences[%d]:\t' % i + str(sentences[i]))
    print('labels[%d]:\t' % i + str(labels[i]))
    print('pred_probs[%d]:\n' % i + str(pred_probs[i]))
sentences[0]: Eu rejects German call to boycott British lamb.
labels[0]: [3, 0, 1, 0, 0, 0, 1, 0, 0]
pred_probs[0]:
[[0.00030412 0.00023826 0.99936208 0.00007009 0.00002545]
[0.99998795 0.00000401 0.00000218 0.00000455 0.00000131]
[0.00000749 0.99996115 0.00001371 0.0000087 0.00000895]
[0.99998936 0.00000382 0.00000178 0.00000366 0.00000137]
[0.99999101 0.00000266 0.00000174 0.0000035 0.00000109]
[0.99998768 0.00000482 0.00000202 0.00000438 0.0000011 ]
[0.00000465 0.99996392 0.00001105 0.0000116 0.00000878]
[0.99998671 0.00000364 0.00000213 0.00000472 0.00000281]
[0.99999073 0.00000211 0.00000159 0.00000442 0.00000115]]
sentences[1]: Peter Blackburn
labels[1]: [2, 2]
pred_probs[1]:
[[0.00000358 0.00000529 0.99995623 0.000022 0.0000129 ]
[0.0000024 0.00001812 0.99994141 0.00001645 0.00002162]]
sentences[2]: Brussels 1996-08-22
labels[2]: [4, 0]
pred_probs[2]:
[[0.00001172 0.00000821 0.00004661 0.0000618 0.99987167]
[0.99999061 0.00000201 0.00000195 0.00000408 0.00000135]]
Note that these correspond to the sentences in the dataset, where each sentence is treated as an individual training example (could be document instead of sentence). If using your own dataset, both pred_probs and labels should each be formatted as a nested list where:
pred_probs is a list whose i-th element is a np.ndarray of shape (N_i, K) corresponding to predicted class probabilities for each token in the i-th sentence (assuming this sentence contains N_i tokens and the dataset has K possible classes). Each row of one np.ndarray corresponds to a token t and contains the model’s predicted probability that t belongs to each of the K possible classes. The columns must be ordered such that these probabilities correspond to class 0, 1, …, K-1. These should be out-of-sample pred_probs obtained from a token classification model via cross-validation.
labels is a list whose i-th element is a list of integers corresponding to the class label of each token in the i-th sentence. For a dataset with K classes, labels must take values in 0, 1, …, K-1.
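If you are formatting your own dataset, a quick sanity check along these lines (a sketch, not part of the tutorial's original code) can catch shape and ordering mistakes early:

# Verify that labels and pred_probs are aligned and well-formed:
K = pred_probs[0].shape[1]
assert len(labels) == len(pred_probs)
for lab, probs in zip(labels, pred_probs):
    assert probs.shape == (len(lab), K)         # one row per token
    assert np.allclose(probs.sum(axis=1), 1.0)  # each row sums to 1
    assert all(0 <= l < K for l in lab)         # labels in 0, ..., K-1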
3. Use cleanlab to find label issues#
Based on the given labels and out-of-sample predicted probabilities, cleanlab can quickly help us identify label issues in our dataset. Here we request that the indices of the identified label issues be sorted by cleanlab’s self-confidence score, which measures the quality of each given label via the probability assigned to it in our model’s prediction. The returned issues are a list of tuples (i, j), where each tuple corresponds to the j-th token of the i-th sentence in the dataset. These are the tokens cleanlab thinks may be badly labeled in your dataset.
[9]:
issues = find_label_issues(labels, pred_probs)
Let’s look at the top 20 tokens that cleanlab thinks are most likely mislabeled.
[10]:
top = 20 # increase this value to view more identified issues
print('Cleanlab found %d potential label issues. ' % len(issues))
print('The top %d most likely label errors:' % top)
print(issues[:top])
Cleanlab found 2254 potential label issues.
The top 20 most likely label errors:
[(2907, 0), (19392, 0), (9962, 4), (8904, 30), (19303, 0), (12918, 0), (9256, 0), (11855, 20), (18392, 4), (20426, 28), (19402, 21), (14744, 15), (19371, 0), (4645, 2), (83, 9), (10331, 3), (9430, 10), (6143, 25), (18367, 0), (12914, 3)]
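Each tuple can be used to index directly into tokens and labels. For instance, to look up the token and given label behind the top-ranked issue:

# Inspect the most suspect (sentence, token) pair:
i, j = issues[0]
print(tokens[i][j], '- given label:', entities[labels[i][j]])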
We can better decide how to handle these issues by viewing the original sentences containing these tokens. Given that O
and MISC
classes (corresponding to integers 0 and 1 in our class ordering) can sometimes be ambiguous, they are excluded from our visualization below. This is achieved via the exclude
argument, a list of tuples (i, j)
such that tokens predicted as entities[j]
but labeled as entities[i]
are ignored.
[11]:
display_issues(issues, tokens, pred_probs=pred_probs, labels=labels,
               exclude=[(0, 1), (1, 0)], class_names=entities)
Sentence index: 2907, Token index: 0
Token: Little
Given label: PER, predicted label according to provided pred_probs: O
----
Little change from today's weather expected.
Sentence index: 19392, Token index: 0
Token: Let
Given label: LOC, predicted label according to provided pred_probs: O
----
Let's march together," Scalfaro, a northerner himself, said.
Sentence index: 9962, Token index: 4
Token: germany
Given label: LOC, predicted label according to provided pred_probs: O
----
3. Nastja Rysich (germany) 3.75
Sentence index: 8904, Token index: 30
Token: north
Given label: LOC, predicted label according to provided pred_probs: O
----
The Spla has fought Khartoum's government forces in the south since 1983 for greater autonomy or independence of the mainly Christian and animist region from the Moslem, Arabised north.
Sentence index: 12918, Token index: 0
Token: Mayor
Given label: PER, predicted label according to provided pred_probs: O
----
Mayor Antonio Gonzalez Garcia, of the opposition Revolutionary Workers' Party, said in Wednesday's letter that army troops recently raided several local farms, stole cattle and raped women.
Sentence index: 9256, Token index: 0
Token: Spring
Given label: LOC, predicted label according to provided pred_probs: O
----
Spring Chg Hrw 12pct Chg White Chg
Sentence index: 11855, Token index: 20
Token: Prince
Given label: PER, predicted label according to provided pred_probs: O
----
" We have seen the photos but for the moment the palace has no comment," a spokeswoman for Prince Rainier told Reuters.
Sentence index: 18392, Token index: 4
Token: /
Given label: O, predicted label according to provided pred_probs: LOC
----
Danila 28.5 16/12 Caribs/ up W224 Mobil.
Sentence index: 19402, Token index: 21
Token: Wednesday
Given label: ORG, predicted label according to provided pred_probs: O
----
A Reuter consensus survey sees medical equipment group Radiometer reporting largely unchanged earnings when it publishes first half 19996/97 results next Wednesday.
Sentence index: 83, Token index: 9
Token: Us
Given label: LOC, predicted label according to provided pred_probs: O
----
Listing London Denoms (K) 1-10-100 Sale Limits Us/ Uk/ Jp/ Fr
Sentence index: 10331, Token index: 3
Token: Maccabi
Given label: O, predicted label according to provided pred_probs: ORG
----
Hapoel Haifa 3 Maccabi Tel Aviv 1
Sentence index: 9430, Token index: 10
Token: hospital
Given label: LOC, predicted label according to provided pred_probs: O
----
The revered Roman Catholic nun was admitted to the Calcutta hospital a week ago with high fever and severe vomiting.
Sentence index: 6143, Token index: 25
Token: alliance
Given label: ORG, predicted label according to provided pred_probs: O
----
The embattled Afghan government said last week that the Kabul-Salang highway would be opened on Monday or Tuesday following talks with the Supreme Coordination Council alliance led by Jumbish-i-Milli movement of powerful opposition warlord General Abdul Rashid Dostum.
Sentence index: 18367, Token index: 0
Token: Can
Given label: LOC, predicted label according to provided pred_probs: O
----
Can/ U.s. Dollar Exchange Rate: 1.3570
Sentence index: 12049, Token index: 0
Token: Born
Given label: LOC, predicted label according to provided pred_probs: O
----
Born in 1937 in the central province of Anhui, Dai came to Shanghai as a student and remained in the city as a prolific author and teacher of Chinese.
Sentence index: 16764, Token index: 7
Token: (
Given label: PER, predicted label according to provided pred_probs: O
----
1990 - British historian Alan John Percivale (A.j.p.) Taylor died.
Sentence index: 20446, Token index: 0
Token: Pace
Given label: PER, predicted label according to provided pred_probs: O
----
Pace bowler Ian Harvey claimed three for 81 for Victoria.
Sentence index: 15514, Token index: 16
Token: Cotti
Given label: O, predicted label according to provided pred_probs: PER
----
But one must not forget that the Osce only has limited powers there," said Cotti, who is also the Swiss foreign minister."
Sentence index: 7525, Token index: 12
Token: Sultan
Given label: PER, predicted label according to provided pred_probs: O
----
Specter met Crown Prince Abdullah and Minister of Defence and Aviation Prince Sultan in Jeddah, Saudi state television and the official Saudi Press Agency reported.
Sentence index: 2288, Token index: 0
Token: Sporting
Given label: ORG, predicted label according to provided pred_probs: O
----
Sporting his customary bright green outfit, the U.s. champion clocked 10.03 seconds despite damp conditions to take the scalp of Canada's reigning Olympic champion Donovan Bailey, 1992 champion Linford Christie of Britain and American 1984 and 1988 champion Carl Lewis.
More than half of the potential label issues correspond to tokens that are incorrectly labeled. As shown above, some examples are ambiguous and may require more thoughtful handling. cleanlab has also discovered some edge cases, such as tokens that are merely punctuation, like / and (.
Most common word-level token mislabels#
We may also wish to understand which tokens tend to be most commonly mislabeled throughout the entire dataset:
[12]:
info = common_label_issues(issues, tokens,
                           labels=labels,
                           pred_probs=pred_probs,
                           class_names=entities,
                           exclude=[(0, 1), (1, 0)])
Token '/' is potentially mislabeled 42 times throughout the dataset
---------------------------------------------------------------------------------------
labeled as class `O` but predicted to actually be class `LOC` 36 times
labeled as class `O` but predicted to actually be class `PER` 4 times
labeled as class `O` but predicted to actually be class `ORG` 2 times
Token 'Chicago' is potentially mislabeled 27 times throughout the dataset
---------------------------------------------------------------------------------------
labeled as class `ORG` but predicted to actually be class `LOC` 22 times
labeled as class `LOC` but predicted to actually be class `ORG` 3 times
labeled as class `MISC` but predicted to actually be class `ORG` 2 times
Token 'U.s.' is potentially mislabeled 21 times throughout the dataset
---------------------------------------------------------------------------------------
labeled as class `LOC` but predicted to actually be class `ORG` 8 times
labeled as class `ORG` but predicted to actually be class `LOC` 6 times
labeled as class `LOC` but predicted to actually be class `O` 3 times
labeled as class `LOC` but predicted to actually be class `MISC` 2 times
labeled as class `MISC` but predicted to actually be class `LOC` 1 times
labeled as class `MISC` but predicted to actually be class `ORG` 1 times
Token 'Digest' is potentially mislabeled 20 times throughout the dataset
---------------------------------------------------------------------------------------
labeled as class `O` but predicted to actually be class `ORG` 20 times
Token 'Press' is potentially mislabeled 20 times throughout the dataset
---------------------------------------------------------------------------------------
labeled as class `O` but predicted to actually be class `ORG` 20 times
Token 'New' is potentially mislabeled 17 times throughout the dataset
---------------------------------------------------------------------------------------
labeled as class `ORG` but predicted to actually be class `LOC` 13 times
labeled as class `LOC` but predicted to actually be class `ORG` 2 times
labeled as class `O` but predicted to actually be class `ORG` 1 times
labeled as class `MISC` but predicted to actually be class `LOC` 1 times
Token 'and' is potentially mislabeled 16 times throughout the dataset
---------------------------------------------------------------------------------------
labeled as class `ORG` but predicted to actually be class `O` 7 times
labeled as class `O` but predicted to actually be class `ORG` 5 times
labeled as class `O` but predicted to actually be class `LOC` 3 times
labeled as class `MISC` but predicted to actually be class `ORG` 1 times
Token 'Philadelphia' is potentially mislabeled 15 times throughout the dataset
---------------------------------------------------------------------------------------
labeled as class `ORG` but predicted to actually be class `LOC` 14 times
labeled as class `LOC` but predicted to actually be class `ORG` 1 times
Token 'Usda' is potentially mislabeled 13 times throughout the dataset
---------------------------------------------------------------------------------------
labeled as class `ORG` but predicted to actually be class `LOC` 7 times
labeled as class `ORG` but predicted to actually be class `PER` 5 times
labeled as class `ORG` but predicted to actually be class `MISC` 1 times
Token 'York' is potentially mislabeled 12 times throughout the dataset
---------------------------------------------------------------------------------------
labeled as class `ORG` but predicted to actually be class `LOC` 11 times
labeled as class `LOC` but predicted to actually be class `ORG` 1 times
The printed information above is also stored in the pd.DataFrame info.
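Since info is a regular pandas DataFrame, you can also inspect or filter it programmatically (we don't assume its exact column names here; check info.columns):

print(info.head())  # first rows of the summary table printed above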
Find sentences containing a particular mislabeled word#
You can also focus only on the subset of potentially problematic sentences where a particular token may have been mislabeled.
[13]:
token_issues = filter_by_token('United', issues, tokens)
display_issues(token_issues, tokens, pred_probs=pred_probs, labels=labels,
               exclude=[(0, 1), (1, 0)], class_names=entities)
Sentence index: 471, Token index: 8
Token: United
Given label: LOC, predicted label according to provided pred_probs: ORG
----
Soccer - Keane Signs Four-year Contract With Manchester United.
Sentence index: 19072, Token index: 5
Token: United
Given label: LOC, predicted label according to provided pred_probs: ORG
----
The Humane Society of the United States estimates that between 500,000 and one million bites are delivered by dogs each year, more than half of which are suffered by children.
Sentence index: 19910, Token index: 5
Token: United
Given label: LOC, predicted label according to provided pred_probs: ORG
----
His father Clarence Woolmer represented United Province, now renamed Uttar Pradesh, in India's Ranji Trophy national championship and captained the state during 1949.
Sentence index: 15658, Token index: 0
Token: United
Given label: ORG, predicted label according to provided pred_probs: LOC
----
United Nations 1996-08-29
Sentence index: 19879, Token index: 1
Token: United
Given label: ORG, predicted label according to provided pred_probs: LOC
----
1. United States Iii (Brian Shimer, Randy Jones) one
Sentence index: 19104, Token index: 0
Token: United
Given label: ORG, predicted label according to provided pred_probs: LOC
----
United Nations 1996-12-06
Sentence label quality score#
When reviewing label issues in a token classification dataset, it is best to look at sentences one at a time, with the sentences most likely to contain a label error reviewed first. cleanlab can provide an overall label quality score for each sentence (ranging from 0 to 1) such that lower scores indicate sentences more likely to contain some mislabeled token. We can also obtain label quality scores for each individual token and manually decide which of these are label issues by thresholding them. For automatically estimating which tokens are mislabeled (and the number of label errors), you should use find_label_issues() instead. get_label_quality_scores() is useful if you only have time to review a few sentences and want to prioritize which ones, or if you’re specifically aiming to detect label errors with high precision (or high recall) rather than estimating the overall set of mislabeled tokens.
[14]:
sentence_scores, token_scores = get_label_quality_scores(labels, pred_probs)
issues = issues_from_scores(sentence_scores, token_scores=token_scores)
display_issues(issues, tokens, pred_probs=pred_probs, labels=labels,
               exclude=[(0, 1), (1, 0)], class_names=entities)
Sentence index: 2907, Token index: 0
Token: Little
Given label: PER, predicted label according to provided pred_probs: O
----
Little change from today's weather expected.
Sentence index: 19392, Token index: 0
Token: Let
Given label: LOC, predicted label according to provided pred_probs: O
----
Let's march together," Scalfaro, a northerner himself, said.
Sentence index: 9962, Token index: 4
Token: germany
Given label: LOC, predicted label according to provided pred_probs: O
----
3. Nastja Rysich (germany) 3.75
Sentence index: 8904, Token index: 30
Token: north
Given label: LOC, predicted label according to provided pred_probs: O
----
The Spla has fought Khartoum's government forces in the south since 1983 for greater autonomy or independence of the mainly Christian and animist region from the Moslem, Arabised north.
Sentence index: 12918, Token index: 0
Token: Mayor
Given label: PER, predicted label according to provided pred_probs: O
----
Mayor Antonio Gonzalez Garcia, of the opposition Revolutionary Workers' Party, said in Wednesday's letter that army troops recently raided several local farms, stole cattle and raped women.
Sentence index: 9256, Token index: 0
Token: Spring
Given label: LOC, predicted label according to provided pred_probs: O
----
Spring Chg Hrw 12pct Chg White Chg
Sentence index: 11855, Token index: 20
Token: Prince
Given label: PER, predicted label according to provided pred_probs: O
----
" We have seen the photos but for the moment the palace has no comment," a spokeswoman for Prince Rainier told Reuters.
Sentence index: 18392, Token index: 4
Token: /
Given label: O, predicted label according to provided pred_probs: LOC
----
Danila 28.5 16/12 Caribs/ up W224 Mobil.
Sentence index: 19402, Token index: 21
Token: Wednesday
Given label: ORG, predicted label according to provided pred_probs: O
----
A Reuter consensus survey sees medical equipment group Radiometer reporting largely unchanged earnings when it publishes first half 19996/97 results next Wednesday.
Sentence index: 83, Token index: 9
Token: Us
Given label: LOC, predicted label according to provided pred_probs: O
----
Listing London Denoms (K) 1-10-100 Sale Limits Us/ Uk/ Jp/ Fr
Sentence index: 10331, Token index: 3
Token: Maccabi
Given label: O, predicted label according to provided pred_probs: ORG
----
Hapoel Haifa 3 Maccabi Tel Aviv 1
Sentence index: 9430, Token index: 10
Token: hospital
Given label: LOC, predicted label according to provided pred_probs: O
----
The revered Roman Catholic nun was admitted to the Calcutta hospital a week ago with high fever and severe vomiting.
Sentence index: 6143, Token index: 25
Token: alliance
Given label: ORG, predicted label according to provided pred_probs: O
----
The embattled Afghan government said last week that the Kabul-Salang highway would be opened on Monday or Tuesday following talks with the Supreme Coordination Council alliance led by Jumbish-i-Milli movement of powerful opposition warlord General Abdul Rashid Dostum.
Sentence index: 18367, Token index: 0
Token: Can
Given label: LOC, predicted label according to provided pred_probs: O
----
Can/ U.s. Dollar Exchange Rate: 1.3570
Sentence index: 12049, Token index: 0
Token: Born
Given label: LOC, predicted label according to provided pred_probs: O
----
Born in 1937 in the central province of Anhui, Dai came to Shanghai as a student and remained in the city as a prolific author and teacher of Chinese.
Sentence index: 16764, Token index: 7
Token: (
Given label: PER, predicted label according to provided pred_probs: O
----
1990 - British historian Alan John Percivale (A.j.p.) Taylor died.
Sentence index: 20446, Token index: 0
Token: Pace
Given label: PER, predicted label according to provided pred_probs: O
----
Pace bowler Ian Harvey claimed three for 81 for Victoria.
Sentence index: 15514, Token index: 16
Token: Cotti
Given label: O, predicted label according to provided pred_probs: PER
----
But one must not forget that the Osce only has limited powers there," said Cotti, who is also the Swiss foreign minister."
Sentence index: 7525, Token index: 12
Token: Sultan
Given label: PER, predicted label according to provided pred_probs: O
----
Specter met Crown Prince Abdullah and Minister of Defence and Aviation Prince Sultan in Jeddah, Saudi state television and the official Saudi Press Agency reported.
Sentence index: 2288, Token index: 0
Token: Sporting
Given label: ORG, predicted label according to provided pred_probs: O
----
Sporting his customary bright green outfit, the U.s. champion clocked 10.03 seconds despite damp conditions to take the scalp of Canada's reigning Olympic champion Donovan Bailey, 1992 champion Linford Christie of Britain and American 1984 and 1988 champion Carl Lewis.
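Because sentence_scores is aligned with sentences, you can also rank the entire dataset for review. A brief sketch:

# Review the lowest-scoring (most suspect) sentences first:
ranking = np.argsort(sentence_scores)
for idx in ranking[:3]:
    print(round(float(sentence_scores[idx]), 4), sentences[idx])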
How does cleanlab.token_classification work?#
The underlying algorithms used to produce these scores are described in this paper.