cifar_cnn#
A PyTorch CNN that can be used for finding label issues in CIFAR-10 and for CleanLearning with co-teaching.
Code adapted from: https://github.com/bhanML/Co-teaching/blob/master/model.py
You must have PyTorch installed: https://pytorch.org/get-started/locally/
Classes:
CNN – A CNN architecture shown to be a good baseline for a CIFAR-10 benchmark.
- class cleanlab.experimental.cifar_cnn.CNN(input_channel=3, n_outputs=10, dropout_rate=0.25, top_bn=False)[source]#
Bases:
Module
A CNN architecture shown to be a good baseline for a CIFAR-10 benchmark.
- Parameters:
- input_channel (int) –
- n_outputs (int) –
- dropout_rate (float) –
- top_bn (bool) –
Attributes:
T_destination – alias of TypeVar('T_destination', bound=Mapping[str, Tensor])
dump_patches – This allows better BC support for load_state_dict().
Methods:
__call__(*input, **kwargs) – Call self as a function.
add_module(name, module) – Adds a child module to the current module.
apply(fn) – Applies fn recursively to every submodule (as returned by .children()) as well as self.
bfloat16() – Casts all floating point parameters and buffers to bfloat16 datatype.
buffers([recurse]) – Returns an iterator over module buffers.
children() – Returns an iterator over immediate children modules.
cpu() – Moves all model parameters and buffers to the CPU.
cuda([device]) – Moves all model parameters and buffers to the GPU.
double() – Casts all floating point parameters and buffers to double datatype.
eval() – Sets the module in evaluation mode.
extra_repr() – Set the extra representation of the module.
float() – Casts all floating point parameters and buffers to float datatype.
forward(x) – Defines the computation performed at every call.
get_buffer(target) – Returns the buffer given by target if it exists, otherwise throws an error.
get_extra_state() – Returns any extra state to include in the module's state_dict.
get_parameter(target) – Returns the parameter given by target if it exists, otherwise throws an error.
get_submodule(target) – Returns the submodule given by target if it exists, otherwise throws an error.
half() – Casts all floating point parameters and buffers to half datatype.
load_state_dict(state_dict[, strict]) – Copies parameters and buffers from state_dict into this module and its descendants.
modules() – Returns an iterator over all modules in the network.
named_buffers([prefix, recurse]) – Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
named_children() – Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
named_modules([memo, prefix, remove_duplicate]) – Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
named_parameters([prefix, recurse]) – Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
parameters([recurse]) – Returns an iterator over module parameters.
register_backward_hook(hook) – Registers a backward hook on the module.
register_buffer(name, tensor[, persistent]) – Adds a buffer to the module.
register_forward_hook(hook) – Registers a forward hook on the module.
register_forward_pre_hook(hook) – Registers a forward pre-hook on the module.
register_full_backward_hook(hook) – Registers a backward hook on the module.
register_module(name, module) – Alias for add_module().
register_parameter(name, param) – Adds a parameter to the module.
requires_grad_([requires_grad]) – Change if autograd should record operations on parameters in this module.
set_extra_state(state) – This function is called from load_state_dict() to handle any extra state found within the state_dict.
share_memory() – See torch.Tensor.share_memory_().
state_dict([destination, prefix, keep_vars]) – Returns a dictionary containing a whole state of the module.
to(*args, **kwargs) – Moves and/or casts the parameters and buffers.
to_empty(*, device) – Moves the parameters and buffers to the specified device without copying storage.
train([mode]) – Sets the module in training mode.
type(dst_type) – Casts all parameters and buffers to dst_type.
xpu([device]) – Moves all model parameters and buffers to the XPU.
zero_grad([set_to_none]) – Sets gradients of all model parameters to zero.
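Before the per-method reference below, here is a minimal usage sketch of this class (the batch of random images and its size are illustrative, not from the source):
>>> import torch
>>> from cleanlab.experimental.cifar_cnn import CNN
>>> model = CNN(input_channel=3, n_outputs=10, dropout_rate=0.25).eval()  # CIFAR-10 defaults; eval() disables dropout
>>> x = torch.randn(4, 3, 32, 32)  # fake batch of four 32x32 RGB images
>>> with torch.no_grad():
...     logits = model(x)
>>> logits.shape
torch.Size([4, 10])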
- T_destination#
alias of TypeVar('T_destination', bound=Mapping[str, Tensor])
- __call__(*input, **kwargs)#
Call self as a function.
- add_module(name, module)#
Adds a child module to the current module.
The module can be accessed as an attribute using the given name.
- Args:
- name (string): name of the child module. The child module can be accessed from this module using the given name
- module (Module): child module to be added to the module.
- Return type:
None
- apply(fn)#
Applies fn recursively to every submodule (as returned by .children()) as well as self. Typical use includes initializing the parameters of a model (see also torch.nn.init).
- Args:
fn (Module -> None): function to be applied to each submodule
- Returns:
Module: self
Example:
>>> @torch.no_grad()
>>> def init_weights(m):
>>>     print(m)
>>>     if type(m) == nn.Linear:
>>>         m.weight.fill_(1.0)
>>>         print(m.weight)
>>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))
>>> net.apply(init_weights)
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Linear(in_features=2, out_features=2, bias=True)
Parameter containing:
tensor([[ 1.,  1.],
        [ 1.,  1.]])
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
- Return type:
TypeVar(T, bound=Module)
- bfloat16()#
Casts all floating point parameters and buffers to bfloat16 datatype.
Note
This method modifies the module in-place.
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
- buffers(recurse=True)#
Returns an iterator over module buffers.
- Args:
- recurse (bool): if True, then yields buffers of this module
and all submodules. Otherwise, yields only buffers that are direct members of this module.
- Yields:
torch.Tensor: module buffer
Example:
>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- Return type:
Iterator[Tensor]
- children()#
Returns an iterator over immediate children modules.
- Yields:
Module: a child module
- Return type:
Iterator[Module]
- cpu()#
Moves all model parameters and buffers to the CPU.
Note
This method modifies the module in-place.
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
- cuda(device=None)#
Moves all model parameters and buffers to the GPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on GPU while being optimized.
Note
This method modifies the module in-place.
- Args:
- device (int, optional): if specified, all parameters will be
copied to that device
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
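To illustrate the ordering note above, a short sketch (requires a CUDA device; the learning rate is illustrative):
>>> model = CNN().cuda()  # move parameters and buffers to the GPU first...
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # ...then construct the optimizer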
- double()#
Casts all floating point parameters and buffers to double datatype.
Note
This method modifies the module in-place.
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
- dump_patches: bool = False#
This allows better BC support for load_state_dict(). In state_dict(), the version number will be saved as in the attribute _metadata of the returned state dict, and thus pickled. _metadata is a dictionary with keys that follow the naming convention of state dict. See _load_from_state_dict on how to use this information in loading.
If new parameters/buffers are added/removed from a module, this number shall be bumped, and the module's _load_from_state_dict method can compare the version number and do appropriate changes if the state dict is from before the change.
- eval()#
Sets the module in evaluation mode.
This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
This is equivalent to self.train(False).
See the PyTorch documentation on locally disabling gradient computation for a comparison between .eval() and several similar mechanisms that may be confused with it.
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
- extra_repr()#
Set the extra representation of the module.
To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
- Return type:
str
- float()#
Casts all floating point parameters and buffers to float datatype.
Note
This method modifies the module in-place.
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
- forward(x)[source]#
Defines the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
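A short sketch of the calling convention described in the note (the input shape is illustrative):
>>> model = CNN()
>>> x = torch.randn(1, 3, 32, 32)
>>> out = model(x)           # preferred: runs registered hooks, then forward(x)
>>> out = model.forward(x)   # works, but silently skips any registered hooks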
- get_buffer(target)#
Returns the buffer given by target if it exists, otherwise throws an error.
See the docstring for get_submodule for a more detailed explanation of this method's functionality as well as how to correctly specify target.
- Args:
- target: The fully-qualified string name of the buffer
to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
torch.Tensor: The buffer referenced by target
- Raises:
- AttributeError: If the target string references an invalid
path or resolves to something that is not a buffer
- Return type:
Tensor
- get_extra_state()#
Returns any extra state to include in the module's state_dict. Implement this and a corresponding set_extra_state() for your module if you need to store extra state. This function is called when building the module's state_dict().
Note that extra state should be picklable to ensure working serialization of the state_dict. We only provide backwards compatibility guarantees for serializing Tensors; other objects may break backwards compatibility if their serialized pickled form changes.
- Returns:
object: Any extra state to store in the module’s state_dict
- Return type:
Any
- get_parameter(target)#
Returns the parameter given by target if it exists, otherwise throws an error.
See the docstring for get_submodule for a more detailed explanation of this method's functionality as well as how to correctly specify target.
- Args:
- target: The fully-qualified string name of the Parameter
to look for. (See get_submodule for how to specify a fully-qualified string.)
- Returns:
torch.nn.Parameter: The Parameter referenced by target
- Raises:
- AttributeError: If the target string references an invalid path or resolves to something that is not an nn.Parameter
- Return type:
Parameter
- get_submodule(target)#
Returns the submodule given by target if it exists, otherwise throws an error.
For example, let's say you have an nn.Module A that looks like this:
(The diagram shows an nn.Module A. A has a nested submodule net_b, which itself has two submodules net_c and linear. net_c then has a submodule conv.)
To check whether or not we have the linear submodule, we would call get_submodule("net_b.linear"). To check whether we have the conv submodule, we would call get_submodule("net_b.net_c.conv").
The runtime of get_submodule is bounded by the degree of module nesting in target. A query against named_modules achieves the same result, but it is O(N) in the number of transitive modules. So, for a simple check to see if some submodule exists, get_submodule should always be used.
- Args:
- target: The fully-qualified string name of the submodule
to look for. (See above example for how to specify a fully-qualified string.)
- Returns:
torch.nn.Module: The submodule referenced by target
- Raises:
- AttributeError: If the target string references an invalid path or resolves to something that is not an nn.Module
- Return type:
Module
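The diagram itself did not survive extraction, so here is a minimal sketch reconstructing the described structure (layer sizes are illustrative):
>>> import torch.nn as nn
>>> class NetC(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.conv = nn.Conv2d(16, 33, 3)
>>> class NetB(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.net_c = NetC()
...         self.linear = nn.Linear(100, 200)
>>> class A(nn.Module):
...     def __init__(self):
...         super().__init__()
...         self.net_b = NetB()
>>> a = A()
>>> a.get_submodule("net_b.linear")
Linear(in_features=100, out_features=200, bias=True)
>>> a.get_submodule("net_b.net_c.conv")
Conv2d(16, 33, kernel_size=(3, 3), stride=(1, 1))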
- half()#
Casts all floating point parameters and buffers to half datatype.
Note
This method modifies the module in-place.
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
- load_state_dict(state_dict, strict=True)#
Copies parameters and buffers from state_dict into this module and its descendants. If strict is True, then the keys of state_dict must exactly match the keys returned by this module's state_dict() function.
- Args:
- state_dict (dict): a dict containing parameters and
persistent buffers.
- strict (bool, optional): whether to strictly enforce that the keys in state_dict match the keys returned by this module's state_dict() function. Default: True
- Returns:
NamedTuple with missing_keys and unexpected_keys fields:
missing_keys is a list of str containing the missing keys
unexpected_keys is a list of str containing the unexpected keys
- Note:
If a parameter or buffer is registered as None and its corresponding key exists in state_dict, load_state_dict() will raise a RuntimeError.
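A minimal save/load round trip for this model (the file name is illustrative):
>>> model = CNN()
>>> torch.save(model.state_dict(), "cifar_cnn.pt")  # persist parameters and buffers
>>> model2 = CNN()
>>> result = model2.load_state_dict(torch.load("cifar_cnn.pt"))
>>> result.missing_keys, result.unexpected_keys  # both empty when keys match exactly
([], [])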
- modules()#
Returns an iterator over all modules in the network.
- Yields:
Module: a module in the network
- Note:
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.modules()): print(idx, '->', m)
0 -> Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
)
1 -> Linear(in_features=2, out_features=2, bias=True)
- Return type:
Iterator[Module]
- named_buffers(prefix='', recurse=True)#
Returns an iterator over module buffers, yielding both the name of the buffer as well as the buffer itself.
- Args:
- prefix (str): prefix to prepend to all buffer names.
- recurse (bool): if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.
- Yields:
(string, torch.Tensor): Tuple containing the name and buffer
Example:
>>> for name, buf in self.named_buffers():
>>>     if name in ['running_var']:
>>>         print(buf.size())
- Return type:
Iterator[Tuple[str, Tensor]]
- named_children()#
Returns an iterator over immediate children modules, yielding both the name of the module as well as the module itself.
- Yields:
(string, Module): Tuple containing a name and child module
Example:
>>> for name, module in model.named_children():
>>>     if name in ['conv4', 'conv5']:
>>>         print(module)
- Return type:
Iterator[Tuple[str, Module]]
- named_modules(memo=None, prefix='', remove_duplicate=True)#
Returns an iterator over all modules in the network, yielding both the name of the module as well as the module itself.
- Args:
- memo: a memo to store the set of modules already added to the result
- prefix: a prefix that will be added to the name of the module
- remove_duplicate: whether to remove the duplicated module instances in the result or not
- Yields:
(string, Module): Tuple of name and module
- Note:
Duplicate modules are returned only once. In the following example, l will be returned only once.
Example:
>>> l = nn.Linear(2, 2)
>>> net = nn.Sequential(l, l)
>>> for idx, m in enumerate(net.named_modules()): print(idx, '->', m)
0 -> ('', Sequential(
  (0): Linear(in_features=2, out_features=2, bias=True)
  (1): Linear(in_features=2, out_features=2, bias=True)
))
1 -> ('0', Linear(in_features=2, out_features=2, bias=True))
- named_parameters(prefix='', recurse=True)#
Returns an iterator over module parameters, yielding both the name of the parameter as well as the parameter itself.
- Args:
- prefix (str): prefix to prepend to all parameter names.
- recurse (bool): if True, then yields parameters of this module and all submodules. Otherwise, yields only parameters that are direct members of this module.
- Yields:
(string, Parameter): Tuple containing the name and parameter
Example:
>>> for name, param in self.named_parameters():
>>>     if name in ['bias']:
>>>         print(param.size())
- Return type:
Iterator[Tuple[str, Parameter]]
- parameters(recurse=True)#
Returns an iterator over module parameters.
This is typically passed to an optimizer.
- Args:
- recurse (bool): if True, then yields parameters of this module
and all submodules. Otherwise, yields only parameters that are direct members of this module.
- Yields:
Parameter: module parameter
Example:
>>> for param in model.parameters():
>>>     print(type(param), param.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)
- Return type:
Iterator[Parameter]
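As noted above, this iterator is typically handed to an optimizer; a short sketch (hyperparameters are illustrative):
>>> model = CNN()
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)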
- register_backward_hook(hook)#
Registers a backward hook on the module.
This function is deprecated in favor of register_full_backward_hook() and the behavior of this function will change in future versions.
- Returns:
torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
RemovableHandle
- register_buffer(name, tensor, persistent=True)#
Adds a buffer to the module.
This is typically used to register a buffer that should not be considered a model parameter. For example, BatchNorm's running_mean is not a parameter, but is part of the module's state. Buffers, by default, are persistent and will be saved alongside parameters. This behavior can be changed by setting persistent to False. The only difference between a persistent buffer and a non-persistent buffer is that the latter will not be a part of this module's state_dict.
Buffers can be accessed as attributes using given names.
- Args:
- name (string): name of the buffer. The buffer can be accessed
from this module using the given name
- tensor (Tensor or None): buffer to be registered. If None, then operations that run on buffers, such as cuda, are ignored. If None, the buffer is not included in the module's state_dict.
- persistent (bool): whether the buffer is part of this module's state_dict.
Example:
>>> self.register_buffer('running_mean', torch.zeros(num_features))
- Return type:
None
- register_forward_hook(hook)#
Registers a forward hook on the module.
The hook will be called every time after forward() has computed an output. It should have the following signature:
hook(module, input, output) -> None or modified output
The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the output. It can modify the input inplace but it will not have effect on forward since this is called after forward() is called.
- Returns:
torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
RemovableHandle
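A sketch of a hook that logs output shapes (the hook function and its name are hypothetical):
>>> model = CNN()
>>> def shape_hook(module, inputs, output):
...     print(type(module).__name__, '->', tuple(output.shape))
>>> handle = model.register_forward_hook(shape_hook)
>>> _ = model(torch.randn(2, 3, 32, 32))
CNN -> (2, 10)
>>> handle.remove()  # detach the hook when no longer needed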
- register_forward_pre_hook(hook)#
Registers a forward pre-hook on the module.
The hook will be called every time before forward() is invoked. It should have the following signature:
hook(module, input) -> None or modified input
The input contains only the positional arguments given to the module. Keyword arguments won't be passed to the hooks and only to the forward. The hook can modify the input. The user can either return a tuple or a single modified value in the hook. We will wrap the value into a tuple if a single value is returned (unless that value is already a tuple).
- Returns:
torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
RemovableHandle
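A sketch of a pre-hook that modifies the input before forward() runs (the cast is hypothetical):
>>> model = CNN()
>>> def upcast(module, inputs):
...     return tuple(t.float() for t in inputs)  # returned tuple replaces the positional inputs
>>> handle = model.register_forward_pre_hook(upcast)
>>> out = model(torch.randn(2, 3, 32, 32, dtype=torch.float64))  # float64 input cast to float32 by the hook
>>> handle.remove()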
- register_full_backward_hook(hook)#
Registers a backward hook on the module.
The hook will be called every time the gradients with respect to module inputs are computed. The hook should have the following signature:
hook(module, grad_input, grad_output) -> tuple(Tensor) or None
The grad_input and grad_output are tuples that contain the gradients with respect to the inputs and outputs respectively. The hook should not modify its arguments, but it can optionally return a new gradient with respect to the input that will be used in place of grad_input in subsequent computations. grad_input will only correspond to the inputs given as positional arguments and all kwarg arguments are ignored. Entries in grad_input and grad_output will be None for all non-Tensor arguments.
For technical reasons, when this hook is applied to a Module, its forward function will receive a view of each Tensor passed to the Module. Similarly the caller will receive a view of each Tensor returned by the Module's forward function.
Warning
Modifying inputs or outputs inplace is not allowed when using backward hooks and will raise an error.
- Returns:
torch.utils.hooks.RemovableHandle: a handle that can be used to remove the added hook by calling handle.remove()
- Return type:
RemovableHandle
- register_module(name, module)#
Alias for add_module().
- Return type:
None
- register_parameter(name, param)#
Adds a parameter to the module.
The parameter can be accessed as an attribute using given name.
- Args:
- name (string): name of the parameter. The parameter can be accessed
from this module using the given name
- param (Parameter or None): parameter to be added to the module. If None, then operations that run on parameters, such as cuda, are ignored. If None, the parameter is not included in the module's state_dict.
- Return type:
None
- requires_grad_(requires_grad=True)#
Change if autograd should record operations on parameters in this module.
This method sets the parameters' requires_grad attributes in-place.
This method is helpful for freezing part of the module for finetuning or training parts of a model individually (e.g., GAN training).
See the PyTorch documentation on locally disabling gradient computation for a comparison between .requires_grad_() and several similar mechanisms that may be confused with it.
- Args:
- requires_grad (bool): whether autograd should record operations on parameters in this module. Default: True.
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
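A sketch of the freezing use case mentioned above (the submodule name below is hypothetical; inspect named_parameters() for this model's real layer names):
>>> model = CNN()
>>> _ = model.requires_grad_(False)  # freeze every parameter
>>> # model.final_layer.requires_grad_(True)  # hypothetical: unfreeze only the last layer for fine-tuning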
- set_extra_state(state)#
This function is called from load_state_dict() to handle any extra state found within the state_dict. Implement this function and a corresponding get_extra_state() for your module if you need to store extra state within its state_dict.
- Args:
state (dict): Extra state from the state_dict
- share_memory()#
See torch.Tensor.share_memory_()
- Return type:
TypeVar(T, bound=Module)
- state_dict(destination=None, prefix='', keep_vars=False)#
Returns a dictionary containing a whole state of the module.
Both parameters and persistent buffers (e.g. running averages) are included. Keys are corresponding parameter and buffer names. Parameters and buffers set to None are not included.
- Returns:
- dict:
a dictionary containing a whole state of the module
Example:
>>> module.state_dict().keys()
['bias', 'weight']
- to(*args, **kwargs)#
Moves and/or casts the parameters and buffers.
This can be called as
- to(device=None, dtype=None, non_blocking=False)
- to(dtype, non_blocking=False)
- to(tensor, non_blocking=False)
- to(memory_format=torch.channels_last)
Its signature is similar to torch.Tensor.to(), but only accepts floating point or complex dtypes. In addition, this method will only cast the floating point or complex parameters and buffers to dtype (if given). The integral parameters and buffers will be moved to device, if that is given, but with dtypes unchanged. When non_blocking is set, it tries to convert/move asynchronously with respect to the host if possible, e.g., moving CPU Tensors with pinned memory to CUDA devices.
See below for examples.
Note
This method modifies the module in-place.
- Args:
- device (torch.device): the desired device of the parameters and buffers in this module
- dtype (torch.dtype): the desired floating point or complex dtype of the parameters and buffers in this module
- tensor (torch.Tensor): Tensor whose dtype and device are the desired dtype and device for all parameters and buffers in this module
- memory_format (torch.memory_format): the desired memory format for 4D parameters and buffers in this module (keyword only argument)
- Returns:
Module: self
Examples:
>>> linear = nn.Linear(2, 2)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]])
>>> linear.to(torch.double)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1913, -0.3420],
        [-0.5113, -0.2325]], dtype=torch.float64)
>>> gpu1 = torch.device("cuda:1")
>>> linear.to(gpu1, dtype=torch.half, non_blocking=True)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')
>>> cpu = torch.device("cpu")
>>> linear.to(cpu)
Linear(in_features=2, out_features=2, bias=True)
>>> linear.weight
Parameter containing:
tensor([[ 0.1914, -0.3420],
        [-0.5112, -0.2324]], dtype=torch.float16)
>>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)
>>> linear.weight
Parameter containing:
tensor([[ 0.3741+0.j,  0.2382+0.j],
        [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)
>>> linear(torch.ones(3, 2, dtype=torch.cdouble))
tensor([[0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j],
        [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)
- to_empty(*, device)#
Moves the parameters and buffers to the specified device without copying storage.
- Args:
- device (torch.device): The desired device of the parameters and buffers in this module.
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
- train(mode=True)#
Sets the module in training mode.
This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
- Args:
- mode (bool): whether to set training mode (True) or evaluation mode (False). Default: True.
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
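A short sketch of toggling modes, which matters for this model because it uses dropout (and batch normalization on the logits when top_bn=True):
>>> model = CNN(dropout_rate=0.25)
>>> _ = model.train()  # dropout is active
>>> _ = model.eval()   # dropout disabled; equivalent to model.train(False)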
- training: bool#
- type(dst_type)#
Casts all parameters and buffers to dst_type.
Note
This method modifies the module in-place.
- Args:
dst_type (type or string): the desired type
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
- xpu(device=None)#
Moves all model parameters and buffers to the XPU.
This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.
Note
This method modifies the module in-place.
- Args:
- device (int, optional): if specified, all parameters will be
copied to that device
- Returns:
Module: self
- Return type:
TypeVar(T, bound=Module)
- zero_grad(set_to_none=False)#
Sets gradients of all model parameters to zero. See the similar function under torch.optim.Optimizer for more context.
- Args:
- set_to_none (bool): instead of setting to zero, set the grads to None. See torch.optim.Optimizer.zero_grad() for details.
- Return type:
None
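A sketch of where zeroing gradients fits in a basic training step (the data and hyperparameters are illustrative):
>>> import torch.nn.functional as F
>>> model = CNN()
>>> optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
>>> x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
>>> optimizer.zero_grad()  # clear stale gradients before backward()
>>> loss = F.cross_entropy(model(x), y)
>>> loss.backward()
>>> optimizer.step()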