attention_unet
AttentionBlock
Bases: Module
An Attention block.
This class is intentionally not @traceable (models and layers are handled by a different process).
Parameters:
Name | Type | Description | Default
---|---|---|---
in_channels | int | How many channels enter the attention block. | required
out_channels | int | How many channels leave the attention block. | required
Source code in fastestimator/fastestimator/architecture/pytorch/attention_unet.py
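A minimal construction sketch, assuming FastEstimator is installed. The channel counts are illustrative, and since this reference does not show the block's forward signature, only instantiation is demonstrated:

```python
from fastestimator.architecture.pytorch.attention_unet import AttentionBlock

# Illustrative channel counts: 64 feature channels in and out.
block = AttentionBlock(in_channels=64, out_channels=64)

# Sanity-check construction by counting trainable parameters.
n_params = sum(p.numel() for p in block.parameters() if p.requires_grad)
print(f"AttentionBlock trainable parameters: {n_params}")
```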
AttentionUNet
Bases: Module
Attention-based UNet implementation in PyTorch.
This class is intentionally not @traceable (models and layers are handled by a different process).
Parameters:
Name | Type | Description | Default
---|---|---|---
input_size | Tuple[int, int, int] | The size of the input tensor (channels, height, width). | (1, 128, 128)
Raises:
Type | Description
---|---
ValueError | Length of input_size is not 3.
ValueError | input_size[1] or input_size[2] is not a multiple of 16.
Source code in fastestimator/fastestimator/architecture/pytorch/attention_unet.py
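A short usage sketch for the full model. It assumes the standard single-tensor nn.Module forward call and uses the default `input_size` of (1, 128, 128); the batch size of 2 and the expected output shape are illustrative assumptions, not taken from this reference:

```python
import torch

from fastestimator.architecture.pytorch.attention_unet import AttentionUNet

# Default input_size: 1 channel, 128x128 spatial dims (both multiples of 16).
model = AttentionUNet(input_size=(1, 128, 128))

# A dummy batch of 2 single-channel 128x128 images.
x = torch.rand(2, 1, 128, 128)
with torch.no_grad():
    y = model(x)
print(y.shape)  # Assumed to preserve the input spatial dims, e.g. torch.Size([2, 1, 128, 128])
```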
UNetDecoderBlock
Bases: Module
A UNet decoder block.
This class is intentionally not @traceable (models and layers are handled by a different process).
Parameters:
Name | Type | Description | Default
---|---|---|---
in_channels | int | How many channels enter the decoder. | required
mid_channels | int | How many channels are used for the decoder's intermediate layer. | required
out_channels | int | How many channels leave the decoder. | required
Source code in fastestimator/fastestimator/architecture/pytorch/attention_unet.py
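A construction sketch with illustrative channel counts, chosen to mimic a typical decoder stage that halves the channel depth (the specific values are assumptions, not from this reference):

```python
from fastestimator.architecture.pytorch.attention_unet import UNetDecoderBlock

# Illustrative: 256 channels in, 128 in the intermediate layer, 128 out.
decoder = UNetDecoderBlock(in_channels=256, mid_channels=128, out_channels=128)
print(decoder)  # Inspect the layers PyTorch registered for this block.
```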
UNetEncoderBlock
Bases: Module
A UNet encoder block.
This class is intentionally not @traceable (models and layers are handled by a different process).
Parameters:
Name | Type | Description | Default
---|---|---|---
in_channels | int | How many channels enter the encoder. | required
out_channels | int | How many channels leave the encoder. | required
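A matching construction sketch for the encoder side, again with illustrative channel counts (a typical first stage maps the raw input channels to 64 features):

```python
from fastestimator.architecture.pytorch.attention_unet import UNetEncoderBlock

# Illustrative: single-channel input mapped to 64 feature channels.
encoder = UNetEncoderBlock(in_channels=1, out_channels=64)
print(encoder)  # Inspect the layers PyTorch registered for this block.
```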