# _focal_loss

## focal_loss
Calculate the focal loss between two tensors.
Original implementation from https://github.com/facebookresearch/fvcore/blob/master/fvcore/nn/focal_loss.py. Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002.
This method can be used with TensorFlow tensors:

```python
true = tf.constant([[1], [1], [1], [0], [0], [0]])
pred = tf.constant([[0.97], [0.91], [0.73], [0.27], [0.09], [0.03]])
b = fe.backend.focal_loss(y_pred=pred, y_true=true, gamma=None, alpha=None)  # 0.1464
b = fe.backend.focal_loss(y_pred=pred, y_true=true, gamma=2.0, alpha=0.25)  # 0.00395
```
This method can be used with PyTorch tensors:

```python
true = torch.tensor([[1], [1], [1], [0], [0], [0]])
pred = torch.tensor([[0.97], [0.91], [0.73], [0.27], [0.09], [0.03]])
b = fe.backend.focal_loss(y_pred=pred, y_true=true, gamma=None, alpha=None)  # 0.1464
b = fe.backend.focal_loss(y_pred=pred, y_true=true, gamma=2.0, alpha=0.25)  # 0.004
```
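The values in the comments above can be reproduced by hand. The sketch below is an illustrative NumPy re-implementation of the focal-loss arithmetic with `'mean'` sample reduction (not FastEstimator's actual code; `focal_loss_sketch` is a hypothetical name):

```python
import numpy as np

def focal_loss_sketch(y_true, y_pred, gamma=2.0, alpha=0.25):
    """Illustrative focal loss on probability scores (not FastEstimator's code).

    Passing gamma=None and alpha=None reduces this to plain binary cross-entropy.
    """
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    # p_t is the predicted probability assigned to the true class
    p_t = y_true * y_pred + (1 - y_true) * (1 - y_pred)
    loss = -np.log(p_t)  # element-wise binary cross-entropy
    if gamma is not None:
        loss = loss * (1 - p_t) ** gamma  # down-weight easy examples
    if alpha is not None:
        alpha_t = y_true * alpha + (1 - y_true) * (1 - alpha)
        loss = alpha_t * loss  # re-balance positive vs negative examples
    return loss.mean()  # 'mean' sample reduction

true = [[1], [1], [1], [0], [0], [0]]
pred = [[0.97], [0.91], [0.73], [0.27], [0.09], [0.03]]
print(focal_loss_sketch(true, pred, gamma=None, alpha=None))  # ~0.1464
print(focal_loss_sketch(true, pred, gamma=2.0, alpha=0.25))   # ~0.00396 (0.00395/0.004 above are rounded)
```

With `gamma=2.0, alpha=0.25`, the confident predictions (0.97, 0.03) contribute almost nothing, which is why the loss drops from ~0.146 to ~0.004.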
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` | `Tensor` | Ground truth class labels with shape `(batch_size, d0, .. dN)`, which should take values of 1 or 0. | required |
| `y_pred` | `Tensor` | Prediction score for each class, with a shape like `y_true`. dtype: float32 or float16. | required |
| `alpha` | `float` | Weighting factor in range (0, 1) to balance positive vs negative examples, or -1/None to ignore. | `0.25` |
| `gamma` | `float` | Exponent of the modulating factor (1 - p_t) to balance easy vs hard examples. | `2.0` |
| `normalize` | `bool` | Whether to normalize the focal loss by the number of positive classes per sample. | `True` |
| `shape_reduction` | `str` |  | `'sum'` |
| `from_logits` | `bool` | Whether `y_pred` is logits (without sigmoid applied). | `False` |
| `sample_reduction` | `str` | One of `'none'`, `'mean'`, or `'sum'`. `'none'`: no reduction is applied to the output. `'mean'`: the output is averaged across the batch. `'sum'`: the output is summed across the batch. | `'mean'` |
| `label_smoothing` | `float` | Float in `[0, 1]` controlling label smoothing. | `0.0` |
Returns: The focal loss between `y_true` and `y_pred`.
Raises:

| Type | Description |
| --- | --- |
| `ValueError` | If |
Source code in fastestimator/fastestimator/backend/_focal_loss.py
## pytorch_focal_loss
Calculate the focal loss between two tensors.
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` | `Tensor` | Ground truth class labels with shape `(batch_size, d0, .. dN)`, which should take values of 1 or 0. | required |
| `y_pred` | `Tensor` | Prediction score for each class, with a shape like `y_true`. dtype: float32 or float16. | required |
| `alpha` | `float` | Weighting factor in range (0, 1) to balance positive vs negative examples, or -1/None to ignore. | `0.25` |
| `gamma` | `float` | Exponent of the modulating factor (1 - p_t) to balance easy vs hard examples. | `2` |
| `from_logits` | `bool` | Whether `y_pred` is logits (without sigmoid applied). | `False` |
Returns: Loss tensor.
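The `from_logits` flag only changes the first step of the computation: logits are passed through a sigmoid and then fed into the same focal-loss math, following the fvcore formulation referenced above. A minimal NumPy sketch of that equivalence (illustrative only; `focal_from_probs` and `focal_from_logits` are hypothetical names, not the library's API):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def focal_from_probs(y_true, p, gamma=2.0, alpha=0.25):
    # Element-wise focal loss on probability scores (fvcore-style formulation)
    p_t = y_true * p + (1 - y_true) * (1 - p)
    alpha_t = y_true * alpha + (1 - y_true) * (1 - alpha)
    return alpha_t * (1 - p_t) ** gamma * -np.log(p_t)

def focal_from_logits(y_true, logits, gamma=2.0, alpha=0.25):
    # from_logits=True path: squash logits through a sigmoid, then reuse the same math
    return focal_from_probs(y_true, sigmoid(logits), gamma, alpha)

y_true = np.array([1.0, 1.0, 0.0, 0.0])
logits = np.array([2.0, -0.5, 1.0, -3.0])
probs = sigmoid(logits)
# Both paths agree element-wise
print(np.allclose(focal_from_probs(y_true, probs),
                  focal_from_logits(y_true, logits)))  # True
```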
Source code in fastestimator/fastestimator/backend/_focal_loss.py
## tf_focal_loss
Computes the binary focal crossentropy loss.
According to Lin et al., 2018, it helps to apply a focal factor to down-weight easy examples and focus more on hard examples. By default, the focal tensor is computed as follows:

```
focal_factor = (1 - output) ** gamma  # for class 1
focal_factor = output ** gamma        # for class 0
```

where `gamma` is a focusing parameter. When `gamma` = 0, there is no focal effect on the binary crossentropy loss.

If `apply_class_balancing == True`, this function also takes into account a weight balancing factor for the binary classes 0 and 1 as follows:

```
weight = alpha      # for class 1 (target == 1)
weight = 1 - alpha  # for class 0
```

where `alpha` is a float in the range of `[0, 1]`.
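Putting the two pieces together, the per-element loss is `weight * focal_factor * bce`, averaged over the class axis. The sketch below is an illustrative NumPy version of that computation (not Keras's actual implementation; `binary_focal_loss_sketch` is a hypothetical name):

```python
import numpy as np

def binary_focal_loss_sketch(y_true, y_pred, gamma=2.0, alpha=0.25,
                             apply_class_balancing=False):
    """Illustrative binary focal crossentropy (not Keras's code)."""
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    # Plain element-wise binary crossentropy
    bce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    # focal_factor = (1 - output)**gamma for class 1, output**gamma for class 0
    focal_factor = np.where(y_true == 1, (1 - y_pred) ** gamma, y_pred ** gamma)
    loss = focal_factor * bce
    if apply_class_balancing:
        # weight = alpha for class 1, 1 - alpha for class 0
        weight = np.where(y_true == 1, alpha, 1 - alpha)
        loss = weight * loss
    return loss.mean(axis=-1)  # reduce over the class axis

y_true = [[0, 1], [0, 0]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
print(binary_focal_loss_sketch(y_true, y_pred, gamma=2))  # ~[0.330, 0.206]
```

With `gamma=2`, both elements of the first sample have a focal factor of `0.36`, so the already-moderate crossentropy values are scaled down before averaging.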
Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `y_true` |  | Ground truth values, of shape `(batch_size, d0, .. dN)`. | required |
| `y_pred` |  | The predicted values, of shape `(batch_size, d0, .. dN)`. | required |
| `alpha` |  | A weight balancing factor for class 1, default is `0.25`. | `0.25` |
| `gamma` |  | A focusing parameter, default is `2.0`. | `2.0` |
| `from_logits` |  | Whether `y_pred` is logits (without sigmoid applied). | `False` |
| `label_smoothing` |  | Float in `[0, 1]` controlling label smoothing. | `0.0` |
Returns:

| Type | Description |
| --- | --- |
|  | Binary focal crossentropy loss value with shape = `[batch_size, d0, .. dN-1]`. |
Example:

```python
y_true = [[0, 1], [0, 0]]
y_pred = [[0.6, 0.4], [0.4, 0.6]]
loss = tf_focal_loss(y_true, y_pred, gamma=2)
assert loss.shape == (2,)
loss
# array([0.330, 0.206], dtype=float32)
```