f1_score
F1Score
Bases: Trace
Calculate the F1 score for a classification task and report it back to the logger.
Consider using MCC instead: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6941312/
Parameters:

Name | Type | Description | Default |
---|---|---|---|
true_key | str | Name of the key that corresponds to ground truth in the batch dictionary. | required |
pred_key | str | Name of the key that corresponds to predicted score in the batch dictionary. | required |
mode | Union[None, str, Iterable[str]] | What mode(s) to execute this Trace in. For example, "train", "eval", "test", or "infer". To execute regardless of mode, pass None. To execute in all modes except for a particular one, you can pass an argument like "!infer" or "!train". | ('eval', 'test') |
ds_id | Union[None, str, Iterable[str]] | What dataset id(s) to execute this Trace in. To execute regardless of ds_id, pass None. To execute in all ds_ids except for a particular one, you can pass an argument like "!ds1". | None |
output_name | str | Name of the key to store back to the state. | 'f1_score' |
per_ds | bool | Whether to automatically compute this metric individually for every ds_id it runs on, in addition to computing an aggregate across all ds_ids on which it runs. This is automatically False if | True |
**kwargs | | Additional keyword arguments that are passed to sklearn.metrics.f1_score(). | {} |
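Independent of the Trace machinery, the metric itself is the harmonic mean of precision and recall over the values found under `true_key` and `pred_key`. A minimal, stdlib-only sketch of the binary F1 computation that this trace delegates to `sklearn.metrics.f1_score` (the helper name `binary_f1` is illustrative, not part of the library):

```python
def binary_f1(y_true, y_pred):
    """Binary F1 score: harmonic mean of precision and recall."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        # No true positives: precision/recall are 0 (or undefined), so F1 is 0.
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# 3 true positives, 1 false positive, 1 false negative
print(binary_f1([1, 1, 1, 0, 1, 0], [1, 1, 1, 1, 0, 0]))  # 0.75
```

For multi-class labels, `sklearn.metrics.f1_score` averages per-class scores according to its `average` argument, which is one reason the trace accepts extra `**kwargs` to forward to it.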
Raises:

Type | Description |
---|---|
ValueError | If one of the ["y_pred", "y_true", "average"] arguments exists in kwargs. |
Source code in fastestimator/fastestimator/trace/metric/f1_score.py
check_kwargs

staticmethod
Check whether kwargs contains any blacklisted argument and raise an error if it does.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
kwargs | Dict[str, Any] | Keyword arguments to be examined. | required |
Raises:

Type | Description |
---|---|
ValueError | If one of the ["y_true", "y_pred", "average"] arguments exists in kwargs. |
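The check described above amounts to intersecting the caller's kwargs with a small blacklist of names the trace reserves for itself (it supplies y_true/y_pred from the batch). A hedged, self-contained sketch of such a validator (the exact error message and internals of the library's version may differ):

```python
from typing import Any, Dict

# Arguments the trace fills in itself, so callers must not override them.
BLACKLIST = ("y_true", "y_pred", "average")

def check_kwargs(kwargs: Dict[str, Any]) -> None:
    """Raise a ValueError if kwargs contains any reserved argument."""
    clashes = [key for key in BLACKLIST if key in kwargs]
    if clashes:
        raise ValueError(f"F1Score does not accept arguments {clashes}")

# Fine: no reserved keys, these are forwarded to sklearn.metrics.f1_score()
check_kwargs({"labels": [0, 1], "zero_division": 0})
```

Passing a reserved key such as `average` would instead raise a ValueError at trace-construction time, failing fast rather than silently overriding the values the trace computes from the batch.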