# util

Utilities for FastEstimator.
## Timer

Bases: `ContextDecorator`

A class that can be used to time things.

This class is intentionally not @traceable.
```python
import fastestimator as fe

x = lambda: list(map(lambda i: i + i / 2, list(range(int(1e6)))))
with fe.util.Timer():
    x()  # Task took 0.1639 seconds

@fe.util.Timer("T2")
def func():
    return x()

func()  # T2 took 0.14819 seconds
```
Source code in `fastestimator/fastestimator/util/util.py`
## cpu_count

Determine the number of available CPUs (correcting for docker container limits).
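The docker correction can be sketched as follows. This is a hypothetical re-implementation, not FastEstimator's actual code: it assumes a cgroup v1 container exposes its CPU allowance via `cpu.cfs_quota_us` / `cpu.cfs_period_us`, and that an invalid `limit` raises a `ValueError`:

```python
import os

def cpu_count_sketch(limit=None):
    """Return the number of usable CPUs, honoring docker cgroup limits."""
    if limit is not None:
        if limit < 1:
            raise ValueError(f"limit must be a positive integer, got {limit}")
        return limit
    count = os.cpu_count() or 1
    try:
        # cgroup v1: quota / period is the container's CPU allowance
        with open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us") as f:
            quota = int(f.read())
        with open("/sys/fs/cgroup/cpu/cpu.cfs_period_us") as f:
            period = int(f.read())
        if quota > 0 and period > 0:
            count = min(count, max(1, quota // period))
    except OSError:
        pass  # not running inside a cgroup v1 container
    return count
```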
**Parameters:**

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `limit` | `Optional[int]` | If provided, the TF and Torch backends will be told to use `limit` CPUs. | `None` |

**Returns:**

| Type | Description |
| --- | --- |
| `int` | The number of available CPUs (correcting for docker container limits), or the user-provided `limit`. |

**Raises:**

| Type | Description |
| --- | --- |
| `ValueError` | If a `limit` … |
Source code in `fastestimator/fastestimator/util/util.py`
## draw
## get_batch_size

Infer the batch size from a batch dictionary. Dictionary values whose data type lacks a `shape` attribute are ignored.
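The inference described above can be sketched roughly as follows. This is a hypothetical re-implementation, not the library's actual code: it takes the first axis of every value that has a non-empty `shape` and requires them to agree:

```python
import numpy as np

def get_batch_size_sketch(data):
    """Infer the batch size from the first axis of any array-like values."""
    sizes = {v.shape[0] for v in data.values() if hasattr(v, "shape") and v.shape}
    assert len(sizes) == 1, f"Inconsistent or missing batch sizes: {sizes}"
    return sizes.pop()

batch = {"x": np.ones((8, 32, 32)), "y": np.zeros((8,)), "id": "batch-0"}
get_batch_size_sketch(batch)  # 8 (the "id" string has no shape and is ignored)
```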
**Parameters:**

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `data` | `Dict[str, Any]` | The batch dictionary. | *required* |

**Returns:**

| Type | Description |
| --- | --- |
| `int` | The batch size. |
Source code in `fastestimator/fastestimator/util/util.py`
## get_num_devices

Determine the number of available GPUs.
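A minimal sketch of the idea, assuming the PyTorch backend is used for detection (the real implementation may differ):

```python
def get_num_devices_sketch():
    """Count visible GPUs, falling back to 1 (the CPU) if none are found."""
    try:
        import torch
        return max(torch.cuda.device_count(), 1)
    except ImportError:
        return 1
```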
**Returns:**

| Type | Description |
| --- | --- |
| `int` | The number of available GPUs, or 1 if none are found. |
## pad_batch

A function to pad a batch of data in-place by appending to the ends of the tensors. Values must be Numpy arrays; other value types are ignored, though tf.Tensor and torch.Tensor values will cause an error.
```python
import numpy as np
import fastestimator as fe

data = [{"x": np.ones((2, 2)), "y": 8}, {"x": np.ones((3, 1)), "y": 4}]
fe.util.pad_batch(data, pad_value=0)
print(data)  # [{'x': [[1., 1.], [1., 1.], [0., 0.]], 'y': 8}, {'x': [[1., 0.], [1., 0.], [1., 0.]], 'y': 4}]
```
**Parameters:**

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `batch` | `List[MutableMapping[str, np.ndarray]]` | A list of data to be padded. | *required* |
| `pad_value` | `Union[float, int]` | The value to pad with. | *required* |

**Raises:**

| Type | Description |
| --- | --- |
| `AssertionError` | If the data within the batch do not have matching rank, or have different keys. |
Source code in `fastestimator/fastestimator/util/util.py`
## pad_data

Pad `data` by appending `pad_value`s along its dimensions until `target_shape` is reached. All entries of `target_shape` should be larger than those of `data.shape`, and the two shapes must have the same rank.
```python
import numpy as np
import fastestimator as fe

x = np.ones((1, 2))
x = fe.util.pad_data(x, target_shape=(3, 3), pad_value=-2)  # [[1, 1, -2], [-2, -2, -2], [-2, -2, -2]]
x = fe.util.pad_data(x, target_shape=(3, 3, 3), pad_value=-2)  # error (rank mismatch)
x = fe.util.pad_data(x, target_shape=(4, 1), pad_value=-2)  # error (target entry smaller than data)
```
**Parameters:**

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `data` | `np.ndarray` | The data to be padded. | *required* |
| `target_shape` | `Tuple[int, ...]` | The desired shape for `data`. | *required* |
| `pad_value` | `Union[float, int]` | The value to insert into `data` if padding is required. | *required* |

**Returns:**

| Type | Description |
| --- | --- |
| `np.ndarray` | The padded data. |
Source code in `fastestimator/fastestimator/util/util.py`
## to_number
Convert an input value into a Numpy ndarray.
This method can be used with Python and Numpy data:
```python
import numpy as np
import fastestimator as fe

b = fe.backend.to_number(5)    # 5 (type==np.ndarray)
b = fe.backend.to_number(4.0)  # 4.0 (type==np.ndarray)
n = np.array([1, 2, 3])
b = fe.backend.to_number(n)    # [1, 2, 3] (type==np.ndarray)
```
This method can be used with TensorFlow tensors:
This method can be used with PyTorch tensors:
**Parameters:**

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| `data` | `Union[tf.Tensor, torch.Tensor, np.ndarray, int, float, str]` | The value to be converted into a np.ndarray. | *required* |

**Returns:**

| Type | Description |
| --- | --- |
| `np.ndarray` | An ndarray corresponding to the given `data`. |