Preliminary Setup¶
Let's gather some datasets and get some imports out of the way:
from fastestimator import Pipeline
from fastestimator.dataset.data import cifair10
from fastestimator.op.numpyop import LambdaOp, NumpyOp
from fastestimator.op.numpyop.multivariate import HorizontalFlip, Rotate, VerticalFlip
from fastestimator.op.numpyop.univariate import Blur
from fastestimator.util import to_number, BatchDisplay, GridDisplay
train_data, eval_data = cifair10.load_data()
class AddOne(NumpyOp):
    def __init__(self, inputs, outputs, mode=None):
        super().__init__(inputs, outputs, mode)

    def forward(self, data, state):
        return data + 1
Meta Op Overview¶
We learned about the operator structure in Beginner Tutorial 3. Operators are used to build complex computation graphs in FastEstimator.
Meta Ops are Operators which take other Operators as inputs and modify their functionality. These can allow for much more complicated computation graphs, as we will see in the following examples. They are available both as NumpyOps for use in a Pipeline, and as TensorOps for use in a Network.
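The Meta Ops discussed below (Sometimes, OneOf, Repeat, and Fuse) therefore live in two parallel modules. As a quick reference (this simply mirrors the import comments in the code cells later in this tutorial):
# Meta Ops that wrap NumpyOps, for use in a Pipeline:
from fastestimator.op.numpyop.meta import Fuse, OneOf, Repeat, Sometimes

# The same Meta Ops also exist as TensorOp wrappers, for use in a Network:
# from fastestimator.op.tensorop.meta import Fuse, OneOf, Repeat, Sometimes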
Sometimes¶
Sometimes is a Meta Op which applies a given Op with a specified probability, by default 50% of the time. The Sometimes Op cannot be used to create keys which do not already exist in the data dictionary, since it would be unclear what to do when the Op decides not to execute. One convenient way to create default values is to first use a LambdaOp, as described in Advanced Tutorial 3.
from fastestimator.op.numpyop.meta import Sometimes # Note that there is also a Sometimes in tensorop.meta
pipeline = Pipeline(train_data=train_data,
                    eval_data=eval_data,
                    batch_size=4,
                    ops=[LambdaOp(fn=lambda x: x, inputs="x", outputs="x_out"),
                         Sometimes(HorizontalFlip(image_in="x", image_out="x_out", mode="train"), prob=0.5)])
data = pipeline.get_results()
fig = GridDisplay([BatchDisplay(image=data["x"], title="x"),
                   BatchDisplay(image=data["x_out"], title="x_out")])
fig.show()
OneOf¶
OneOf takes a list of Ops as input and randomly chooses one of them to execute at each step. The Ops to be selected between must all share the same inputs, outputs, and modes.
from fastestimator.op.numpyop.meta import OneOf # Note that there is also a OneOf in tensorop.meta
pipeline = Pipeline(train_data=train_data,
                    eval_data=eval_data,
                    batch_size=4,
                    ops=[LambdaOp(fn=lambda x: x, inputs="x", outputs="x_out"),
                         OneOf(Rotate(image_in="x", image_out="x_out", mode="train", limit=45),
                               VerticalFlip(image_in="x", image_out="x_out", mode="train"),
                               Blur(inputs="x", outputs="x_out", mode="train", blur_limit=7))])
data = pipeline.get_results()
fig = GridDisplay([BatchDisplay(image=data["x"], title="x"),
                   BatchDisplay(image=data["x_out"], title="x_out")])
fig.show()
Repeat¶
Repeat takes an Op and runs it multiple times in a row. It can be set to repeat a fixed (static) number of times, or to repeat until a given input function evaluates to False (dynamic). Repeat will always evaluate at least once; after each forward pass it checks whether the stopping criterion has been met. If an input function is used to determine the stopping criterion, any input arguments to that function are looked up by name from the data dictionary and passed through to the function for evaluation.
Static¶
We will start with a static example of the Repeat Op, which will always run 5 times:
from fastestimator.op.numpyop.meta import Repeat # Note that there is also a Repeat in tensorop.meta
pipeline = Pipeline(train_data=train_data,
                    eval_data=eval_data,
                    batch_size=4,
                    ops=[LambdaOp(fn=lambda: 0, outputs="z"),
                         Repeat(AddOne(inputs="z", outputs="z"), repeat=5)])
data = pipeline.get_results()
print(data['z'])
tensor([5, 5, 5, 5])
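Every element of the batch is 5 because each sample's z starts at 0 and is incremented 5 times. Conceptually, the per-sample computation looks roughly like this (just a sketch, not the actual FastEstimator implementation):
z = 0               # LambdaOp(fn=lambda: 0, outputs="z")
for _ in range(5):  # Repeat(..., repeat=5)
    z = z + 1       # AddOne
print(z)            # 5 -> with batch_size=4 this becomes tensor([5, 5, 5, 5])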
Dynamic¶
Now let's see an example of a dynamic Repeat Op, which uses a lambda function to determine when it should stop. In this case, the repeat will continue so long as z is less than 6.5:
from fastestimator.op.numpyop.meta import Repeat # Note that there is also a Repeat in tensorop.meta
pipeline = Pipeline(train_data=train_data,
                    eval_data=eval_data,
                    batch_size=4,
                    ops=[LambdaOp(fn=lambda: 0, outputs="z"),
                         Repeat(AddOne(inputs="z", outputs="z"), repeat=lambda z: z < 6.5)])
data = pipeline.get_results()
print(data['z'])
tensor([7, 7, 7, 7])
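The Op runs once unconditionally, and then keeps running while the lambda returns True; once z reaches 7, the check 7 < 6.5 fails and the repetition stops. Per sample, the logic is roughly (again just a sketch):
z = 0           # LambdaOp(fn=lambda: 0, outputs="z")
z = z + 1       # Repeat always evaluates at least once
while z < 6.5:  # stopping criterion re-checked after each pass
    z = z + 1   # AddOne
print(z)        # 7 -> with batch_size=4 this becomes tensor([7, 7, 7, 7])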
Fuse¶
Fuse takes a list of Ops and combines them into a single Op. All of the fused Ops must have the same mode. This can be useful in conjunction with the other Meta Ops. For example, suppose you have Op A and Op B, and you want to run Sometimes(A), but you only want B to execute when A is chosen to run by the Sometimes. You could then run Sometimes(Fuse([A, B])). Or, if you wanted to perform mini-batch training within a network, you could do something like Repeat(Fuse([Model, Loss, Update])) (a sketch of this appears after the example below). Let's try an example where we either leave an image alone, or perform both a horizontal and vertical flip on it:
from fastestimator.op.numpyop.meta import Sometimes, Fuse # Note that Sometimes and Fuse are also available in tensorop.meta
pipeline = Pipeline(train_data=train_data,
                    eval_data=eval_data,
                    batch_size=4,
                    ops=[LambdaOp(fn=lambda x: x, inputs="x", outputs="x_out"),
                         Sometimes(
                             Fuse([HorizontalFlip(image_in="x", image_out="x_out", mode="train"),
                                   VerticalFlip(image_in="x_out", image_out="x_out", mode="train")]))])
data = pipeline.get_results()
fig = GridDisplay([BatchDisplay(image=data["x"], title="x"),
                   BatchDisplay(image=data["x_out"], title="x_out")])
fig.show()
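As promised above, here is a rough sketch of the Repeat(Fuse([Model, Loss, Update])) pattern for mini-batch training within a Network. It uses the TensorOp Meta Ops rather than the NumpyOp ones from this pipeline; the LeNet model and the repeat count of 4 are illustrative choices, not something prescribed by this tutorial:
import fastestimator as fe
from fastestimator.architecture.tensorflow import LeNet
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.meta import Fuse, Repeat
from fastestimator.op.tensorop.model import ModelOp, UpdateOp

model = fe.build(model_fn=lambda: LeNet(input_shape=(32, 32, 3)), optimizer_fn="adam")  # illustrative model
network = fe.Network(ops=[
    Repeat(Fuse([ModelOp(model=model, inputs="x", outputs="y_pred"),   # forward pass
                 CrossEntropy(inputs=("y_pred", "y"), outputs="ce"),   # loss computation
                 UpdateOp(model=model, loss_name="ce")]),              # weight update
           repeat=4)  # run the fused forward/loss/update block 4 times per batch
])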
Apphub Examples¶
You can find some practical examples of the concepts described here in the following FastEstimator Apphubs: