Search API¶
There are many things in life that require searching for an optimal solution in a given space, regardless of whether deep learning is involved. For example:
- What is the x that leads to the minimal value of (x-3)**2?
- What is the best learning rate and batch size combo that can produce the lowest evaluation loss after 2 epochs of training?
- What is the best augmentation magnitude that can lead to the highest evaluation accuracy?
The fe.search API is designed to make such searches easier. The API can be used independently for any search problem, as it only requires the following two components:
- an objective function that measures the score of a solution.
- whether a maximum or minimum score is desired.
We will start with a simple example using GridSearch. Say we want to find the x that produces the minimal value of (x-3)**2, where x is chosen from the list [0.5, 1.5, 2.9, 4, 5.3]:
from fastestimator.search import GridSearch

def objective_fn(search_idx, x):
    return {"objective": (x-3)**2}

grid_search = GridSearch(eval_fn=objective_fn, params={"x": [0.5, 1.5, 2.9, 4, 5.3]})
Note that one of the arguments of the evaluation function must be search_idx. This helps users differentiate between multiple search runs. To run the search, simply call:
grid_search.fit()
FastEstimator-Search: Evaluated {'x': 0.5, 'search_idx': 1}, result: {'objective': 6.25}
FastEstimator-Search: Evaluated {'x': 1.5, 'search_idx': 2}, result: {'objective': 2.25}
FastEstimator-Search: Evaluated {'x': 2.9, 'search_idx': 3}, result: {'objective': 0.010000000000000018}
FastEstimator-Search: Evaluated {'x': 4, 'search_idx': 4}, result: {'objective': 1}
FastEstimator-Search: Evaluated {'x': 5.3, 'search_idx': 5}, result: {'objective': 5.289999999999999}
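The search_idx can also be handy inside your own evaluation function, for example to keep the artifacts of each run separate. A minimal sketch (the search_runs directory layout below is purely illustrative):
import os

def objective_fn_with_artifacts(search_idx, x):
    # a hypothetical per-run folder named after the search index
    run_dir = os.path.join("search_runs", f"run_{search_idx}")
    os.makedirs(run_dir, exist_ok=True)
    # ... write any per-run logs or checkpoints into run_dir here ...
    return {"objective": (x - 3) ** 2}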
Getting the search results¶
After the search is done, you can call search.get_best_results or search.get_search_summary to see the best result and the overall search history:
print("best search result:")
print(grid_search.get_best_results(best_mode="min", optimize_field="objective"))
best search result: {'param': {'x': 2.9, 'search_idx': 3}, 'result': {'objective': 0.010000000000000018}}
print("search history:")
print(grid_search.get_search_summary())
search history: [{'param': {'x': 0.5, 'search_idx': 1}, 'result': {'objective': 6.25}}, {'param': {'x': 1.5, 'search_idx': 2}, 'result': {'objective': 2.25}}, {'param': {'x': 2.9, 'search_idx': 3}, 'result': {'objective': 0.010000000000000018}}, {'param': {'x': 4, 'search_idx': 4}, 'result': {'objective': 1}}, {'param': {'x': 5.3, 'search_idx': 5}, 'result': {'objective': 5.289999999999999}}]
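Because get_search_summary returns a plain list of dictionaries, you can post-process it however you like. For instance, a quick sketch that ranks the evaluated runs from best to worst objective:
summary = grid_search.get_search_summary()
# sort the runs by their objective value (smallest first)
ranked = sorted(summary, key=lambda entry: entry["result"]["objective"])
for entry in ranked:
    print(entry["param"]["x"], entry["result"]["objective"])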
Saving and loading search results¶
Once the search is done, you can also save the search results to disk and later load them back using the save and load methods:
import tempfile
save_dir = tempfile.mkdtemp()
# save the state to save_dir
grid_search.save(save_dir)
# instantiate a new object
grid_search2 = GridSearch(eval_fn=objective_fn, params={"x": [0.5, 1.5, 2.9, 4, 5.3]})
# load the previously saved state
grid_search2.load(save_dir)
# display the best result of the loaded instance
print(grid_search2.get_best_results(best_mode="min", optimize_field="objective"))
# display the search summary of the loaded instance
print(grid_search2.get_search_summary())
FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpy3xpegbx/grid_search.json
FastEstimator-Search: Loading the search state from /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpy3xpegbx/grid_search.json
{'param': {'x': 2.9, 'search_idx': 3}, 'result': {'objective': 0.010000000000000018}}
[{'param': {'x': 0.5, 'search_idx': 1}, 'result': {'objective': 6.25}}, {'param': {'x': 1.5, 'search_idx': 2}, 'result': {'objective': 2.25}}, {'param': {'x': 2.9, 'search_idx': 3}, 'result': {'objective': 0.010000000000000018}}, {'param': {'x': 4, 'search_idx': 4}, 'result': {'objective': 1}}, {'param': {'x': 5.3, 'search_idx': 5}, 'result': {'objective': 5.289999999999999}}]
Interruption-resilient search¶
When you run a search on hardware that can be interrupted (like an AWS spot instance), you can provide a save_dir argument when calling fit. As a result, the search will automatically back up its results after each evaluation. Furthermore, when fit is called a second time with the same save_dir, it will first load the previous search results and then pick up from where it left off.
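The same save_dir mechanism applies to any search class. For example, the grid search from earlier could be made resumable like this (the backup path below is just illustrative):
resumable_grid = GridSearch(eval_fn=objective_fn, params={"x": [0.5, 1.5, 2.9, 4, 5.3]})
# results are backed up after every evaluation; re-running fit with the same
# save_dir loads the backup and resumes the search
resumable_grid.fit(save_dir="/tmp/grid_search_backup")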
To demonstrate this, we will use golden-section search on the same optimization problem. To simulate interruption, we will first iterate 10 times, then create a new instance and iterate another 10 times.
from fastestimator.search import GoldenSection

save_dir2 = tempfile.mkdtemp()

gs_search = GoldenSection(eval_fn=objective_fn,
                          x_min=0,
                          x_max=6,
                          max_iter=10,
                          integer=False,
                          optimize_field="objective",
                          best_mode="min")

gs_search.fit(save_dir=save_dir2)
FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 2.2917960675006306, 'search_idx': 1}, result: {'objective': 0.5015528100075713} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.7082039324993694, 'search_idx': 2}, result: {'objective': 0.5015528100075713} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 4.583592135001262, 'search_idx': 3}, result: {'objective': 2.5077640500378555} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.1671842700025232, 'search_idx': 4}, result: {'objective': 0.027950580136276586} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 2.832815729997476, 'search_idx': 5}, result: {'objective': 0.027950580136276885} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.3738353924943216, 'search_idx': 6}, result: {'objective': 0.1397529006813835} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.0394668524892743, 'search_idx': 7}, result: {'objective': 0.0015576324454101358} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 2.9605331475107253, 'search_idx': 8}, result: {'objective': 0.0015576324454101708} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.0882505650239747, 'search_idx': 9}, result: {'objective': 0.00778816222705078} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.009316860045425, 'search_idx': 10}, result: {'objective': 8.68038811060405e-05} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 2.9906831399545744, 'search_idx': 11}, result: {'objective': 8.680388110604876e-05} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.0208331323984234, 'search_idx': 12}, result: {'objective': 0.0004340194055302403} FastEstimator-Search: Golden Section Search Finished, best parameters: {'x': 3.009316860045425, 'search_idx': 10}, best result: {'objective': 8.68038811060405e-05}
After the interruption, we can create a new instance and call fit with the same directory:
gs_search2 = GoldenSection(eval_fn=objective_fn,
                           x_min=0,
                           x_max=6,
                           max_iter=20,
                           integer=False,
                           optimize_field="objective",
                           best_mode="min")

gs_search2.fit(save_dir=save_dir2)
FastEstimator-Search: Loading the search state from /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.002199412307572, 'search_idx': 13}, result: {'objective': 4.8374144986998325e-06} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 2.997800587692428, 'search_idx': 14}, result: {'objective': 4.8374144986998325e-06} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.0049180354302814, 'search_idx': 15}, result: {'objective': 2.4187072493502697e-05} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.0005192108151366, 'search_idx': 16}, result: {'objective': 2.695798705548303e-07} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 2.9994807891848634, 'search_idx': 17}, result: {'objective': 2.695798705548303e-07} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.001160990677299, 'search_idx': 18}, result: {'objective': 1.3478993527749865e-06} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.0001225690470252, 'search_idx': 19}, result: {'objective': 1.502317128867399e-08} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 2.9998774309529748, 'search_idx': 20}, result: {'objective': 1.502317128867399e-08} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.000274072721086, 'search_idx': 21}, result: {'objective': 7.511585644356621e-08} FastEstimator-Search: Saving the search summary to /var/folders/3r/h9kh47050gv6rbt_pgf8cl540000gn/T/tmpkyddkubn/golden_section_search.json FastEstimator-Search: Evaluated {'x': 3.0000289346270352, 'search_idx': 22}, result: {'objective': 8.372126416682983e-10} FastEstimator-Search: Golden Section Search Finished, best parameters: {'x': 3.0000289346270352, 'search_idx': 22}, best result: {'objective': 8.372126416682983e-10}
As we can see, the search started from search index 13 and proceeded for another 10 iterations.
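Everything accumulated across both runs is available on the resumed instance, so it can be inspected in the same way as before. A quick sanity check using the summary (22 evaluations in the run above):
summary = gs_search2.get_search_summary()
print("total evaluations:", len(summary))
# find the entry with the smallest objective by hand
best = min(summary, key=lambda entry: entry["result"]["objective"])
print("best entry:", best)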
Example 1: Hyperparameter Tuning by Grid Search¶
In this example, we will use GridSearch on a real deep learning task to illustrate its usage. We will perform the grid search over one, two, and then three hyperparameters.
import tensorflow as tf

import fastestimator as fe
from fastestimator.architecture.tensorflow import LeNet
from fastestimator.dataset.data import mnist
from fastestimator.op.numpyop.univariate import ExpandDims, Minmax, RUA
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.model import ModelOp, UpdateOp

def get_hypara_tuning_estimator(batch_size, lr, choice):
    pipeline_ops = []
    if choice and isinstance(choice, str):
        pipeline_ops = [RUA(inputs="x", outputs="x", mode="train", choices=[choice])]
    pipeline_ops = pipeline_ops + [ExpandDims(inputs="x", outputs="x"), Minmax(inputs="x", outputs="x")]
    train_data, test_data = mnist.load_data()
    pipeline = fe.Pipeline(train_data=train_data,
                           test_data=test_data,
                           batch_size=batch_size,
                           ops=pipeline_ops,
                           num_process=0)
    model = fe.build(model_fn=LeNet, optimizer_fn=lambda: tf.optimizers.Adam(lr))
    network = fe.Network(ops=[
        ModelOp(model=model, inputs="x", outputs="y_pred"),
        CrossEntropy(inputs=("y_pred", "y"), outputs="ce"),
        UpdateOp(model=model, loss_name="ce")
    ])
    estimator = fe.Estimator(pipeline=pipeline, network=network, epochs=1, train_steps_per_epoch=500)
    return estimator
Given a batch size grid of [32, 64], we are interested in the optimal parameter that leads to the lowest test loss after 500 steps of training on the MNIST dataset:
def eval_fn_v1(search_idx, batch_size):
    est = get_hypara_tuning_estimator(batch_size, lr=1e-3, choice=None)
    est.fit(warmup=False)
    hist = est.test(summary="myexp")
    loss = float(hist.history["test"]["ce"][500])
    return {"test_loss": loss}
mnist_grid_search_single = GridSearch(eval_fn=eval_fn_v1, params={"batch_size": [32, 64]})
mnist_grid_search_single.fit()
mnist_grid_search_single.get_best_results(best_mode="min", optimize_field="test_loss")
2022-05-23 15:55:03.944092: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2022-05-23 15:55:04.016652: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.2978234; FastEstimator-Train: step: 100; ce: 0.5928297; steps/sec: 68.95; FastEstimator-Train: step: 200; ce: 0.17800228; steps/sec: 67.77; FastEstimator-Train: step: 300; ce: 0.17582887; steps/sec: 66.81; FastEstimator-Train: step: 400; ce: 0.09949152; steps/sec: 69.79; FastEstimator-Train: step: 500; ce: 0.21450083; steps/sec: 68.93; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 10.47 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 10.48 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.1302412; FastEstimator-Search: Evaluated {'batch_size': 32, 'search_idx': 1}, result: {'test_loss': 0.13024120032787323} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3029552; FastEstimator-Train: step: 100; ce: 0.24441192; steps/sec: 40.24; FastEstimator-Train: step: 200; ce: 0.13249156; steps/sec: 38.96; FastEstimator-Train: step: 300; ce: 0.081911236; steps/sec: 38.28; FastEstimator-Train: step: 400; ce: 0.10891421; steps/sec: 39.34; FastEstimator-Train: step: 500; ce: 0.12866744; steps/sec: 40.82; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 13.1 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 13.11 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.07181324; FastEstimator-Search: Evaluated {'batch_size': 64, 'search_idx': 2}, result: {'test_loss': 0.07181324064731598}
{'param': {'batch_size': 64, 'search_idx': 2}, 'result': {'test_loss': 0.07181324064731598}}
Given a batch size grid of [32, 64] and a learning rate grid of [1e-2, 1e-3], we are interested in the optimal combination that leads to the lowest test loss after 500 steps of training on the MNIST dataset:
def eval_fn_v2(search_idx, batch_size, lr):
    est = get_hypara_tuning_estimator(batch_size, lr=lr, choice=None)
    est.fit(warmup=False)
    hist = est.test(summary="myexp")
    loss = float(hist.history["test"]["ce"][500])
    return {"test_loss": loss}
mnist_grid_search_double = GridSearch(eval_fn=eval_fn_v2, params={"batch_size": [32, 64], "lr": [1e-2, 1e-3]})
mnist_grid_search_double.fit()
mnist_grid_search_double.get_best_results(best_mode="min", optimize_field="test_loss")
______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.297504; FastEstimator-Train: step: 100; ce: 0.23087856; steps/sec: 64.66; FastEstimator-Train: step: 200; ce: 0.04770284; steps/sec: 59.33; FastEstimator-Train: step: 300; ce: 0.065143056; steps/sec: 63.28; FastEstimator-Train: step: 400; ce: 0.043231085; steps/sec: 63.76; FastEstimator-Train: step: 500; ce: 0.57282686; steps/sec: 61.11; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 8.44 sec; FastEstimator-Finish: step: 500; model_lr: 0.01; total_time: 8.45 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.1802756; FastEstimator-Search: Evaluated {'batch_size': 32, 'lr': 0.01, 'search_idx': 1}, result: {'test_loss': 0.18027560412883759} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.288842; FastEstimator-Train: step: 100; ce: 0.22141448; steps/sec: 62.89; FastEstimator-Train: step: 200; ce: 0.30901936; steps/sec: 63.56; FastEstimator-Train: step: 300; ce: 0.18143213; steps/sec: 62.37; FastEstimator-Train: step: 400; ce: 0.29459214; steps/sec: 61.98; FastEstimator-Train: step: 500; ce: 0.27240336; steps/sec: 61.76; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 8.42 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 8.43 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.100863606; FastEstimator-Search: Evaluated {'batch_size': 32, 'lr': 0.001, 'search_idx': 2}, result: {'test_loss': 0.10086360573768616} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. 
FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.275618; FastEstimator-Train: step: 100; ce: 0.08152926; steps/sec: 39.58; FastEstimator-Train: step: 200; ce: 0.08025998; steps/sec: 39.01; FastEstimator-Train: step: 300; ce: 0.09793242; steps/sec: 37.88; FastEstimator-Train: step: 400; ce: 0.044547416; steps/sec: 38.41; FastEstimator-Train: step: 500; ce: 0.16613436; steps/sec: 38.42; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 13.39 sec; FastEstimator-Finish: step: 500; model_lr: 0.01; total_time: 13.4 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.08647069; FastEstimator-Search: Evaluated {'batch_size': 64, 'lr': 0.01, 'search_idx': 3}, result: {'test_loss': 0.08647069334983826} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3088133; FastEstimator-Train: step: 100; ce: 0.28321943; steps/sec: 38.87; FastEstimator-Train: step: 200; ce: 0.3613764; steps/sec: 35.5; FastEstimator-Train: step: 300; ce: 0.059347313; steps/sec: 33.92; FastEstimator-Train: step: 400; ce: 0.19349068; steps/sec: 33.5; FastEstimator-Train: step: 500; ce: 0.26929265; steps/sec: 32.52; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 14.89 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 14.91 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.07576594; FastEstimator-Search: Evaluated {'batch_size': 64, 'lr': 0.001, 'search_idx': 4}, result: {'test_loss': 0.0757659375667572}
{'param': {'batch_size': 64, 'lr': 0.001, 'search_idx': 4}, 'result': {'test_loss': 0.0757659375667572}}
Given a batch size grid of [32, 64], a learning rate grid of [1e-2, 1e-3], and built-in augmentation choices ["Rotate", "Brightness"], we are interested in the optimal combination that leads to the lowest test loss after 500 steps of training on the MNIST dataset:
def eval_fn_v3(search_idx, batch_size, lr, choices):
    est = get_hypara_tuning_estimator(batch_size, lr=lr, choice=choices)
    est.fit(warmup=False)
    hist = est.test(summary="myexp")
    loss = float(hist.history["test"]["ce"][500])
    return {"test_loss": loss}
mnist_grid_search_multi = GridSearch(
    eval_fn=eval_fn_v3, params={
        "batch_size": [32, 64], "lr": [1e-2, 1e-3], "choices": ["Rotate", "Brightness"]
    })
mnist_grid_search_multi.fit()
mnist_grid_search_multi.get_best_results(best_mode="min", optimize_field="test_loss")
______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.316989; FastEstimator-Train: step: 100; ce: 0.89612067; steps/sec: 61.37; FastEstimator-Train: step: 200; ce: 0.08497408; steps/sec: 58.87; FastEstimator-Train: step: 300; ce: 0.17728706; steps/sec: 55.34; FastEstimator-Train: step: 400; ce: 0.097662136; steps/sec: 54.78; FastEstimator-Train: step: 500; ce: 0.6184038; steps/sec: 54.37; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 9.33 sec; FastEstimator-Finish: step: 500; model_lr: 0.01; total_time: 9.34 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.15701194; FastEstimator-Search: Evaluated {'batch_size': 32, 'lr': 0.01, 'choices': 'Rotate', 'search_idx': 1}, result: {'test_loss': 0.157011941075325} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.303073; FastEstimator-Train: step: 100; ce: 0.30278435; steps/sec: 46.5; FastEstimator-Train: step: 200; ce: 0.16040373; steps/sec: 46.01; FastEstimator-Train: step: 300; ce: 0.13413057; steps/sec: 46.08; FastEstimator-Train: step: 400; ce: 0.36688638; steps/sec: 42.99; FastEstimator-Train: step: 500; ce: 0.082864165; steps/sec: 41.49; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 11.76 sec; FastEstimator-Finish: step: 500; model_lr: 0.01; total_time: 11.78 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.12428217; FastEstimator-Search: Evaluated {'batch_size': 32, 'lr': 0.01, 'choices': 'Brightness', 'search_idx': 2}, result: {'test_loss': 0.12428216636180878} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. 
FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3268635; FastEstimator-Train: step: 100; ce: 0.7491451; steps/sec: 58.88; FastEstimator-Train: step: 200; ce: 0.3961307; steps/sec: 55.83; FastEstimator-Train: step: 300; ce: 0.4441402; steps/sec: 52.79; FastEstimator-Train: step: 400; ce: 0.11637384; steps/sec: 51.09; FastEstimator-Train: step: 500; ce: 0.19636872; steps/sec: 48.29; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 9.96 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 9.97 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.21185064; FastEstimator-Search: Evaluated {'batch_size': 32, 'lr': 0.001, 'choices': 'Rotate', 'search_idx': 3}, result: {'test_loss': 0.21185064315795898} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3139453; FastEstimator-Train: step: 100; ce: 0.17447951; steps/sec: 42.92; FastEstimator-Train: step: 200; ce: 0.20068465; steps/sec: 42.0; FastEstimator-Train: step: 300; ce: 0.07889053; steps/sec: 41.77; FastEstimator-Train: step: 400; ce: 0.19607557; steps/sec: 41.3; FastEstimator-Train: step: 500; ce: 0.045420818; steps/sec: 40.3; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 12.57 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 12.59 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.10435359; FastEstimator-Search: Evaluated {'batch_size': 32, 'lr': 0.001, 'choices': 'Brightness', 'search_idx': 4}, result: {'test_loss': 0.10435359179973602} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3177319; FastEstimator-Train: step: 100; ce: 0.6719325; steps/sec: 32.37; FastEstimator-Train: step: 200; ce: 0.52074134; steps/sec: 28.28; FastEstimator-Train: step: 300; ce: 0.21777108; steps/sec: 28.32; FastEstimator-Train: step: 400; ce: 0.33475888; steps/sec: 27.19; FastEstimator-Train: step: 500; ce: 0.11661025; steps/sec: 26.83; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 18.17 sec; FastEstimator-Finish: step: 500; model_lr: 0.01; total_time: 18.18 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.15500401; FastEstimator-Search: Evaluated {'batch_size': 64, 'lr': 0.01, 'choices': 'Rotate', 'search_idx': 5}, result: {'test_loss': 0.15500400960445404} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. 
FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3164172; FastEstimator-Train: step: 100; ce: 0.21421057; steps/sec: 26.94; FastEstimator-Train: step: 200; ce: 0.250572; steps/sec: 26.67; FastEstimator-Train: step: 300; ce: 0.16062501; steps/sec: 24.45; FastEstimator-Train: step: 400; ce: 0.057134755; steps/sec: 24.38; FastEstimator-Train: step: 500; ce: 0.046086866; steps/sec: 24.82; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 20.29 sec; FastEstimator-Finish: step: 500; model_lr: 0.01; total_time: 20.3 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.06220621; FastEstimator-Search: Evaluated {'batch_size': 64, 'lr': 0.01, 'choices': 'Brightness', 'search_idx': 6}, result: {'test_loss': 0.0622062087059021} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.299392; FastEstimator-Train: step: 100; ce: 0.45705426; steps/sec: 34.21; FastEstimator-Train: step: 200; ce: 0.32576722; steps/sec: 30.81; FastEstimator-Train: step: 300; ce: 0.11331859; steps/sec: 29.75; FastEstimator-Train: step: 400; ce: 0.21821997; steps/sec: 28.17; FastEstimator-Train: step: 500; ce: 0.21981962; steps/sec: 27.09; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 17.35 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 17.37 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.15896726; FastEstimator-Search: Evaluated {'batch_size': 64, 'lr': 0.001, 'choices': 'Rotate', 'search_idx': 7}, result: {'test_loss': 0.1589672565460205} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3077712; FastEstimator-Train: step: 100; ce: 0.31761616; steps/sec: 28.59; FastEstimator-Train: step: 200; ce: 0.16418545; steps/sec: 27.36; FastEstimator-Train: step: 300; ce: 0.108605824; steps/sec: 26.95; FastEstimator-Train: step: 400; ce: 0.082405984; steps/sec: 26.41; FastEstimator-Train: step: 500; ce: 0.04901053; steps/sec: 25.47; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 19.18 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 19.2 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 0.06811151; FastEstimator-Search: Evaluated {'batch_size': 64, 'lr': 0.001, 'choices': 'Brightness', 'search_idx': 8}, result: {'test_loss': 0.06811150908470154}
{'param': {'batch_size': 64, 'lr': 0.01, 'choices': 'Brightness', 'search_idx': 6}, 'result': {'test_loss': 0.0622062087059021}}
Search Visualization¶
Visualization of grid search with a single hyperparameter:
from fastestimator.search.visualize import visualize_search
visualize_search(search=mnist_grid_search_single)
Visualization of grid search with two hyperparameters:
visualize_search(search=mnist_grid_search_double)
Visualization of grid search with more than two hyperparameters:
visualize_search(search=mnist_grid_search_multi)
Example 2: RUA Augmentation via Golden-Section Search¶
In this example, we will use a built-in augmentation NumpyOp - RUA - and find the optimal level between 0 and 30 using Golden-Section search. Each candidate level will be evaluated by its test loss on the ciFAIR10 dataset after 500 steps of training.
import tensorflow as tf

import fastestimator as fe
from fastestimator.architecture.tensorflow import LeNet
from fastestimator.dataset.data import cifair10
from fastestimator.op.numpyop.univariate import ExpandDims, Minmax, RUA
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.model import ModelOp, UpdateOp

def get_estimator(level):
    train_data, test_data = cifair10.load_data()
    pipeline = fe.Pipeline(train_data=train_data,
                           test_data=test_data,
                           batch_size=64,
                           ops=[RUA(level=level, inputs="x", outputs="x", mode="train"),
                                Minmax(inputs="x", outputs="x")],
                           num_process=0)
    model = fe.build(model_fn=lambda: LeNet(input_shape=(32, 32, 3)), optimizer_fn="adam")
    network = fe.Network(ops=[
        ModelOp(model=model, inputs="x", outputs="y_pred"),
        CrossEntropy(inputs=("y_pred", "y"), outputs="ce"),
        UpdateOp(model=model, loss_name="ce")
    ])
    estimator = fe.Estimator(pipeline=pipeline,
                             network=network,
                             epochs=1,
                             train_steps_per_epoch=500)
    return estimator
def eval_fn(search_idx, level):
    est = get_estimator(level)
    est.fit(warmup=False)
    hist = est.test(summary="myexp")
    loss = float(hist.history["test"]["ce"][500])
    return {"test_loss": loss}
cifair10_gs_search = GoldenSection(eval_fn=eval_fn, x_min=0, x_max=30, max_iter=5, best_mode="min", optimize_field="test_loss")
cifair10_gs_search.fit()
______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3069172; FastEstimator-Train: step: 100; ce: 1.9433358; steps/sec: 15.57; FastEstimator-Train: step: 200; ce: 1.8442394; steps/sec: 15.36; FastEstimator-Train: step: 300; ce: 1.7987336; steps/sec: 14.66; FastEstimator-Train: step: 400; ce: 1.827171; steps/sec: 14.63; FastEstimator-Train: step: 500; ce: 1.6530949; steps/sec: 14.76; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 34.03 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 34.05 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 1.497365; FastEstimator-Search: Evaluated {'level': 11, 'search_idx': 1}, result: {'test_loss': 1.4973649978637695} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3124037; FastEstimator-Train: step: 100; ce: 2.1659584; steps/sec: 14.13; FastEstimator-Train: step: 200; ce: 2.0272436; steps/sec: 13.1; FastEstimator-Train: step: 300; ce: 1.8795973; steps/sec: 13.62; FastEstimator-Train: step: 400; ce: 1.8213081; steps/sec: 13.33; FastEstimator-Train: step: 500; ce: 1.6495254; steps/sec: 13.51; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 37.59 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 37.6 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 1.5087045; FastEstimator-Search: Evaluated {'level': 18, 'search_idx': 2}, result: {'test_loss': 1.5087045431137085} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. 
FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3144321; FastEstimator-Train: step: 100; ce: 1.9611168; steps/sec: 17.91; FastEstimator-Train: step: 200; ce: 1.7303221; steps/sec: 15.36; FastEstimator-Train: step: 300; ce: 1.8715479; steps/sec: 15.63; FastEstimator-Train: step: 400; ce: 1.8697963; steps/sec: 15.95; FastEstimator-Train: step: 500; ce: 1.7709255; steps/sec: 15.99; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 31.61 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 31.62 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 1.4535282; FastEstimator-Search: Evaluated {'level': 7, 'search_idx': 3}, result: {'test_loss': 1.4535281658172607} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3262901; FastEstimator-Train: step: 100; ce: 1.8632274; steps/sec: 21.01; FastEstimator-Train: step: 200; ce: 1.8241731; steps/sec: 18.28; FastEstimator-Train: step: 300; ce: 1.5723119; steps/sec: 17.2; FastEstimator-Train: step: 400; ce: 1.5143611; steps/sec: 16.94; FastEstimator-Train: step: 500; ce: 1.4949286; steps/sec: 16.51; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 28.6 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 28.62 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 1.4889473; FastEstimator-Search: Evaluated {'level': 4, 'search_idx': 4}, result: {'test_loss': 1.4889472723007202} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.2915616; FastEstimator-Train: step: 100; ce: 1.9557471; steps/sec: 17.23; FastEstimator-Train: step: 200; ce: 1.7439098; steps/sec: 15.71; FastEstimator-Train: step: 300; ce: 1.9362915; steps/sec: 15.4; FastEstimator-Train: step: 400; ce: 1.6905106; steps/sec: 15.72; FastEstimator-Train: step: 500; ce: 1.5294566; steps/sec: 15.53; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 32.08 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 32.1 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 1.468676; FastEstimator-Search: Evaluated {'level': 8, 'search_idx': 5}, result: {'test_loss': 1.468675971031189} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. 
FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.317153; FastEstimator-Train: step: 100; ce: 2.0944784; steps/sec: 19.89; FastEstimator-Train: step: 200; ce: 1.6180917; steps/sec: 17.54; FastEstimator-Train: step: 300; ce: 1.730228; steps/sec: 17.11; FastEstimator-Train: step: 400; ce: 1.5884643; steps/sec: 16.26; FastEstimator-Train: step: 500; ce: 1.6063898; steps/sec: 16.33; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 29.45 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 29.47 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 1.5012085; FastEstimator-Search: Evaluated {'level': 5, 'search_idx': 6}, result: {'test_loss': 1.5012085437774658} ______ __ ______ __ _ __ / ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____ / /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/ / __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / / /_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/ FastEstimator-Warn: No ModelSaver Trace detected. Models will not be saved. FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0; FastEstimator-Train: step: 1; ce: 2.3051028; FastEstimator-Train: step: 100; ce: 1.8751833; steps/sec: 19.09; FastEstimator-Train: step: 200; ce: 1.884644; steps/sec: 16.76; FastEstimator-Train: step: 300; ce: 1.774396; steps/sec: 16.69; FastEstimator-Train: step: 400; ce: 1.6647091; steps/sec: 16.47; FastEstimator-Train: step: 500; ce: 1.5038598; steps/sec: 16.44; FastEstimator-Train: step: 500; epoch: 1; epoch_time: 29.98 sec; FastEstimator-Finish: step: 500; model_lr: 0.001; total_time: 29.99 sec; FastEstimator-Test: step: 500; epoch: 1; ce: 1.4427035; FastEstimator-Search: Evaluated {'level': 6, 'search_idx': 7}, result: {'test_loss': 1.4427034854888916} FastEstimator-Search: Golden Section Search Finished, best parameters: {'level': 6, 'search_idx': 7}, best result: {'test_loss': 1.4427034854888916}
In this example, the optimal level we found is 6. We can then train the model again using level=6 to get the final model. In a real use case you will want to perform the parameter search on a held-out evaluation set and then test the best parameters on the test set.
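As a final step, the best level can be read back from the search and plugged into the estimator builder defined above to train the final model. A sketch, assuming get_best_results accepts the same arguments here as in the grid-search examples:
best = cifair10_gs_search.get_best_results(best_mode="min", optimize_field="test_loss")
best_level = best["param"]["level"]
# retrain with the winning augmentation level; adjust epochs/steps for the final run as needed
final_estimator = get_estimator(level=best_level)
final_estimator.fit()
final_estimator.test()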