Neural Architecture Search
To help researchers experiment with neural architecture search (NAS), we have implemented several baseline methods using the Auto-Keras framework. The implementations are straightforward, since only the core of the search algorithm needs to be written. All other parts of NAS (e.g., the data structures for storing neural architectures and the training of the neural networks) are handled by the Auto-Keras framework.
Why implement NAS papers in Auto-Keras?
NAS papers usually evaluate their work on the same dataset (e.g., CIFAR10), but they are not directly comparable because their data preparation and training processes differ, and these differences are significant enough to change the rankings of the NAS methods.
We have implemented some of the NAS methods in the framework, and more state-of-the-art methods are in progress. Implementing NAS methods in Auto-Keras has three advantages. First, it enables fair comparison of the NAS methods, independent of other factors (e.g., the choice of optimizer or data augmentation). Second, researchers can easily change the datasets used for NAS experiments; many of the currently available NAS implementations are tightly coupled to the original dataset, which makes it hard to replace with a new one. Third, it saves the effort of finding and running code from different sources, which may have conflicting requirements for dependencies and environments.
Baseline methods implemented
We have implemented three NAS baseline methods:
random search: we explore the search space by morphing the network architectures randomly, so the actual performance of a generated neural architecture has no effect on the later search.
grid search: we search over a manually specified subset of the hyperparameter space, i.e., the numbers of layers and the widths of the layers are predefined.
greedy search: we explore the search space greedily. "Greedy" here means that the base architecture for the next iteration of the search is chosen from those generated in the current iteration; in our implementation, it is the one with the best performance on the training/validation set.
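The greedy strategy above can be sketched with a toy, framework-free example. Note that evaluate and morph below are illustrative stand-ins for training a network and for the framework's network-morphism operations; they are not Auto-Keras APIs.

```python
import random

# Toy stand-ins for the framework's machinery: an "architecture" is just a
# (depth, width) pair, and evaluate() replaces actual model training.
def evaluate(arch):
    depth, width = arch
    # Toy objective: prefer moderate depth and width (illustration only).
    return -abs(depth - 6) - abs(width - 64) / 16

def morph(arch):
    """Randomly morph an architecture, as network morphism would."""
    depth, width = arch
    if random.random() < 0.5:
        depth = max(1, depth + random.choice([-1, 1]))
    else:
        width = max(8, width + random.choice([-16, 16]))
    return (depth, width)

def greedy_search(base, iterations=10, children_per_iter=4):
    """Each iteration morphs the current base several times and keeps the
    best-scoring child as the base for the next iteration."""
    best_score = evaluate(base)
    for _ in range(iterations):
        children = [morph(base) for _ in range(children_per_iter)]
        candidate = max(children, key=evaluate)
        if evaluate(candidate) > best_score:
            base, best_score = candidate, evaluate(candidate)
    return base, best_score
```

Random search differs only in that it would morph the base architecture without consulting the scores of previously evaluated children.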
How to run the baseline methods?
See examples/nas/cifar10_tutorial.py for more details.
How to implement your own search?
To implement your own NAS searcher, you need to write your own searcher class YOUR_SEARCHER, derived from the Searcher class. Your YOUR_SEARCHER class must implement the following two abstract methods:
generate(self, multiprocessing_queue), which is invoked to generate the next neural architecture. The return value of the generate function should consist of two elements. The first is the generated graph. The second is any other information you want to pass to the update function; if you have multiple values to pass, put them into one tuple, and if you have no value to pass, just return None.
update(self, other_info, model_id, graph, metric_value), which is invoked to update the controller with the evaluation result of a neural architecture. The graph and other_info parameters are the corresponding return values of the generate function. This function has no required return value.
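A minimal sketch of this contract, assuming a simplified stand-in for the Searcher base class: the base class here, the tuple-based "graph", and the morphing logic are all illustrative, not the framework's actual internals.

```python
import random

class Searcher:
    """Simplified stand-in for the framework's Searcher base class
    (the real one also manages model storage and training)."""
    def generate(self, multiprocessing_queue):
        raise NotImplementedError

    def update(self, other_info, model_id, graph, metric_value):
        raise NotImplementedError

class RandomSearcher(Searcher):
    def __init__(self, initial_graph):
        self.history = []          # (model_id, graph, metric_value) records
        self.base_graph = initial_graph

    def generate(self, multiprocessing_queue):
        # Morph the base graph randomly; a real implementation would use
        # the framework's network-morphism operations instead.
        graph = self.base_graph + (random.choice([-1, 1]),)
        father_id = len(self.history) - 1 if self.history else None
        # Return the graph plus the extra info for update(); the father ID
        # is the only extra value here, so no tuple is needed.
        return graph, father_id

    def update(self, other_info, model_id, graph, metric_value):
        # Record the evaluated model; random search ignores the metric
        # when generating the next architecture.
        self.history.append((model_id, graph, metric_value))
```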
You can refer to the default searcher as an example. Its generate function returns the generated graph and the father ID of the graph in the search tree. Then, when the generated model finishes training, the father ID (other_info), the ID (model_id), the instance (graph), and the metric value (metric_value) of the model are passed to the update function to update the controller.
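The generate/train/update cycle described above can be illustrated with a toy driver loop. CountingSearcher and train below are hypothetical stand-ins, not part of Auto-Keras.

```python
def train(graph):
    # Placeholder "training": the score is just the number of layers (toy).
    return float(len(graph))

class CountingSearcher:
    """Illustrative searcher: grows the architecture one layer at a time."""
    def __init__(self):
        self.results = {}

    def generate(self, multiprocessing_queue=None):
        graph = ("layer",) * (len(self.results) + 1)
        father_id = len(self.results) - 1 if self.results else None
        return graph, father_id

    def update(self, other_info, model_id, graph, metric_value):
        self.results[model_id] = (other_info, graph, metric_value)

# The driver loop the framework runs around a searcher:
searcher = CountingSearcher()
for model_id in range(3):
    graph, father_id = searcher.generate(None)   # propose an architecture
    metric_value = train(graph)                  # train and evaluate it
    searcher.update(father_id, model_id, graph, metric_value)
```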
You can find more examples here.
You are welcome to implement your own NAS method in our framework. If it works well, we are happy to merge it into our repo.