Automated Deep Learning Using Neural Network Intelligence: Develop and Design PyTorch and TensorFlow Models Using Python
- Length: 401 pages
- Edition: 1
- Language: English
- Publisher: Apress
- Publication Date: 2022-07-05
- ISBN-10: 1484281489
- ISBN-13: 9781484281482
Optimize, develop, and design PyTorch and TensorFlow models for a specific problem using the Microsoft Neural Network Intelligence (NNI) toolkit. This book includes practical examples illustrating automated deep learning approaches and provides techniques to facilitate your deep learning model development.
The first chapters of this book cover the basics of NNI toolkit usage and methods for solving hyper-parameter optimization tasks. You will understand the black-box function maximization problem using NNI and know how to prepare a TensorFlow or PyTorch model for hyper-parameter tuning, launch an experiment, and interpret the results. The book dives into optimization tuners and the search algorithms they are based on: Evolution search, Annealing search, and the Bayesian Optimization approach.
Neural Architecture Search is covered, and you will learn how to develop deep learning models from scratch. Multi-trial and one-shot search approaches to automatic neural network design are presented. The book teaches you how to construct a search space and launch an architecture search using state-of-the-art exploration strategies: Efficient Neural Architecture Search (ENAS) and Differentiable Architecture Search (DARTS). You will learn how to automate the construction of a neural network architecture for a particular problem and dataset.
The book also covers model compression and feature engineering methods that are essential in automated deep learning, and it includes performance techniques that allow the creation of large-scale distributed training platforms using NNI.
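To give a flavor of the hyper-parameter tuning workflow described above, here is a minimal sketch. The search-space dictionary follows NNI's documented `_type`/`_value` format; the `sample` and `trial` functions and the toy scoring formula are illustrative stand-ins (in a real NNI trial script, the tuner supplies parameters via `nni.get_next_parameter()` and receives the metric via `nni.report_final_result()`), so the sketch runs without NNI installed.

```python
import math
import random

# Search space in NNI's JSON-style format: each hyper-parameter maps
# to a sampling rule ("_type") and its candidate values ("_value").
search_space = {
    "learning_rate": {"_type": "loguniform", "_value": [1e-4, 1e-1]},
    "batch_size": {"_type": "choice", "_value": [32, 64, 128]},
    "dropout": {"_type": "uniform", "_value": [0.0, 0.5]},
}

def sample(space):
    """Draw one configuration from the search space.

    Stand-in for what an NNI tuner does when the trial script calls
    nni.get_next_parameter()."""
    params = {}
    for name, spec in space.items():
        values = spec["_value"]
        if spec["_type"] == "choice":
            params[name] = random.choice(values)
        elif spec["_type"] == "uniform":
            params[name] = random.uniform(values[0], values[1])
        elif spec["_type"] == "loguniform":
            params[name] = math.exp(
                random.uniform(math.log(values[0]), math.log(values[1]))
            )
    return params

def trial(params):
    """Toy black-box objective standing in for model training.

    A real trial would train the model and report accuracy back to
    the tuner with nni.report_final_result()."""
    return -(math.log10(params["learning_rate"]) + 2) ** 2 - params["dropout"]

# Random-search loop: try 50 configurations, keep the best-scoring one.
best = max((sample(search_space) for _ in range(50)), key=trial)
print(best)
```

With NNI itself, this loop disappears: the trial script only samples once per process, and the experiment launcher runs many trials in parallel, visualizing the results in the web UI.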
After reading this book, you will know how to use the full toolkit of automated deep learning methods. The techniques and practical examples presented in this book will allow you to bring your neural network routines to a higher level.
What You Will Learn
- Know the basic concepts of optimization tuners, search space, and trials
- Apply different hyper-parameter optimization algorithms to develop effective neural networks
- Construct new deep learning models from scratch
- Execute the automated Neural Architecture Search to create state-of-the-art deep learning models
- Compress the model to eliminate unnecessary deep learning layers
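To make the last point concrete: the simplest compression technique in this family, level (magnitude) pruning, zeroes out the smallest weights. The sketch below is a plain-Python illustration of that idea only, not NNI's Pruner API (NNI's Level Pruner applies the same rule to whole PyTorch/TensorFlow layers via a pruning mask and a configuration list); the function name and example values are made up for illustration.

```python
def level_prune(weights, sparsity):
    """Zero the smallest-magnitude `sparsity` fraction of weights.

    Mirrors the idea behind level pruning: weights whose magnitude
    falls below a data-derived threshold are assumed unnecessary."""
    if not 0.0 <= sparsity <= 1.0:
        raise ValueError("sparsity must be in [0, 1]")
    k = int(len(weights) * sparsity)  # number of weights to drop
    if k == 0:
        return list(weights)
    # Threshold = k-th smallest magnitude; everything at or below it is cut.
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = level_prune([0.9, -0.05, 0.4, 0.01, -0.7], sparsity=0.4)
# the two smallest-magnitude weights (-0.05 and 0.01) are zeroed
```

In a framework setting, the zeroed positions become a binary mask multiplied into the layer's weight tensor, and a speed-up step can then physically remove fully masked structures.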
Who This Book Is For
Intermediate to advanced data scientists and machine learning engineers involved in deep learning and practical neural network development
Table of Contents
- About the Author
- About the Technical Reviewer
- Introduction
- Source Code Listings
- Chapter 1: Introduction to Neural Network Intelligence
  - What Is Automated Deep Learning?
  - No Free Lunch Theorem
  - Injecting New Deep Learning Techniques into Existing Model
  - Adjusting Model to a New Dataset
  - Creating a New Model from Scratch
  - Reinventing the Wheel
  - Working with Source Code
  - Neural Network Intelligence Installation (Install Docker)
  - Search Space, Tuner, and Trial
  - Black-Box Function Optimization
  - Web User Interface (Overview Page, Trials Details Page)
  - NNI Command Line
  - NNI Experiment Configuration
  - Embedded NNI
  - Troubleshooting TensorFlow and PyTorch
  - Summary
- Chapter 2: Hyperparameter Optimization
  - What Is Hyperparameter? (Layer Hyperparameter, Training Hyperparameter, Feature Hyperparameter, Design Hyperparameter)
  - Search Space (choice, randomint, uniform, quniform, loguniform, qloguniform, normal, qnormal, lognormal, qlognormal)
  - Tuners (Random Search Tuner, Grid Search Tuner)
  - Organizing Experiment
  - Optimizing LeNet for MNIST Problem (TensorFlow LeNet Implementation, PyTorch LeNet Implementation, Performing LeNet HPO Experiment)
  - Upgrading LeNet with ReLU and Dropout (TensorFlow LeNet Upgrade Implementation, PyTorch LeNet Upgrade Implementation, Performing LeNet Upgrade HPO Experiment)
  - From LeNet to AlexNet (TensorFlow LeNet Evolution Implementation, PyTorch LeNet Evolution Implementation, Performing LeNet Evolution HPO Experiment)
  - Summary
- Chapter 3: Hyperparameter Optimization Under Shell
  - Tuners (Evolution Tuner, Anneal Tuner)
  - Sequential Model-Based Optimization Tuners (Tree-Structured Parzen Estimator Tuner, Gaussian Process Tuner)
  - Which Tuner to Choose?
  - Custom Tuner (Tuner Internals, New Evolution Custom Tuner)
  - Early Stopping (Median Stop, Curve Fitting, Risk to Stop a Good Trial)
  - Searching for Optimal Functional Pipeline and Classical AutoML Problem (Operators, Search Space, Model, Tuner, Experiment)
  - Limits of HPO (Applying to Neural Architecture Search, Hyperparameters for Hyperparameter Optimization)
  - Summary
- Chapter 4: Multi-trial Neural Architecture Search
  - Neural Architecture As Data Flow Graph
  - Neural Architecture Search Using Retiarii (PyTorch): Introduction to NAS Using Retiarii, Retiarii Framework, Base Model, Mutators (LayerChoice, ValueChoice, InputChoice, Repeat, Labeling, Example), Evaluators, Exploration Strategies (Random Strategy, Grid Search, Regularized Evolution, TPE Strategy, RL Strategy), Experiment, CIFAR-10 LeNet NAS, CIFAR-10 ResNet NAS
  - Classic Neural Architecture Search (TensorFlow): Base Model, Mutators, Trial, Search Space, Search Strategy, Experiment
  - Summary
- Chapter 5: One-Shot Neural Architecture Search
  - One-Shot NAS in Action
  - Supernet Architecture
  - One-Shot Algorithms: Efficient Neural Architecture Search (ENAS), TensorFlow ENAS Implementation, PyTorch ENAS Implementation, Differentiable Architecture Search (DARTS)
  - GeneralSupernet
  - Solving CIFAR-10 (Training GeneralSupernet Using TensorFlow and ENAS, Training GeneralSupernet Using PyTorch and DARTS)
  - HPO vs. Multi-trial NAS vs. One-Shot NAS
  - Summary
- Chapter 6: Model Pruning
  - What Is Model Pruning?
  - LeNet Model Pruning
  - One-Shot Pruners (Pruner Configuration, Level Pruner, FPGM Pruner, L1Norm and L2Norm Pruners)
  - Iterative Pruners (Linear Pruner, AGP Pruner, Iterative Pruner Configuration)
  - Iterative Pruning Scenarios (Best Accuracy Under Size Threshold Scenario, Minimal Size Above Accuracy Threshold Scenario)
  - Summary
- Chapter 7: NNI Recipes
  - Speed Up Trials
  - Start–Stop–Resume
  - Continue Finished Experiment
  - NNI and TensorBoard
  - Move Experiment to Another Server
  - Scaling Experiments
  - Shared Storage
  - One-Shot NAS with Checkpoints and TensorBoard
  - Summary
- Index
How to download the source code
1. Go to https://github.com/Apress.
2. In the "Find a repository…" box, search for the book title: Automated Deep Learning Using Neural Network Intelligence: Develop and Design PyTorch and TensorFlow Models Using Python. If the full title returns no results, search for the main title only.
3. Click the book title in the search results.
4. Click Code to download.