Learning Deep Learning: Theory and Practice of Neural Networks, Computer Vision, NLP, and Transformers using TensorFlow
NVIDIA’s Full-Color Guide to Deep Learning with TensorFlow: All You Need to Get Started and Get Results
Deep learning is a key component of today's exciting advances in machine learning and artificial intelligence. Learning Deep Learning is a complete guide to deep learning with TensorFlow, the leading Python library for building these breakthrough applications. Illuminating both the core concepts and the hands-on programming techniques needed to succeed, this book is ideal for developers, data scientists, analysts, and others, including those with no prior machine learning or statistics experience.
After introducing the essential building blocks of deep neural networks, Magnus Ekman shows how to use fully connected feedforward networks and convolutional networks to solve real problems, such as predicting housing prices or classifying images. You’ll learn how to represent words from a natural language, capture semantics, and develop a working natural language translator. With that foundation in place, Ekman then guides you through building a system that inputs images and describes them in natural language.
Throughout, Ekman provides concise, well-annotated code examples using TensorFlow and the Keras API. (For comparison and easy migration between frameworks, complementary PyTorch examples are provided online.) He concludes by previewing trends in deep learning, exploring important ethical issues, and providing resources for further learning.
- Master core concepts: perceptrons, gradient-based learning, sigmoid neurons, and backpropagation
- See how frameworks make it easier to develop more robust and useful neural networks
- Discover how convolutional neural networks (CNNs) revolutionize classification and analysis
- Use recurrent neural networks (RNNs) to model text, speech, and other variable-length sequences
- Master long short-term memory (LSTM) techniques for natural language generation and other applications
- Move further into natural language processing (NLP), including understanding and translation
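To give a flavor of the first of these topics, here is a minimal sketch (in plain Python, not code from the book) of the Rosenblatt perceptron and its learning rule covered in Chapters 1 and 2: when an example is misclassified, the weights are nudged in the direction of the correct label. The NAND training data and all function names here are illustrative, not the book's own.

```python
def predict(w, x):
    # Sign of the weighted sum; x[0] is the bias input, fixed at 1.0.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else -1

def train(samples, lr=0.1, epochs=20):
    # Perceptron learning rule: on a misclassified example,
    # move the weights by lr * label * input.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in samples:
            if predict(w, x) != y:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
    return w

# NAND is linearly separable, so the learning algorithm converges.
nand = [((1.0, -1.0, -1.0), 1), ((1.0, -1.0, 1.0), 1),
        ((1.0, 1.0, -1.0), 1), ((1.0, 1.0, 1.0), -1)]
w = train(nand)
print([predict(w, x) for x, _ in nand])  # → [1, 1, 1, -1], matching the labels
```

The same limitation the book discusses shows up immediately if you swap in XOR data: no single perceptron can learn it, which motivates the multilayer networks of Chapter 3.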
Contents

Cover Page
About This eBook
Halftitle Page
Title Page
Copyright Page
Dedication Page
Contents
Foreword
Foreword
Preface
  What Is Deep Learning?
  Brief History of Deep Neural Networks
  Is This Book for You?
  Is DL Dangerous?
  Choosing a DL Framework
  Prerequisites for Learning DL
  About the Code Examples
  How to Read This Book
  Overview of Each Chapter and Appendix
Acknowledgments
About the Author
Chapter 1. The Rosenblatt Perceptron
  Example of a Two-Input Perceptron
  The Perceptron Learning Algorithm
  Limitations of the Perceptron
  Combining Multiple Perceptrons
  Implementing Perceptrons with Linear Algebra
  Geometric Interpretation of the Perceptron
  Understanding the Bias Term
  Concluding Remarks on the Perceptron
Chapter 2. Gradient-Based Learning
  Intuitive Explanation of the Perceptron Learning Algorithm
  Derivatives and Optimization Problems
  Solving a Learning Problem with Gradient Descent
  Constants and Variables in a Network
  Analytic Explanation of the Perceptron Learning Algorithm
  Geometric Description of the Perceptron Learning Algorithm
  Revisiting Different Types of Perceptron Plots
  Using a Perceptron to Identify Patterns
  Concluding Remarks on Gradient-Based Learning
Chapter 3. Sigmoid Neurons and Backpropagation
  Modified Neurons to Enable Gradient Descent for Multilevel Networks
  Which Activation Function Should We Use?
  Function Composition and the Chain Rule
  Using Backpropagation to Compute the Gradient
  Backpropagation with Multiple Neurons per Layer
  Programming Example: Learning the XOR Function
  Network Architectures
  Concluding Remarks on Backpropagation
Chapter 4. Fully Connected Networks Applied to Multiclass Classification
  Introduction to Datasets Used When Training Networks
  Training and Inference
  Extending the Network and Learning Algorithm to Do Multiclass Classification
  Network for Digit Classification
  Loss Function for Multiclass Classification
  Programming Example: Classifying Handwritten Digits
  Mini-Batch Gradient Descent
  Concluding Remarks on Multiclass Classification
Chapter 5. Toward DL: Frameworks and Network Tweaks
  Programming Example: Moving to a DL Framework
  The Problem of Saturated Neurons and Vanishing Gradients
  Initialization and Normalization Techniques to Avoid Saturated Neurons
  Cross-Entropy Loss Function to Mitigate Effect of Saturated Output Neurons
  Different Activation Functions to Avoid Vanishing Gradient in Hidden Layers
  Variations on Gradient Descent to Improve Learning
  Experiment: Tweaking Network and Learning Parameters
  Hyperparameter Tuning and Cross-Validation
  Concluding Remarks on the Path Toward Deep Learning
Chapter 6. Fully Connected Networks Applied to Regression
  Output Units
  The Boston Housing Dataset
  Programming Example: Predicting House Prices with a DNN
  Improving Generalization with Regularization
  Experiment: Deeper and Regularized Models for House Price Prediction
  Concluding Remarks on Output Units and Regression Problems
Chapter 7. Convolutional Neural Networks Applied to Image Classification
  The CIFAR-10 Dataset
  Characteristics and Building Blocks for Convolutional Layers
  Combining Feature Maps into a Convolutional Layer
  Combining Convolutional and Fully Connected Layers into a Network
  Effects of Sparse Connections and Weight Sharing
  Programming Example: Image Classification with a Convolutional Network
  Concluding Remarks on Convolutional Networks
Chapter 8. Deeper CNNs and Pretrained Models
  VGGNet
  GoogLeNet
  ResNet
  Programming Example: Use a Pretrained ResNet Implementation
  Transfer Learning
  Backpropagation for CNN and Pooling
  Data Augmentation as a Regularization Technique
  Mistakes Made by CNNs
  Reducing Parameters with Depthwise Separable Convolutions
  Striking the Right Network Design Balance with EfficientNet
  Concluding Remarks on Deeper CNNs
Chapter 9. Predicting Time Sequences with Recurrent Neural Networks
  Limitations of Feedforward Networks
  Recurrent Neural Networks
  Mathematical Representation of a Recurrent Layer
  Combining Layers into an RNN
  Alternative View of RNN and Unrolling in Time
  Backpropagation Through Time
  Programming Example: Forecasting Book Sales
  Dataset Considerations for RNNs
  Concluding Remarks on RNNs
Chapter 10. Long Short-Term Memory
  Keeping Gradients Healthy
  Introduction to LSTM
  Alternative View of LSTM
  Related Topics: Highway Networks and Skip Connections
  Concluding Remarks on LSTM
Chapter 11. Text Autocompletion with LSTM and Beam Search
  Encoding Text
  Longer-Term Prediction and Autoregressive Models
  Beam Search
  Programming Example: Using LSTM for Text Autocompletion
  Bidirectional RNNs
  Different Combinations of Input and Output Sequences
  Concluding Remarks on Text Autocompletion with LSTM
Chapter 12. Neural Language Models and Word Embeddings
  Introduction to Language Models and Their Use Cases
  Examples of Different Language Models
  Benefit of Word Embeddings and Insight into How They Work
  Word Embeddings Created by Neural Language Models
  Programming Example: Neural Language Model and Resulting Embeddings
  King − Man + Woman != Queen
  Language Models, Word Embeddings, and Human Biases
  Related Topic: Sentiment Analysis of Text
  Concluding Remarks on Language Models and Word Embeddings
Chapter 13. Word Embeddings from word2vec and GloVe
  Using word2vec to Create Word Embeddings Without a Language Model
  Additional Thoughts on word2vec
  word2vec in Matrix Form
  Wrapping Up word2vec
  Programming Example: Exploring Properties of GloVe Embeddings
  Concluding Remarks on word2vec and GloVe
Chapter 14. Sequence-to-Sequence Networks and Natural Language Translation
  Encoder-Decoder Model for Sequence-to-Sequence Learning
  Introduction to the Keras Functional API
  Programming Example: Neural Machine Translation
  Experimental Results
  Properties of the Intermediate Representation
  Concluding Remarks on Language Translation
Chapter 15. Attention and the Transformer
  Rationale Behind Attention
  Attention in Sequence-to-Sequence Networks
  Alternatives to Recurrent Networks
  Self-Attention
  Multi-head Attention
  The Transformer
  Concluding Remarks on the Transformer
Chapter 16. One-to-Many Network for Image Captioning
  Extending the Image Captioning Network with Attention
  Programming Example: Attention-Based Image Captioning
  Concluding Remarks on Image Captioning
Chapter 17. Medley of Additional Topics
  Autoencoders
  Multimodal Learning
  Multitask Learning
  Process for Tuning a Network
  Neural Architecture Search
  Concluding Remarks
Chapter 18. Summary and Next Steps
  Things You Should Know by Now
  Ethical AI and Data Ethics
  Things You Do Not Yet Know
  Next Steps
Appendix A. Linear Regression and Linear Classifiers
  Linear Regression as a Machine Learning Algorithm
  Computing Linear Regression Coefficients
  Classification with Logistic Regression
  Classifying XOR with a Linear Classifier
  Classification with Support Vector Machines
  Evaluation Metrics for a Binary Classifier
Appendix B. Object Detection and Segmentation
  Object Detection
  Semantic Segmentation
  Instance Segmentation with Mask R-CNN
Appendix C. Word Embeddings Beyond word2vec and GloVe
  Wordpieces
  FastText
  Character-Based Method
  ELMo
  Related Work
Appendix D. GPT, BERT, and RoBERTa
  GPT
  BERT
  RoBERTa
  Historical Work Leading Up to GPT and BERT
  Other Models Based on the Transformer
Appendix E. Newton-Raphson versus Gradient Descent
  Newton-Raphson Root-Finding Method
  Relationship Between Newton-Raphson and Gradient Descent
Appendix F. Matrix Implementation of Digit Classification Network
  Single Matrix
  Mini-Batch Implementation
Appendix G. Relating Convolutional Layers to Mathematical Convolution
Appendix H. Gated Recurrent Units
  Alternative GRU Implementation
  Network Based on the GRU
Appendix I. Setting Up a Development Environment
  Python
  Programming Environment
  Programming Examples
  Datasets
  Installing a DL Framework
  TensorFlow Specific Considerations
  Key Differences Between PyTorch and TensorFlow
Appendix J. Cheat Sheets
Works Cited
Index
Code Snippets