Pro Deep Learning with TensorFlow 2.0: A Mathematical Approach to Advanced Artificial Intelligence in Python
- Length: 672 pages
- Edition: 2
- Language: English
- Publisher: Apress
- Publication Date: 2023-01-15
- ISBN-10: 1484289307
- ISBN-13: 9781484289303
- Sales Rank: #197796
This book builds upon the foundations established in its first edition, with updated chapters and the latest code implementations to bring it up to date with TensorFlow 2.0.
Pro Deep Learning with TensorFlow 2.0 begins with the mathematical and core technical foundations of deep learning. Next, you will learn about convolutional neural networks, including newer convolutional methods such as dilated convolution and depthwise separable convolution, along with their implementation. You'll then gain an understanding of natural language processing with advanced network architectures such as transformers, and of the attention mechanisms relevant to natural language processing and to neural networks in general. As you progress through the book, you'll explore unsupervised learning frameworks that reflect the current state of deep learning methods, such as autoencoders and variational autoencoders. The final chapter covers the advanced topics of generative adversarial networks and their variants, such as cycle-consistency GANs (CycleGAN), as well as graph neural network techniques such as graph attention networks and GraphSAGE.
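As a taste of why depthwise separable convolution (one of the convolutional methods covered) matters, its parameter savings over a standard convolution can be sketched with simple arithmetic. This is a hypothetical illustration, not code from the book; the function names are invented for the example.

```python
# Hypothetical illustration (not the book's code): parameter counts of a
# standard k x k convolution vs. a depthwise separable one, ignoring biases.

def conv_params(k, c_in, c_out):
    # Standard convolution: one k x k x c_in filter per output channel.
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    depthwise = k * k * c_in  # one k x k spatial filter per input channel
    pointwise = c_in * c_out  # 1 x 1 convolution that mixes channels
    return depthwise + pointwise

print(conv_params(3, 64, 128))            # 73728
print(separable_conv_params(3, 64, 128))  # 8768, roughly 8x fewer parameters
```

The same trade-off is exposed in tf.keras via `SeparableConv2D` in place of `Conv2D`.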
Upon completing this book, you will understand the mathematical foundations and concepts of deep learning, and be able to use the prototypes demonstrated to build new deep learning applications.
What You Will Learn
- Understand full-stack deep learning using TensorFlow 2.0
- Gain an understanding of the mathematical foundations of deep learning
- Deploy complex deep learning solutions in production using TensorFlow 2.0
- Understand generative adversarial networks, graph attention networks, and GraphSAGE
Who This Book Is For:
Data scientists and machine learning professionals, software developers, graduate students, and open source enthusiasts.
Table of Contents
- About the Author
- About the Technical Reviewer
- Introduction
- Chapter 1: Mathematical Foundations
  - Linear Algebra: Vector; Scalar; Matrix; Tensor; Matrix Operations and Manipulations (Addition, Subtraction, and Product of Two Matrices; Transpose of a Matrix; Dot Product of Two Vectors; Matrix Working on a Vector); Linear Independence of Vectors; Rank of a Matrix; Identity Matrix or Operator; Determinant of a Matrix and Its Interpretation; Inverse of a Matrix; Norm of a Vector; Pseudo-Inverse of a Matrix; Unit Vector in the Direction of a Specific Vector; Projection of a Vector in the Direction of Another Vector; Eigen Vectors; Characteristic Equation of a Matrix; Power Iteration Method for Computing Eigen Vectors
  - Calculus: Differentiation; Gradient of a Function; Successive Partial Derivatives; Hessian Matrix of a Function; Maxima and Minima of Functions; Rules for Maxima and Minima for a Univariate Function; Local Minima and Global Minima; Positive Semi-definite and Positive Definite; Convex Set; Convex Function; Non-convex Function; Multivariate Convex and Non-convex Function Examples; Taylor Series
  - Probability: Unions, Intersection, and Conditional Probability; Chain Rule of Probability for Intersection of Events; Mutually Exclusive Events; Independence of Events; Conditional Independence of Events; Bayes Rule; Probability Mass Function; Probability Density Function; Expectation of a Random Variable; Variance of a Random Variable; Skewness and Kurtosis; Covariance; Correlation Coefficient; Some Common Probability Distributions (Uniform, Normal, Multivariate Normal, Bernoulli, Binomial, Poisson, Beta, Dirichlet, Gamma); Likelihood Function; Maximum Likelihood Estimate; Hypothesis Testing and p-Value
  - Formulation of Machine-Learning Algorithms and Optimization Techniques: Supervised Learning; Linear Regression as a Supervised Learning Method; Linear Regression Through the Vector Space Approach; Classification; Hyperplanes and Linear Classifiers; Unsupervised Learning; Reinforcement Learning; Optimization Techniques for Machine Learning (Gradient Descent; Gradient Descent for a Multivariate Cost Function; Contour Plot and Contour Lines; Steepest Descent; Stochastic Gradient Descent; Newton's Method; Linear Curve; Negative Curvature; Positive Curvature); Constrained Optimization Problem
  - A Few Important Topics in Machine Learning: Dimensionality-Reduction Methods; Principal Component Analysis (When Will PCA Be Useful in Data Reduction?; How Do You Know How Much Variance Is Retained by the Selected Principal Components?); Singular Value Decomposition; Regularization; Regularization Viewed as a Constraint Optimization Problem; Bias and Variance Trade-Off
  - Summary
- Chapter 2: Introduction to Deep-Learning Concepts and TensorFlow
  - Deep Learning and Its Evolution
  - Perceptrons and the Perceptron Learning Algorithm: Geometrical Interpretation of Perceptron Learning; Limitations of Perceptron Learning; Need for Nonlinearity; Hidden-Layer Perceptrons' Activation Function for Nonlinearity
  - Different Activation Functions for a Neuron/Perceptron: Linear; Binary Threshold; Sigmoid; SoftMax; Rectified Linear Unit (ReLU); Tanh; SoftPlus; Swish
  - Learning Rule for the Multi-layer Perceptron Network; Backpropagation for Gradient Computation; Generalizing the Backpropagation Method for Gradient Computation
  - Deep Learning vs. Traditional Methods
  - TensorFlow: Common Deep-Learning Packages; TensorFlow Installation; TensorFlow Basics for Development
  - Gradient-Descent Optimization Methods from a Deep-Learning Perspective: Elliptical Contours; Non-convexity of Cost Functions; Saddle Points in High-Dimensional Cost Functions; Learning Rate in the Mini-Batch Approach to Stochastic Gradient Descent
  - Optimizers in TensorFlow: GradientDescentOptimizer; AdagradOptimizer; RMSprop; AdadeltaOptimizer; AdamOptimizer; MomentumOptimizer and the Nesterov Algorithm
  - Epoch, Number of Batches, and Batch Size
  - XOR Implementation Using TensorFlow; TensorFlow Computation Graph for the XOR Network
  - Linear Regression in TensorFlow
  - Multiclass Classification with the SoftMax Function Using Full-Batch Gradient Descent; Multiclass Classification with the SoftMax Function Using Stochastic Gradient Descent
  - GPU; TPU
  - Summary
- Chapter 3: Convolutional Neural Networks
  - Convolution Operation: Linear Time Invariant (LTI)/Linear Shift Invariant (LSI) Systems; Convolution for Signals in One Dimension; Analog and Digital Signals; 2D and 3D Signals; 2D Convolution; Two-Dimensional Unit Step Function; 2D Convolution of a Signal with an LSI System Unit Step Response; 2D Convolution of an Image with Different LSI System Responses
  - Common Image-Processing Filters: Mean Filter; Median Filter; Gaussian Filter; Gradient-Based Filters; Sobel Edge-Detection Filter; Identity Transform
  - Convolutional Neural Networks and Their Components: Input Layer; Convolution Layer (TensorFlow Usage); Pooling Layer (TensorFlow Usage)
  - Backpropagation Through the Convolutional and Pooling Layers
  - Weight Sharing Through Convolution and Its Advantages; Translation Equivariance; Translation Invariance Due to Pooling
  - Dropout Layers and Regularization
  - Convolutional Neural Network for Digit Recognition on the MNIST Dataset; Convolutional Neural Network for Solving Real-World Problems
  - Batch Normalization
  - Different Architectures in Convolutional Neural Networks: LeNet; AlexNet; VGG16; ResNet
  - Transfer Learning: Guidelines for Using Transfer Learning; Transfer Learning with Google's InceptionV3; Transfer Learning with Pretrained VGG16
  - Dilated Convolution; Depthwise Separable Convolution
  - Summary
- Chapter 4: Natural Language Processing
  - Vector Space Model (VSM); Vector Representation of Words
  - Word2Vec: Continuous Bag of Words (CBOW) and Its Implementation in TensorFlow; Skip-Gram Model for Word Embedding and Its Implementation in TensorFlow
  - Global Co-occurrence Statistics-Based Word Vectors; GloVe; Word Analogy with Word Vectors
  - Introduction to Recurrent Neural Networks: Language Modeling; Predicting the Next Word in a Sentence Through an RNN Versus Traditional Methods; Backpropagation Through Time (BPTT); Vanishing- and Exploding-Gradient Problem in RNNs and Its Solutions (Gradient Clipping; Smart Initialization of the Memory-to-Memory Weight Connection Matrix and ReLU Units)
  - Long Short-Term Memory (LSTM); LSTM in Reducing Exploding- and Vanishing-Gradient Problems
  - MNIST Digit Identification in TensorFlow Using Recurrent Neural Networks; Next-Word Prediction and Sentence Completion in TensorFlow Using Recurrent Neural Networks
  - Gated Recurrent Unit (GRU); Bidirectional RNN
  - Neural Machine Translation: Architecture of the Neural Machine Translation Model Using Seq2Seq; Limitation of the Seq2Seq Model for Machine Translation
  - Attention: Scaled Dot Product Attention; Multihead Attention
  - Transformers: Encoder; Decoder; Positional Encoding; Final Output
  - Summary
- Chapter 5: Unsupervised Learning with Restricted Boltzmann Machines and Autoencoders
  - Boltzmann Distribution; Bayesian Inference: Likelihood, Priors, and Posterior Probability Distribution
  - Markov Chain Monte Carlo Methods for Sampling: Metropolis Algorithm
  - Restricted Boltzmann Machines: Training a Restricted Boltzmann Machine; Gibbs Sampling; Block Gibbs Sampling; Burn-in Period and Generating Samples in Gibbs Sampling; Using Gibbs Sampling in Restricted Boltzmann Machines; Contrastive Divergence
  - A Restricted Boltzmann Machine Implementation in TensorFlow; Collaborative Filtering Using Restricted Boltzmann Machines
  - Deep-Belief Networks (DBNs)
  - Autoencoders: Feature Learning Through Autoencoders for Supervised Learning; Kullback-Leibler (KL) Divergence; Sparse Autoencoders and Their Implementation in TensorFlow; Denoising Autoencoders and Their Implementation in TensorFlow
  - Variational Autoencoders: Variational Inference; Variational Autoencoder Objective from the ELBO; Implementation Details of the Variational Autoencoder; Implementation of the Variational Autoencoder
  - PCA and ZCA Whitening
  - Summary
- Chapter 6: Advanced Neural Networks
  - Image Segmentation: Binary Thresholding Method Based on Histogram of Pixel Intensities; Otsu's Method; Watershed Algorithm for Image Segmentation; Image Segmentation Using K-means Clustering
  - Semantic Segmentation: Sliding-Window Approach; Fully Convolutional Network (FCN); Fully Convolutional Network with Downsampling and Upsampling; Unpooling; Max Unpooling; Transpose Convolution; U-Net; Semantic Segmentation in TensorFlow with Fully Connected Neural Networks
  - Image Classification and Localization Network
  - Object Detection: R-CNN; Fast and Faster R-CNN
  - Generative Adversarial Networks: Maximin and Minimax Problem; Zero-Sum Game; Minimax and Saddle Points; GAN Cost Function and Training; Vanishing Gradient for the Generator; GAN Learning from an F-Divergence Perspective; TensorFlow Implementation of a GAN Network; GAN's Similarity to the Variational Autoencoder; CycleGAN and Its Implementation in TensorFlow
  - Geometric Deep Learning and Graph Neural Networks: Manifolds; Graphs; Adjacency Matrix; Connectedness of a Graph; Vertex Degree; Laplacian Matrix of a Graph and Its Function; Different Versions of the Laplacian Matrix; Different Problem Formulations in Geometric Learning; Multidimensional Scaling; Autoencoders; Locally Linear Embedding; Spectral Embedding; Node2Vec and Its Implementation in TensorFlow
  - Graph Convolution Networks: Spectral Filters in Graph Convolution; Spectral CNN; K-Localized Spectral Filter; ChebNet; Graph Convolution Network (GCN); Implementation of Graph Classification Using GCN; GraphSage; Implementation of Node Classification Using GraphSage; Graph Attention Networks
  - Summary
- Index
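The attention material in Chapter 4 centers on scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal NumPy sketch of that formula (an illustration of the standard definition, not the book's own implementation) looks like:

```python
# Sketch of scaled dot-product attention in NumPy; not the book's code.
import numpy as np

def scaled_dot_product_attention(q, k, v):
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # query-key similarities, scaled by sqrt(d_k)
    # Numerically stable row-wise softmax over the keys.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v               # weighted average of the value vectors

q = np.random.randn(4, 8)  # 4 query vectors of dimension 8
k = np.random.randn(6, 8)  # 6 key vectors
v = np.random.randn(6, 8)  # 6 value vectors
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (4, 8)
```

Multihead attention, also covered in Chapter 4, runs several such attention computations in parallel on learned projections of Q, K, and V and concatenates the results.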
How to download source code?
1. Go to: https://github.com/Apress
2. In the "Find a repository…" box, search for the book title: Pro Deep Learning with TensorFlow 2.0: A Mathematical Approach to Advanced Artificial Intelligence in Python. If you don't get any results, search using just the main title.
3. Click the book title in the search results.
4. Click Code to download the repository.