Unsupervised Learning Approaches for Dimensionality Reduction and Data Visualization
- Length: 174 pages
- Edition: 1
- Language: English
- Publisher: CRC Press
- Publication Date: 2021-09-02
- ISBN-10: 1032041013
- ISBN-13: 9781032041018
Unsupervised Learning Approaches for Dimensionality Reduction and Data Visualization describes algorithms such as Locally Linear Embedding (LLE), Laplacian Eigenmaps, Isomap, Semidefinite Embedding, and t-SNE that address dimensionality reduction when the relationships within the data are non-linear. The underlying mathematical concepts, derivations, and proofs are discussed with logical explanations for each algorithm, including its strengths and limitations. The book highlights important use cases of these algorithms and provides examples along with visualizations. A comparative study of the algorithms is presented to help readers select the most suitable algorithm for a given dataset for efficient dimensionality reduction and data visualization.
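To illustrate the kind of technique the book covers, the linear baseline against which the non-linear methods are compared (PCA, Chapter 2) can be sketched in a few lines of NumPy. This example is illustrative only and is not taken from the book; the function name and data are made up for the sketch.

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components (illustrative sketch)."""
    # Center the data so the covariance is computed about the mean
    Xc = X - X.mean(axis=0)
    # Eigendecomposition of the covariance matrix of the features
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # eigh returns eigenvalues in ascending order; take the largest ones
    order = np.argsort(eigvals)[::-1][:n_components]
    components = eigvecs[:, order]
    # Project the centered data onto the chosen components
    return Xc @ components

# Reduce 5-dimensional synthetic data to 2 dimensions for visualization
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca(X, 2)
print(Z.shape)  # (100, 2)
```

The non-linear methods in later chapters (Isomap, LLE, t-SNE) replace the global linear projection above with embeddings that preserve local neighborhood structure.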
FEATURES
- Demonstrates how unsupervised learning approaches can be used for dimensionality reduction
- Neatly explains algorithms with a focus on the fundamentals and underlying mathematical concepts
- Describes a comparative study of the algorithms and discusses when and where each algorithm is best suited for use
- Provides use cases, illustrative examples, and visualizations of each algorithm
- Helps visualize and create compact representations of high dimensional and intricate data for various real-world applications and data analysis
This book is aimed at professionals, graduate students, and researchers in Computer Science and Engineering, Data Science, Machine Learning, Computer Vision, Data Mining, Deep Learning, Sensor Data Filtering, Feature Extraction for Control Systems, and Medical Instruments Input Extraction.
Contents
- Cover
- Half Title
- Title Page
- Copyright Page
- Dedication
- Contents
- Preface
- Authors
- Chapter 1: Introduction to Dimensionality Reduction
  - 1.1. Introduction
- Chapter 2: Principal Component Analysis (PCA)
  - 2.1. Explanation and Working
  - 2.2. Advantages and Limitations
  - 2.3. Use Cases
  - 2.4. Examples and Tutorial
  - References
- Chapter 3: Dual PCA
  - 3.1. Explanation and Working
    - 3.1.1. Project Data in P-dimensional Space
    - 3.1.2. Reconstruct Training Data
    - 3.1.3. Project Test Data in P-dimensional Space
    - 3.1.4. Reconstruct Test Data
  - References
- Chapter 4: Kernel PCA
  - 4.1. Explanation and Working
    - 4.1.1. Kernel Trick
  - 4.2. Advantages and Limitations
    - 4.2.1. Kernel PCA vs. PCA
  - 4.3. Use Cases
  - 4.4. Examples and Tutorial
  - References
- Chapter 5: Canonical Correlation Analysis (CCA)
  - 5.1. Explanation and Working
  - 5.2. Advantages and Limitations of CCA
  - 5.3. Use Cases and Examples
  - References
- Chapter 6: Multidimensional Scaling (MDS)
  - 6.1. Explanation and Working
  - 6.2. Advantages and Limitations
  - 6.3. Use Cases
  - 6.4. Examples and Tutorial
  - References
- Chapter 7: Isomap
  - 7.1. Explanation and Working
    - 7.1.1. Isomap Algorithm
  - 7.2. Advantages and Limitations
  - 7.3. Use Cases
  - 7.4. Examples and Tutorial
  - References
- Chapter 8: Random Projections
  - 8.1. Explanation and Working
  - 8.2. Advantages and Limitations
  - 8.3. Use Cases
  - 8.4. Examples and Tutorial
  - References
- Chapter 9: Locally Linear Embedding
  - 9.1. Explanation and Working
  - 9.2. Advantages and Limitations
  - 9.3. Use Cases
  - 9.4. Example and Tutorial
  - References
- Chapter 10: Spectral Clustering
  - 10.1. Explanation and Working
  - 10.2. Advantages and Limitations
  - 10.3. Use Cases
  - 10.4. Examples and Tutorial
  - References
- Chapter 11: Laplacian Eigenmap
  - 11.1. Explanation and Working
  - 11.2. Advantages and Limitations
  - 11.3. Use Cases
  - 11.4. Examples and Tutorial
  - References
- Chapter 12: Maximum Variance Unfolding
  - 12.1. Explanation and Working
    - 12.1.1. Constraints on the Optimization
    - 12.1.2. Objective Function
  - 12.2. Advantages and Limitations
  - 12.3. Use Cases
  - References
- Chapter 13: t-Distributed Stochastic Neighbor Embedding (t-SNE)
  - 13.1. Explanation and Working
    - 13.1.1. Stochastic Neighbor Embedding (SNE)
  - 13.2. Advantages and Limitations
  - 13.3. Use Cases
  - 13.4. Examples
  - References
- Chapter 14: Comparative Analysis of Dimensionality Reduction Techniques
  - 14.1. Introduction
    - 14.1.1. Dimensionality Reduction Techniques
  - 14.2. Convex Dimensionality Reduction Techniques
    - 14.2.1. Full Spectral Techniques
    - 14.2.2. Sparse Spectral Techniques
  - 14.3. Non-Convex Techniques for Dimensionality Reduction
  - 14.4. Comparison of Dimensionality Reduction Techniques
  - 14.5. Comparison of Manifold Learning Methods with Example
  - 14.6. Discussion
  - References
- Glossary of Words and Concepts
- Index