Models and Algorithms for Unlabelled Data
- Length: 250 pages
- Edition: 1
- Language: English
- Publisher: Manning
- Publication Date: 2022-05-31
- ISBN-10: 1617298727
- ISBN-13: 9781617298721
Discover practical implementations of the key algorithms and models for handling unlabelled data, full of case studies demonstrating how to apply each technique to real-world problems.
Models and Algorithms for Unlabelled Data introduces mathematical techniques, key algorithms, and Python implementations that will help you build machine learning models for unannotated data.
You’ll master everything from k-means and hierarchical clustering to advanced neural networks like GANs and Restricted Boltzmann Machines. You’ll learn the business use cases for different models, and master best practices for structured, text, and image data. Each new algorithm is introduced with a case study from retail, aviation, banking, and more—and you’ll develop a Python solution to solve each of these real-world problems. At the end of each chapter, you’ll find quizzes, practice datasets, and links to research papers to help you lock in what you’ve learned and expand your knowledge.
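As a taste of the kind of Python solution the book builds, here is a minimal k-means sketch using scikit-learn; the toy dataset and the choice of two clusters are illustrative assumptions, not an example from the book itself.

```python
# Minimal k-means clustering sketch with scikit-learn.
# The data points and n_clusters=2 are hypothetical, chosen so the
# two groups are well separated and easy to inspect.
import numpy as np
from sklearn.cluster import KMeans

points = np.array([[1.0, 1.0], [1.2, 0.8], [0.9, 1.1],
                   [8.0, 8.0], [8.2, 7.9], [7.8, 8.1]])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)

print(model.labels_)           # cluster assignment for each point
print(model.cluster_centers_)  # coordinates of the two centroids
```

In practice, the number of clusters "k" is rarely obvious up front; techniques such as the elbow method (covered in section 2.3.3 of the book) help choose it.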
Purchase of the print book includes a free eBook in PDF, Kindle, and ePub formats from Manning Publications.
Models and Algorithms for Unlabelled Data MEAP V03
Copyright
Welcome
Brief contents
1: Introduction to machine learning
  1.1 Data, data types, data management and quality
    1.1.1 What is data
    1.1.2 Various types of data
    1.1.3 Data quality
    1.1.4 Data engineering and management
  1.2 Data analysis, machine learning, artificial intelligence and business intelligence
  1.3 Nuts and bolts of machine learning
  1.4 Types of machine learning algorithms
    1.4.1 Supervised learning
    1.4.2 Unsupervised algorithms
    1.4.3 Semi-supervised algorithms
    1.4.4 Reinforcement learning
  1.5 Technical toolkit
  1.6 Summary
2: Clustering techniques
  2.1 Technical toolkit
  2.2 Clustering
    2.2.1 Clustering techniques
  2.3 Centroid-based clustering
    2.3.1 K-means clustering
    2.3.2 Measuring the accuracy of clustering
    2.3.3 Finding the optimum value of “k”
    2.3.4 Pros and cons of k-means clustering
    2.3.5 K-means clustering implementation using Python
  2.4 Connectivity-based clustering
    2.4.1 Types of hierarchical clustering
    2.4.2 Linkage criteria for distance measurement
    2.4.3 Optimal number of clusters
    2.4.4 Pros and cons of hierarchical clustering
    2.4.5 Hierarchical clustering case study using Python
  2.5 Density-based clustering
    2.5.1 Neighborhood and density
    2.5.2 DBSCAN clustering
  2.6 Case study using clustering
  2.7 Common challenges faced in clustering
  2.8 Summary
3: Dimensionality reduction
  3.1 Technical toolkit
  3.2 Curse of dimensionality
  3.3 Dimension reduction methods
    3.3.1 Mathematical foundation
  3.4 Manual methods of dimensionality reduction
    3.4.1 Algorithm-based methods for reducing dimensions
  3.5 Principal Component Analysis (PCA)
    3.5.1 Eigenvalue decomposition
    3.5.2 Python solution using eigenvalue decomposition
  3.6 Singular Value Decomposition (SVD)
    3.6.1 Python solution using SVD
  3.7 Pros and cons of dimensionality reduction
  3.8 Case study for dimension reduction
  3.9 Summary
4: Association rules
  4.1 Technical toolkit
  4.2 Association rule learning
  4.3 Building blocks of association rules
    4.3.1 Support, confidence, lift, and conviction
    4.3.2 Support
    4.3.3 Confidence
    4.3.4 Lift and conviction
  4.4 Apriori algorithm
    4.4.1 Python implementation
    4.4.2 Challenges with the Apriori algorithm
  4.5 Equivalence class clustering and bottom-up lattice traversal (ECLAT)
    4.5.1 Python implementation
  4.6 Frequent-Pattern growth algorithm (FP-growth)
  4.7 Sequence rule mining
    4.7.1 SPADE
  4.8 Case study for association rules
  4.9 Limitations and summary