Algorithms and Data Structures for Massive Datasets
- Length: 285 pages
- Edition: 1
- Language: English
- Publisher: Manning
- Publication Date: 2022-04-26
- ISBN-10: 1617298034
- ISBN-13: 9781617298035
- Sales Rank: #4373006
Data structures and algorithms that work well for traditional software can quickly bog down or fail altogether when applied to huge datasets. Algorithms and Data Structures for Massive Datasets introduces a toolbox of techniques designed precisely for these modern big data applications.
In Algorithms and Data Structures for Massive Datasets, you’ll discover methods for reducing and sketching data so that it fits in small memory with only a small, controllable loss of accuracy, and you’ll unlock the algorithms and data structures that form the backbone of a big data system. Filled with fun illustrations and examples from real-world businesses, the book shows you how each of these techniques can be practically applied to maximize the accuracy and throughput of big data processing and analytics.
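To make “sketching” concrete, here is a minimal Bloom filter of the kind chapter 3 develops: a bit array plus a few hash functions that answers membership queries in tiny space, at the cost of occasional false positives (but never false negatives). This is our own illustrative sketch for this listing, not code from the book; the bit-array size, hash count, and MD5-based hashing are arbitrary toy choices.

```python
# Minimal Bloom filter sketch (illustration only, not the book's code).
import hashlib

class BloomFilter:
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k bit positions by seeding one digest with the hash index.
        for seed in range(self.num_hashes):
            digest = hashlib.md5(f"{seed}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # True means "probably present"; False is always definitive.
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice@example.com")
print(bf.might_contain("alice@example.com"))  # True
print(bf.might_contain("bob@example.com"))    # False (with high probability)
```

Tuning num_bits and num_hashes against the expected number of items is exactly the error-versus-space trade-off the book works through in chapter 3.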
Contents
brief contents
contents
preface
acknowledgments
about this book
  Who should read this book
  How this book is organized: A road map
  About the code
  liveBook discussion forum
about the authors
about the cover illustration
1 Introduction
  1.1 An example
    1.1.1 An example: How to solve it
    1.1.2 How to solve it, take two: A book walkthrough
  1.2 The structure of this book
  1.3 What makes this book different and whom it is for
  1.4 Why is massive data so challenging for today’s systems?
    1.4.1 The CPU memory performance gap
    1.4.2 Memory hierarchy
    1.4.3 Latency vs. bandwidth
    1.4.4 What about distributed systems?
  1.5 Designing algorithms with hardware in mind
  Summary
Part 1—Hash-based sketches
2 Review of hash tables and modern hashing
  2.1 Ubiquitous hashing
  2.2 A crash course on data structures
  2.3 Usage scenarios in modern systems
    2.3.1 Deduplication in backup/storage solutions
    2.3.2 Plagiarism detection with MOSS and Rabin–Karp fingerprinting
  2.4 O(1)—What's the big deal?
  2.5 Collision resolution: Theory vs. practice
  2.6 Usage scenario: How Python’s dict does it
  2.7 MurmurHash
  2.8 Hash tables for distributed systems: Consistent hashing
    2.8.1 A typical hashing problem
    2.8.2 Hashring
    2.8.3 Lookup
    2.8.4 Adding a new node/resource
    2.8.5 Removing a node
    2.8.6 Consistent hashing scenario: Chord
    2.8.7 Consistent hashing: Programming exercises
  Summary
3 Approximate membership: Bloom and quotient filters
  3.1 How it works
    3.1.1 Insert
    3.1.2 Lookup
  3.2 Use cases
    3.2.1 Bloom filters in networks: Squid
    3.2.2 Bitcoin mobile app
  3.3 A simple implementation
  3.4 Configuring a Bloom filter
    3.4.1 Playing with Bloom filters: Mini experiments
  3.5 A bit of theory
    3.5.1 Can we do better?
  3.6 Bloom filter adaptations and alternatives
  3.7 Quotient filter
    3.7.1 Quotienting
    3.7.2 Understanding metadata bits
    3.7.3 Inserting into a quotient filter: An example
    3.7.4 Python code for lookup
    3.7.5 Resizing and merging
    3.7.6 False positive rate and space considerations
  3.8 Comparison between Bloom filters and quotient filters
  Summary
4 Frequency estimation and count-min sketch
  4.1 Majority element
    4.1.1 General heavy hitters
  4.2 Count-min sketch: How it works
    4.2.1 Update
    4.2.2 Estimate
  4.3 Use cases
    4.3.1 Top-k restless sleepers
    4.3.2 Scaling the distributional similarity of words
  4.4 Error vs. space in count-min sketch
  4.5 A simple implementation of count-min sketch
    4.5.1 Exercises
    4.5.2 Intuition behind the formula: Math bit
  4.6 Range queries with count-min sketch
    4.6.1 Dyadic intervals
    4.6.2 Update phase
    4.6.3 Estimate phase
    4.6.4 Computing dyadic intervals
  Summary
5 Cardinality estimation and HyperLogLog
  5.1 Counting distinct items in databases
  5.2 HyperLogLog incremental design
    5.2.1 The first cut: Probabilistic counting
    5.2.2 Stochastic averaging, or “when life gives you lemons”
    5.2.3 LogLog
    5.2.4 HyperLogLog: Stochastic averaging with harmonic mean
  5.3 Use case: Catching worms with HLL
  5.4 But how does it work? A mini experiment
    5.4.1 The effect of the number of buckets (m)
  5.5 Use case: Aggregation using HyperLogLog
  Summary
Part 2—Real-time analytics
6 Streaming data: Bringing everything together
  6.1 Streaming data system: A meta example
    6.1.1 Bloom-join
    6.1.2 Deduplication
    6.1.3 Load balancing and tracking the network traffic
  6.2 Practical constraints and concepts in data streams
    6.2.1 In real time
    6.2.2 Small time and small space
    6.2.3 Concept shifts and concept drifts
    6.2.4 Sliding window model
  6.3 Math bit: Sampling and estimation
    6.3.1 Biased sampling strategy
    6.3.2 Estimation from a representative sample
  Summary
7 Sampling from data streams
  7.1 Sampling from a landmark stream
    7.1.1 Bernoulli sampling
    7.1.2 Reservoir sampling
    7.1.3 Biased reservoir sampling
  7.2 Sampling from a sliding window
    7.2.1 Chain sampling
    7.2.2 Priority sampling
  7.3 Sampling algorithms comparison
    7.3.1 Simulation setup: Algorithms and data
  Summary
8 Approximate quantiles on data streams
  8.1 Exact quantiles
  8.2 Approximate quantiles
    8.2.1 Additive error
    8.2.2 Relative error
    8.2.3 Relative error in the data domain
  8.3 T-digest: How it works
    8.3.1 Digest
    8.3.2 Scale functions
    8.3.3 Merging t-digests
    8.3.4 Space bounds for t-digest
  8.4 Q-digest
    8.4.1 Constructing a q-digest from scratch
    8.4.2 Merging q-digests
    8.4.3 Error and space considerations in q-digests
    8.4.4 Quantile queries with q-digests
  8.5 Simulation code and results
  Summary
Part 3—Data structures for databases and external memory algorithms
9 Introducing the external memory model
  9.1 External memory model: The preliminaries
  9.2 Example 1: Finding a minimum
    9.2.1 Use case: Minimum median income
  9.3 Example 2: Binary search
    9.3.1 Bioinformatics use case
    9.3.2 Runtime analysis
  9.4 Optimal searching
  9.5 Example 3: Merging K sorted lists
    9.5.1 Merging time/date logs
    9.5.2 External memory model: Simple or simplistic?
  9.6 What’s next
  Summary
10 Data structures for databases: B-trees, Bε-trees, and LSM-trees
  10.1 How indexing works
  10.2 Data structures in this chapter
  10.3 B-trees
    10.3.1 B-tree balancing
    10.3.2 Lookup
    10.3.3 Insert
    10.3.4 Delete
    10.3.5 B+-trees
    10.3.6 How operations on a B+-tree are different
    10.3.7 Use case: B-trees in MySQL (and many other places)
  10.4 Math bit: Why are B-tree lookups optimal in external memory?
    10.4.1 Why B-tree inserts/deletes are not optimal in external memory
  10.5 Bε-trees
    10.5.1 Bε-tree: How it works
    10.5.2 Buffering mechanics
    10.5.3 Inserts and deletes
    10.5.4 Lookups
    10.5.5 Cost analysis
    10.5.6 Bε-tree: The spectrum of data structures
    10.5.7 Use case: Bε-trees in TokuDB
    10.5.8 Make haste slowly, the I/O way
  10.6 Log-structured merge-trees (LSM-trees)
    10.6.1 The LSM-tree: How it works
    10.6.2 LSM-tree cost analysis
    10.6.3 Use case: LSM-trees in Cassandra
  Summary
11 External memory sorting
  11.1 Sorting use cases
    11.1.1 Robot motion planning
    11.1.2 Cancer genomics
  11.2 Challenges of sorting in external memory: An example
    11.2.1 Two-way merge-sort in external memory
  11.3 External memory merge-sort (M/B-way merge-sort)
    11.3.1 Searching and sorting in RAM vs. external memory
  11.4 What about external quick-sort?
    11.4.1 External memory two-way quick-sort
    11.4.2 Toward external memory multiway quick-sort
    11.4.3 Finding enough pivots
    11.4.4 Finding good enough pivots
    11.4.5 Putting it all back together
  11.5 Math bit: Why is external memory merge-sort optimal?
  11.6 Wrapping up
  Summary
references
index
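For a taste of the streaming techniques in part 2 of the table of contents above, here is reservoir sampling (section 7.1.2), which keeps a uniform random sample of k items from a stream of unknown length in O(k) memory. Again, this is our own minimal sketch of the classic Algorithm R, not the book's code, and the sample size and million-integer "stream" are toy values.

```python
# Reservoir sampling (Algorithm R), a minimal sketch -- illustration only.
import random

def reservoir_sample(stream, k):
    reservoir = []
    for i, item in enumerate(stream, start=1):
        if i <= k:
            reservoir.append(item)   # fill the reservoir with the first k items
        else:
            j = random.randrange(i)  # uniform in [0, i)
            if j < k:                # item i survives with probability k/i
                reservoir[j] = item
    return reservoir

# Sample 5 items uniformly from a "stream" of a million integers.
print(reservoir_sample(iter(range(1_000_000)), k=5))
```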