Artificial Intelligence Hardware Design: Challenges and Solutions
- Length: 240 pages
- Edition: 1
- Language: English
- Publisher: Wiley-IEEE Press
- Publication Date: 2021-08-31
- ISBN-10: 1119810450
- ISBN-13: 9781119810452
Learn foundational and advanced topics in Neural Processing Unit design with real-world examples from leading voices in the field
In Artificial Intelligence Hardware Design: Challenges and Solutions, distinguished researchers and authors Drs. Albert Chun Chen Liu and Oscar Ming Kin Law deliver a rigorous and practical treatment of the design of application-specific circuits and systems for accelerating neural network processing. Beginning with a discussion and explanation of neural networks and their developmental history, the book goes on to describe parallel architectures, streaming graphs for massively parallel computation, and convolution optimization.
The authors offer readers an illustration of in-memory computation through Georgia Tech’s Neurocube and Stanford’s Tetris accelerator using the Hybrid Memory Cube, as well as near-memory architecture through the embedded DRAM (eDRAM) of the Institute of Computing Technology, the Chinese Academy of Sciences, and other institutions.
Readers will also find a discussion of 3D neural processing techniques to support multilayer neural networks, along with:
- A thorough introduction to neural networks and neural network development history, as well as Convolutional Neural Network (CNN) models
- Explorations of various parallel architectures, including the Intel CPU, Nvidia GPU, Google TPU, and Microsoft NPU, emphasizing hardware and software integration for performance improvement
- Discussions of streaming graphs for massively parallel computation with the Blaize GSP and Graphcore IPU
- An examination of convolution optimization through the filter decomposition of the UCLA Deep Convolutional Neural Network accelerator
Perfect for hardware and software engineers and firmware developers, Artificial Intelligence Hardware Design is an indispensable resource for anyone working with Neural Processing Units in either a hardware or software capacity.
Table of Contents
- Cover
- Series Page
- Title Page
- Copyright Page
- Author Biographies
- Preface
- Acknowledgments
- Table of Figures
- 1 Introduction: 1.1 Development History; 1.2 Neural Network Models; 1.3 Neural Network Classification; 1.4 Neural Network Framework; 1.5 Neural Network Comparison; Exercise; References
- 2 Deep Learning: 2.1 Neural Network Layer; 2.2 Deep Learning Challenges; Exercise; References
- 3 Parallel Architecture: 3.1 Intel Central Processing Unit (CPU); 3.2 NVIDIA Graphics Processing Unit (GPU); 3.3 NVIDIA Deep Learning Accelerator (NVDLA); 3.4 Google Tensor Processing Unit (TPU); 3.5 Microsoft Catapult Fabric Accelerator; Exercise; References
- 4 Streaming Graph Theory: 4.1 Blaize Graph Streaming Processor; 4.2 Graphcore Intelligence Processing Unit; Exercise; References
- 5 Convolution Optimization: 5.1 Deep Convolutional Neural Network Accelerator; 5.2 Eyeriss Accelerator; Exercise; References
- 6 In-Memory Computation: 6.1 Neurocube Architecture; 6.2 Tetris Accelerator; 6.3 NeuroStream Accelerator; Exercise; References
- 7 Near-Memory Architecture: 7.1 DaDianNao Supercomputer; 7.2 Cnvlutin Accelerator; Exercise; References
- 8 Network Sparsity: 8.1 Energy Efficient Inference Engine (EIE); 8.2 Cambricon-X Accelerator; 8.3 SCNN Accelerator; 8.4 SeerNet Accelerator; Exercise; References
- 9 3D Neural Processing: 9.1 3D Integrated Circuit Architecture; 9.2 Power Distribution Network; 9.3 3D Network Bridge; 9.4 Power-Saving Techniques; Exercise; References
- Appendix A: Neural Network Topology
- Index
- End User License Agreement