Artificial Intelligence Hardware Design: Challenges and Solutions (Hardcover)
Chinese title (provisional): 人工智慧硬體設計:挑戰與解決方案 (精裝版)
Liu, Albert Chun-Chen, Law, Oscar Ming Kin
Customers who bought this book also bought:
- 手把手教你設計 CPU-RISC-V 處理器篇 ($505)
- UVM 實戰 ($412)
- Introduction to Algorithms, 4/e (Hardcover) ($2,146)
- Understanding Artificial Intelligence: Fundamentals and Applications (Hardcover) ($1,615)
- Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems, 3/e (Paperback) ($2,520)
- Machine Learning Theory and Applications: Hands-On Use Cases with Python on Classical and Quantum Machines ($2,993)
Description
This book covers the design of application-specific circuits and systems for accelerating neural network processing. Chapter 1 introduces neural networks and discusses their development history. Chapter 2 reviews the Convolutional Neural Network (CNN) model and describes each layer's function with examples. Chapter 3 surveys parallel architectures such as the Intel CPU, Nvidia GPU, Google TPU, and Microsoft NPU. Chapter 4 introduces streaming-graph architectures for massively parallel computation through the Blaize GSP and Graphcore IPU. Chapter 5 shows how to optimize convolution with the filter decomposition of UCLA's Deep Convolutional Neural Network (DCNN) accelerator and the Row Stationary dataflow of MIT's Eyeriss accelerator. Chapter 6 illustrates in-memory computation through Georgia Tech's Neurocube and Stanford's Tetris accelerator, both built on the Hybrid Memory Cube (HMC). Chapter 7 highlights near-memory architecture through the embedded eDRAM of the DaDianNao supercomputer from the Institute of Computing Technology (ICT), Chinese Academy of Sciences, among others. Chapter 8 describes how Stanford's Energy Efficient Inference Engine, ICT, and others exploit network sparsity through network pruning. Chapter 9 introduces a 3D neural processing technique to support multilayer neural networks; it also offers a network bridge to overcome power and thermal challenges as well as the memory bottleneck.
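The convolution operation reviewed in Chapter 2, and which the accelerators in Chapters 5 through 7 optimize, can be sketched directly. This is an illustrative example, not code from the book: a single-channel, valid-padding 2D convolution in plain NumPy (a real CNN layer sums this over many input channels and kernels).

```python
import numpy as np

def conv2d(feature_map, kernel, stride=1):
    """Direct 2D convolution (valid padding) over one input channel."""
    kh, kw = kernel.shape
    h, w = feature_map.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            # Multiply-accumulate over one kernel-sized window
            window = feature_map[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(window * kernel)
    return out

# A 3x3 kernel sliding over a 5x5 feature map yields a 3x3 output.
fm = np.arange(25, dtype=float).reshape(5, 5)
k = np.array([[1., 0., -1.]] * 3)   # simple vertical-edge kernel
print(conv2d(fm, k).shape)  # prints (3, 3)
```

The nested loops make the data reuse explicit: each input value is read by several overlapping windows, which is exactly the reuse that dataflows like Eyeriss's Row Stationary are designed to exploit.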
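The network sparsity that Chapter 8's accelerators exploit typically comes from magnitude-based weight pruning. As a rough illustration (my sketch under that assumption, not the book's EIE implementation), one-shot magnitude pruning zeroes the smallest weights:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.75):
    """One-shot magnitude pruning: zero the smallest-magnitude weights
    so that roughly `sparsity` of the entries become zero."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                 # number of weights to prune
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold            # keep only larger weights
    return weights * mask

w = np.array([[0.1, -0.5],
              [2.0, -0.03]])
print(magnitude_prune(w, sparsity=0.5))  # zeroes 0.1 and -0.03
```

After pruning, the zero weights can be skipped in storage and computation, which is where sparse accelerators recover their energy and memory savings.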
About the Authors
Albert Liu, PhD, is the Chief Executive Officer of Kneron and an Adjunct Associate Professor at National Tsing Hua University, National Chiao Tung University, and National Cheng Kung University. He has published over 15 IEEE papers and is an IEEE Senior Member.
Oscar Ming Kin Law, PhD, is a Senior Staff member in Physical Design at Qualcomm Inc. He has over twenty years of semiconductor-industry experience working on CPUs, GPUs, FPGAs, and mobile designs.