Data Orchestration in Deep Learning Accelerators
Tentative Chinese title: 深度學習加速器中的數據調度

Tushar Krishna, Hyoukjun Kwon, and Angshuman Parashar

  • Publisher: Morgan & Claypool
  • Publication date: 2020-08-18
  • List price: $2,500
  • VIP price: 5% off, $2,375
  • Language: English
  • Pages: 164
  • Binding: Quality Paper (trade paperback)
  • ISBN: 1681738694
  • ISBN-13: 9781681738697
  • Related categories: DeepLearning
  • In stock, ships immediately (stock = 1)

Description

This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs, along with future trends. The target audience is students, engineers, and researchers interested in designing high-performance, low-energy accelerators for DNN inference.
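As a rough illustration of why the dataflow choice the book discusses matters, the sketch below counts external-DRAM accesses for a matrix multiply under two dataflows. This is a deliberately simplified cost model with hypothetical function names (not taken from the book): it assumes one DRAM access per operand fetch and no buffering beyond what each dataflow explicitly keeps on-chip.

```python
# Toy model of DRAM traffic for C = A @ B with A: MxK, B: KxN, C: MxN.
# Assumption: every operand not held on-chip costs one DRAM access.

def dram_accesses_naive(M, N, K):
    """No on-chip reuse: each multiply-accumulate fetches A[m][k] and
    B[k][n], then reads and writes the partial sum C[m][n] (4 accesses)."""
    return 4 * M * N * K

def dram_accesses_output_stationary(M, N, K):
    """Output-stationary dataflow: each partial sum C[m][n] stays in a
    local register for the whole K-dimension reduction, so C costs only
    one final writeback; A and B are still streamed per multiply."""
    return 2 * M * N * K + M * N

if __name__ == "__main__":
    M = N = K = 64
    print(dram_accesses_naive(M, N, K))              # 1048576
    print(dram_accesses_output_stationary(M, N, K))  # 528384
```

Keeping the partial sum on-chip roughly halves the traffic in this model; buffering tiles of A or B would cut it further, and trade-offs of this kind are what the book's chapters on dataflows, buffer hierarchies, and design-space exploration examine.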
