A Primer on Hardware Prefetching (Synthesis Lectures on Computer Architecture)
Babak Falsafi, Thomas F. Wenisch
- Publisher: Morgan & Claypool
- Publication Date: 2014-05-01
- Price: $1,290
- VIP Price: 5% off, $1,226
- Language: English
- Pages: 68
- Binding: Paperback
- ISBN: 1608459527
- ISBN-13: 9781608459520
Overseas import title (must be checked out separately)
Customers who bought this item also bought...
- $1,450 Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers, 3/e (IE-Paperback)
- $1,323 Essentials of Discrete Mathematics, 3/e (Hardcover)
- $709 Computer Systems: A Programmer's Perspective, 3/e
- $3,430 Computer Architecture: A Quantitative Approach, 6/e (Paperback)
- $1,568 Introduction to Compiler Design: An Object-Oriented Approach Using Java(R)
- $2,146 Introduction to Algorithms, 4/e (Hardcover)
Product Description
Since the 1970s, microprocessor-based digital platforms have been riding Moore's law, with device density roughly doubling for the same area every two years. However, whereas microprocessor fabrication has focused on increasing instruction execution rate, memory fabrication technologies have focused primarily on increasing capacity, with negligible increases in speed. This divergent performance trend between processors and memory has led to a phenomenon referred to as the "Memory Wall."
To overcome the memory wall, designers have resorted to a hierarchy of cache memory levels, which rely on the principle of memory access locality to reduce the observed memory access time and the performance gap between processors and memory. Unfortunately, important workload classes exhibit adverse memory access patterns that baffle the simple policies built into modern cache hierarchies for moving instructions and data across cache levels. As a result, processors often spend much time idling on demand fetches of memory blocks that miss in the higher cache levels. Prefetching—predicting future memory accesses and issuing requests for the corresponding memory blocks in advance of explicit accesses—is an effective approach to hiding memory access latency. A myriad of prefetching techniques have been proposed, and nearly every modern processor includes some hardware prefetching mechanism targeting simple and regular memory access patterns. This primer offers an overview of the various classes of hardware prefetchers for instructions and data proposed in the research literature, and presents examples of techniques incorporated into modern microprocessors.
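To illustrate the kind of "simple and regular" access pattern such hardware prefetchers target, here is a minimal software sketch of a classic per-PC stride prefetcher. This is a hypothetical model for illustration only, not an implementation taken from the book: the table layout and confirmation policy are assumptions.

```python
class StridePrefetcher:
    """Toy model of a per-PC stride prefetcher (illustrative sketch only)."""

    def __init__(self):
        # Maps the program counter of a load to (last address, last stride).
        self.table = {}

    def access(self, pc, addr):
        """Observe one demand access; return an address to prefetch, or None."""
        prefetch = None
        if pc in self.table:
            last_addr, last_stride = self.table[pc]
            stride = addr - last_addr
            # Prefetch only once the same stride has been seen twice in a row,
            # i.e., the regular pattern is confirmed.
            if stride == last_stride and stride != 0:
                prefetch = addr + stride
            self.table[pc] = (addr, stride)
        else:
            self.table[pc] = (addr, 0)
        return prefetch


# A loop striding through an array 8 bytes at a time trains the prefetcher:
# the first two accesses train the table entry, later ones trigger prefetches.
pf = StridePrefetcher()
hits = [pf.access(pc=0x400, addr=0x1000 + 8 * i) for i in range(4)]
# hits -> [None, None, 0x1018, 0x1020]
```

Real hardware implementations add confidence counters, limited table capacity, and prefetch-degree controls, but the train-then-predict structure above is the essence of stride-based data prefetching.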
Table of Contents: Preface / Introduction / Instruction Prefetching / Data Prefetching / Concluding Remarks / Bibliography / Author Biographies