Central processing unit / Microprocessors / Computer memory / CPU cache / Cache / Runahead / Multi-core processor / Microarchitecture / Memory-level parallelism / Computer hardware / Computer architecture / Computing
Date: 2006-10-02 23:12:53

Scalable Cache Miss Handling for High Memory-Level Parallelism

Source URL: iacoma.cs.uiuc.edu

File Size: 1.29 MB

Similar Documents

Exploiting Instruction-Level Parallelism for Memory System Performance, by Vijay S. Pai (Rice University)

DocID: 1t1wH

Cache / Central processing unit / Computer memory / Computer architecture / Parallel computing / CPU cache / Locality of reference / Benchmark / Microarchitecture / Instruction set / Instruction-level parallelism / Draft:Cache memory

Insight into Application Performance Using Application-Dependent Characteristics, by Waleed Alkohlani, Jeanine Cook, and Nafiul Siddique (Klipsch School of Electrical and Computer Engineering)

DocID: 1oxIu

Software pipelining / CPU cache / Cache / Parallel computing / Instruction-level parallelism / Computer engineering / Computer hardware / Computer memory / Computing

Comparing and Combining Read Miss Clustering and Software Prefetching  

DocID: 18Cdd

Dynamic random-access memory / Synchronous dynamic random-access memory / CAS latency / SDRAM latency / Serial presence detect / Memory controller / DDR3 SDRAM / Random-access memory / CPU cache / Computer memory / Computer hardware / Computing

A Case for Exploiting Subarray-Level Parallelism (SALP) in DRAM, by Yoongu Kim, Vivek Seshadri, Donghyuk Lee

DocID: 18zkh

Computer hardware / Instruction-level parallelism / Stream processing / Central processing unit / Speedup / GPGPU / Very long instruction word / Computer memory / Computing / Parallel computing / Computer architecture

Why GPUs? by Robert Strzodka (MPII), Dominik Göddeke (TUDo), and Dominik Behr (AMD). PPAM 2009, Conference on Parallel Processing and Applied Mathematics, Wroclaw, Poland, September 13-16, 2009

DocID: 11F0u