Caches
Caching is a technique for improving the performance of a process that is likely to be executed more than once. Browsers, for example, use it to speed up access to web pages when they are re-visited. It is also used by RAID controllers to speed up disk access, by TLBs to speed up virtual-to-physical address translation, and by CPUs to speed up DRAM access. This lecture explores the concepts that underlie caching. It covers single-word versus multi-word blocks and direct-mapped versus associative caches, and highlights the issues related to data caches, multi-level caches, and TLBs.
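The caching idea can be sketched in a few lines (a hypothetical illustration, not from the lecture or textbook): an expensive operation is performed once, and its result is remembered so that repeated requests are served from the cache instead of being recomputed.

```python
cache = {}

def slow_square(n):
    """Stand-in for an expensive operation (e.g. a disk or DRAM access)."""
    return n * n

def cached_square(n):
    if n in cache:            # cache hit: reuse the stored result
        return cache[n]
    result = slow_square(n)   # cache miss: compute and remember
    cache[n] = result
    return result

cached_square(7)   # first call: a miss, so the result is computed
cached_square(7)   # second call: a hit, served from the cache
```

This only pays off when the same request recurs, which is exactly the temporal-locality assumption discussed below.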
Outline
- The caching idea
- The hashing function
- The byte offset, index, and tag fields – redundant bits
- Multi-Word Blocks
- Fully-Associative and Set-Associative Caches
- Data Caches: write-back, write-through, and cache splitting
- L1 and L2 Caches
- TLB and virtual memory
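The byte offset, index, and tag fields in the outline can be sketched for a hypothetical direct-mapped cache (the block and cache sizes below are illustrative, not the lecture's):

```python
BLOCK_SIZE = 16      # bytes per block  -> 4 offset bits
NUM_BLOCKS = 256     # blocks in cache  -> 8 index bits

OFFSET_BITS = BLOCK_SIZE.bit_length() - 1   # log2(16) = 4
INDEX_BITS  = NUM_BLOCKS.bit_length() - 1   # log2(256) = 8

def split_address(addr):
    """Split a 32-bit byte address into (tag, index, byte offset)."""
    offset = addr & (BLOCK_SIZE - 1)
    index  = (addr >> OFFSET_BITS) & (NUM_BLOCKS - 1)
    tag    = addr >> (OFFSET_BITS + INDEX_BITS)
    return tag, index, offset
```

Note the redundant bits: a block's index is implied by which cache line it occupies, and the offset selects a byte within the stored block, so only the tag needs to be kept alongside the data.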
Big Ideas
- Utilizing Temporal Locality
- Utilizing Spatial Locality
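Both kinds of locality can be demonstrated with a tiny direct-mapped cache simulation (an illustrative sketch; the parameters are invented for this example):

```python
BLOCK_SIZE = 4   # words per block
NUM_LINES  = 8   # cache lines

def hit_rate(addresses):
    """Simulate a direct-mapped cache and return the fraction of hits."""
    lines = [None] * NUM_LINES         # each line stores a tag (or None)
    hits = 0
    for addr in addresses:
        block = addr // BLOCK_SIZE
        index = block % NUM_LINES
        tag   = block // NUM_LINES
        if lines[index] == tag:
            hits += 1
        else:
            lines[index] = tag         # miss: fetch the block into the line
    return hits / len(addresses)

# Spatial locality: sequential words share a block, so each miss brings in
# a block whose remaining 3 words then hit (75% hit rate).
sequential = list(range(64))

# Temporal locality: re-accessing the same word hits on every access
# after the first compulsory miss (63/64 hit rate).
repeated = [5] * 64
```

Multi-word blocks exploit spatial locality, while keeping a block resident between accesses exploits temporal locality; the simulation makes both effects measurable.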
Slides from Lecture
To Do
- Read Sections 7.1 through 7.3 of the textbook.
- Do the cache exercises (in the Resources page).
caches.txt · Last modified: 2007/11/24 21:21 by roumani