Article | REF: H1002 V1

Memory hierarchy: caches

Authors: Daniel ETIEMBLE, François ANCEAU

Publication date: August 10, 2012 | Review date: March 8, 2022




ABSTRACT

There is a huge gap between CPU speed and the access times and bandwidths of the different types of memory used in computer systems: access times increase and bandwidths decrease as one moves away from the CPU. This article describes the principles and operation of the cache hierarchies located between the CPU and main memory, both for single-processor computers and for multiprocessor and multicore ones. Basic features and techniques for improving cache performance are introduced. The different cache coherency protocols are presented. The interactions between caches and secondary memories, such as disks and other storage units, are described. Finally, the main software optimizations for cache hierarchies are mentioned.


INTRODUCTION

The aim of this article is to study the hierarchy of caches located between a computer's processor(s) and its main memory. There is an enormous performance gap between the operating speed of a processor and, more generally, between the access times and transfer rates of memory elements located on the processor chip and those of memory located on other chips. Between a processor and its main memory sits a hierarchy of caches, some on the processor chip and others on external chips, which act as bandwidth and access-time adapters, since bandwidth decreases and access time increases with distance from the processor. The other part of the memory hierarchy, between main memory and the disks and other storage units, is the subject of a separate article.

This article presents the operating principles of caches and the hardware techniques for improving their performance, whether for low-end single-processor systems, for systems whose processors execute several instructions per cycle, or for parallel systems using multicore processors or clusters of multicores. The various techniques for ensuring cache coherence are presented, from basic centralized or decentralized protocols to protocols for hierarchical architectures.
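To make the idea of a decentralized (snooping) protocol concrete, here is a minimal sketch, in C, of how a single cache line might change state under the classic MESI protocol; the event names, the helper function and the others_have_copy flag are illustrative assumptions, not code from this article.

    /* Illustrative sketch of MESI cache-line state transitions.
     * This is a simplification for exposition, not the article's code. */
    #include <stdio.h>

    typedef enum { MODIFIED, EXCLUSIVE, SHARED, INVALID } mesi_state;

    typedef enum {
        LOCAL_READ,   /* this core reads the line                            */
        LOCAL_WRITE,  /* this core writes the line                           */
        BUS_READ,     /* another core's read is snooped on the bus           */
        BUS_WRITE     /* another core's write (read-for-ownership) is snooped */
    } mesi_event;

    /* Next state of one cache line; others_have_copy matters only on a
     * local read miss (Exclusive if no other cache holds the line). */
    static mesi_state mesi_next(mesi_state s, mesi_event e, int others_have_copy)
    {
        switch (e) {
        case LOCAL_READ:
            return (s == INVALID) ? (others_have_copy ? SHARED : EXCLUSIVE) : s;
        case LOCAL_WRITE:
            return MODIFIED;              /* S and I first invalidate other copies */
        case BUS_READ:
            return (s == INVALID) ? INVALID : SHARED;   /* M also writes back */
        case BUS_WRITE:
            return INVALID;               /* M writes back before invalidating */
        }
        return s;
    }

    int main(void)
    {
        mesi_state s = INVALID;
        s = mesi_next(s, LOCAL_READ, 0);   /* miss, no other copy -> EXCLUSIVE */
        s = mesi_next(s, LOCAL_WRITE, 0);  /* silent upgrade      -> MODIFIED  */
        s = mesi_next(s, BUS_READ, 0);     /* snooped read        -> SHARED    */
        printf("final state: %d\n", s);    /* prints 2, i.e. SHARED            */
        return 0;
    }

In a real snooping implementation these transitions are triggered by bus transactions observed by every cache controller; the sketch only captures the resulting state changes for one line.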

Techniques for limiting the performance impact of caches are also covered, as is the relationship between cache operation and secondary memories, including the address translations associated with virtual memory.
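As a small illustration of the address translation mentioned above, the snippet below splits a virtual address into a virtual page number and a page offset, assuming 4 KiB pages; the constants and the example address are assumptions chosen for the sketch, not values from the article.

    /* Illustrative only: splitting a virtual address, assuming 4 KiB pages. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT 12                        /* 4 KiB pages (assumption)  */
    #define PAGE_SIZE  (1u << PAGE_SHIFT)

    int main(void)
    {
        uint64_t vaddr  = 0x7ffd1234abcdULL;     /* arbitrary example address */
        uint64_t vpn    = vaddr >> PAGE_SHIFT;   /* virtual page number       */
        uint64_t offset = vaddr & (PAGE_SIZE - 1);  /* offset within the page */

        printf("VPN = 0x%llx, offset = 0x%llx\n",
               (unsigned long long)vpn, (unsigned long long)offset);
        return 0;
    }

A TLB or the page tables map the virtual page number to a physical frame number; the offset within the page is unchanged by translation.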

While the article focuses primarily on hardware techniques for implementing cache hierarchies, the impact of caches on program execution times is highlighted by presenting classic software optimization techniques that take the existence of caches into account.
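As a hedged example of the kind of cache-conscious software optimization referred to here, the following sketch contrasts two traversal orders of the same C array; since C stores two-dimensional arrays row by row, the second version walks memory sequentially and uses each fetched cache line fully, whereas the first touches a new line on almost every access. The array size and function names are illustrative.

    /* Illustrative sketch: the same computation with poor and good cache
     * locality. C stores 2-D arrays row by row. */
    #include <stddef.h>
    #include <stdio.h>

    #define N 1024
    static double a[N][N];

    /* Column-major traversal: consecutive accesses are N*sizeof(double)
     * bytes apart, so nearly every access touches a new cache line. */
    double sum_column_major(void)
    {
        double s = 0.0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                s += a[i][j];
        return s;
    }

    /* Row-major traversal (loops interchanged): consecutive accesses are
     * adjacent in memory, so each cache line is fully used before eviction. */
    double sum_row_major(void)
    {
        double s = 0.0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                s += a[i][j];
        return s;
    }

    int main(void)
    {
        /* Both return the same value; only the memory access order differs. */
        printf("%f %f\n", sum_column_major(), sum_row_major());
        return 0;
    }

Loop interchange of this kind, together with blocking (tiling), is among the classic transformations that compilers and programmers apply to improve spatial and temporal locality.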


KEYWORDS

cache features   |   cache coherency   |   multiprocessor and multicore caches   |   software optimizations


This article is included in

Software technologies and System architectures

