Article | REF: H1088 V2

Introduction to parallelism and parallel architectures

Authors: Franck CAPPELLO, Daniel ETIEMBLE

Publication date: August 10, 2017



Overview

ABSTRACT

Since the early 2000s, parallelism has found use in most computer architectures, from embedded systems to supercomputers. Multi-core processors have replaced uniprocessors. This article describes parallelism and its different types. It presents the main classes of parallel architectures with their resources and memory organizations, in both homogeneous and heterogeneous architectures. The basic parallel programming techniques are introduced with the parallel extensions of commonly used programming languages, and the programming models designed to close the gap with sequential programming, while allowing for the specific features of parallel architectures. Finally, performance evaluation is presented with metrics and performance models.


AUTHORS

  • Franck CAPPELLO: Doctorate in Computer Science from Université Paris Sud - IEEE Fellow

  • Daniel ETIEMBLE: Engineer from INSA Lyon - Professor Emeritus, Université Paris Sud

Editor's note: This article is the updated version of the article [H 1 088] entitled Introduction au parallélisme et aux architectures parallèles, by Franck CAPPELLO and Jean-Paul SANSONNET, which appeared in our editions in 1999.

 INTRODUCTION

The notion of parallelism - using several processors or hardware operators to run one or more programs - is an old one. Multiprocessors date back to the 1960s. From then until the end of the 1990s, parallel architectures were used for applications requiring computing power that single-processor systems could not provide: mainframes and servers on the one hand, and vector and then parallel machines for high-performance scientific computing on the other. The 1980s saw the emergence of a number of companies offering parallel machines, most of which soon disappeared. The main reason was the exponential growth in the performance of the microprocessors used in PCs and multiprocessor servers. Massive use of parallelism was thus limited to very large-scale numerical simulation applications on massively parallel architectures. The early 2000s, with the limits of single-core processors and the "heat wall" (the power wall), completely changed the situation (see [H 1 058]). In 2016, multi-core processors can be found in hardware architectures of all kinds: mobile devices (smartphones, tablets), embedded systems, televisions, laptops and desktop PCs, right up to parallel machines and supercomputers for very high performance.

In this article, we introduce the notion of parallelism and present its different types along with the different forms of parallel architectures. While programming parallel machines was long reserved for specialists, any programmer should now master the essential notions of parallel programming to take advantage of the possibilities these architectures offer. We present the parallel extensions of commonly used programming languages, and the programming models developed to bring parallel programming "closer" to sequential programming techniques while taking into account the specific features of parallel architectures. Finally, parallel architectures are of interest chiefly for the performance they deliver. To optimize this performance and/or reduce energy consumption, it is necessary to model both the parallelism present in an application and the parallel architecture itself. We therefore examine the metrics used to evaluate or predict performance, and the main laws that govern it.
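The best known of these laws is Amdahl's law, cited in the keywords below. As a hedged sketch (the function name and example fractions are illustrative, not taken from the article), it bounds the speedup of a program by its serial fraction, regardless of how many processors are added:

```python
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    """Amdahl's law: speedup = 1 / ((1 - f) + f / p), where f is the
    fraction of the execution time that can be parallelized and p is
    the number of processors."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processors)

# With 90 % of the work parallelizable, speedup saturates near 1/0.1 = 10,
# no matter how many processors are used.
for p in (2, 8, 64, 1_000_000):
    print(f"p = {p:>7}: speedup = {amdahl_speedup(0.9, p):.2f}")
```

The example shows why performance modeling matters: beyond a certain processor count, the serial fraction dominates and extra hardware mostly adds energy cost, not speed.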




KEYWORDS

data and control parallelism   |   SIMD extensions   |   Flynn's taxonomy   |   shared and distributed memories   |   execution models   |   programming models   |   OpenMP   |   MPI   |   pThreads   |   Amdahl's law   |   Roofline model


This article is included in

Mathematics

