Article | REF: AF1385 V1

Parallel Asynchronous Algorithms I. Modelling and Analysis

Authors: Pierre SPITERI, Jean-Claude MIELLOU

Publication date: June 10, 2021



Overview


ABSTRACT

This paper presents parallel asynchronous iterative algorithms, together with their extensions to subdomain and multisplitting methods, for the solution of large algebraic linear or pseudo-linear systems, possibly subject to constraints. The behavior of these algorithms is analyzed in three ways: through a contraction property of the fixed-point mapping associated with the system of equations to be solved, through monotone convergence properties, and through properties of the successive nested sets in which the components updated by the iterative algorithm lie. The link between these three types of analysis is also presented.
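
As a point of reference, the general asynchronous iteration model underlying this line of work can be sketched as follows; the notation below is the standard one from the literature and is only assumed here, not taken from the article. The system to be solved is written as a fixed-point problem x = F(x), with x partitioned into m blocks, and an asynchronous iteration reads

\[
x_i^{k+1} =
\begin{cases}
F_i\bigl(x_1^{\rho_1(k)}, \dots, x_m^{\rho_m(k)}\bigr) & \text{if } i \in s(k),\\
x_i^{k} & \text{otherwise,}
\end{cases}
\]

where \(s(k) \subset \{1,\dots,m\}\) lists the blocks updated at step \(k\) and \(\rho_j(k) \le k\) indexes the possibly outdated version of block \(j\) that is read, under the usual assumptions that every block is updated infinitely often and that \(\rho_j(k) \to \infty\) as \(k \to \infty\). When F is contracting with respect to a weighted maximum norm, every iteration of this form converges to the unique fixed point; this is the kind of contraction argument referred to in the abstract.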


AUTHORS

  • Pierre SPITERI: Professor Emeritus - University of Toulouse, INP-ENSEEIHT – IRIT, Toulouse, France

  • Jean-Claude MIELLOU: Honorary Professor - University of Bourgogne Franche-Comté, Department of Mathematics, Besançon, France

INTRODUCTION

The first computers consisted essentially of a memory for storing programs and the data they required, an arithmetic and logic unit for carrying out arithmetic and logical operations, and exchange units for communicating with peripheral devices (disks, printers, etc.). Since such an architecture contains a single arithmetic and logic unit, a computational code runs sequentially on that single resource: instructions are executed one after the other, only one at a time, even when the operations to be performed are independent. This sequential mode of execution quickly reached its limits for computations that require large amounts of memory and cannot deliver results within a reasonable time. For large applications such as meteorology, sequential programming is therefore no longer suitable, and its performance is not sufficient to obtain simulation results in relatively short times.

Parallel computing emerged to overcome these limitations by making the best possible use of computing and memory resources. The approach is to parallelize codes, that is, to perform several calculations simultaneously on different resources, each built around an arithmetic and logic unit (a processor), so that a greater number of operations can be carried out in a minimum amount of time. To parallelize a computational code, the user breaks the overall problem down into several smaller, coupled sub-problems. Several computational tasks can then be processed simultaneously on the available processors, the coupling between sub-problems being handled by exchanges of results between the processes cooperating in parallel. Provided the parallelized code is optimized, execution times can be reduced significantly, making it possible to solve larger problems and to process larger volumes of data.
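
To make the decomposition idea concrete, here is a minimal sketch, not taken from the article, of a block-Jacobi iteration for a linear system A x = b in Python: the unknowns are split into blocks, each block is updated by a worker in a thread pool, and the blocks exchange their results through the shared iterate between sweeps. The matrix, block sizes and sweep count are illustrative choices only.

from concurrent.futures import ThreadPoolExecutor

import numpy as np


def block_jacobi(A, b, blocks, sweeps=100):
    """Approximate the solution of A x = b by parallel block-Jacobi sweeps."""
    x = np.zeros_like(b, dtype=float)

    def update(idx, x_old):
        # Each sub-problem solves its own diagonal block against the values
        # produced by the other blocks at the previous sweep (the coupling).
        A_ii = A[np.ix_(idx, idx)]
        rhs = b[idx] - A[idx] @ x_old + A_ii @ x_old[idx]
        return idx, np.linalg.solve(A_ii, rhs)

    with ThreadPoolExecutor() as pool:
        for _ in range(sweeps):
            x_old = x.copy()
            # All blocks are processed simultaneously by the worker threads.
            for idx, x_block in pool.map(lambda i: update(i, x_old), blocks):
                x[idx] = x_block
    return x


if __name__ == "__main__":
    n = 8
    rng = np.random.default_rng(0)
    # A strictly diagonally dominant test matrix, so the iteration converges.
    A = 8.0 * np.eye(n) + rng.uniform(-0.5, 0.5, (n, n))
    b = np.ones(n)
    x = block_jacobi(A, b, blocks=[np.arange(0, 4), np.arange(4, 8)])
    print("residual norm:", np.linalg.norm(A @ x - b))

This sketch is synchronous: every block waits for the shared iterate to be refreshed before starting the next sweep. The asynchronous variants studied in the article relax precisely this synchronization, letting each worker keep iterating with whatever versions of the other blocks are currently available.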

This new mode of computing requires rethinking the architecture of early computers, adapting programs to a parallel execution model, and learning new programming languages and tools.

Supercomputer architectures therefore evolved, first toward shared-memory machines, in which all processors have access to a common memory. However, such machines are expensive to build and do not offer the massive parallelism required for large-scale applications. Manufacturers consequently designed distributed-memory machines, in which each processor has its own local memory and the processors are linked by an interconnection network; the latter enables messages to be exchanged between...
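
As a small illustration of this distributed-memory pattern (not taken from the article), the following Python sketch runs two processes that each keep their data in their own local memory and are coupled only by explicit messages, here carried by a multiprocessing pipe standing in for the interconnection network; on a real supercomputer a message-passing library such as MPI would play that role. The data and the toy update rule are arbitrary.

from multiprocessing import Pipe, Process


def worker(name, local_data, conn, steps=3):
    # local_data lives in this process's own memory; the only coupling with
    # the other process is the explicit message exchange below.
    for _ in range(steps):
        conn.send(local_data[-1])      # send a boundary value to the neighbour
        neighbour_value = conn.recv()  # receive the neighbour's boundary value
        # A toy local update combining local memory and the received message.
        local_data = [0.5 * (v + neighbour_value) for v in local_data]
    print(name, "final local data:", local_data)


if __name__ == "__main__":
    end_0, end_1 = Pipe()  # stands in for the interconnection network
    p0 = Process(target=worker, args=("proc-0", [1.0, 2.0, 3.0], end_0))
    p1 = Process(target=worker, args=("proc-1", [4.0, 5.0, 6.0], end_1))
    p0.start()
    p1.start()
    p0.join()
    p1.join()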


KEYWORDS

high performance computing   |   subdomain method   |   multisplitting method   |   large algebraic linear or pseudo-linear systems


This article is included in

Mathematics
