1. Introduction
Computer processors work exclusively with binary digits (0 and 1), called bits, so numbers must be encoded in binary before they can be processed. The number of bits used for the encoding determines how many distinct numbers can be represented: with N bits, 2^N different values are available. The most commonly used encodings are integer and floating-point numbers, found in all general-purpose processors, alongside a number of representations found in more specialized processors, such as signal processors.
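As a quick illustration (a minimal C sketch, not taken from the article), the following program prints the number of representable values for a few common bit widths, together with the corresponding unsigned and two's-complement signed ranges:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* With N bits, 2^N distinct values can be encoded:
       unsigned integers cover 0 .. 2^N - 1,
       two's-complement signed integers cover -2^(N-1) .. 2^(N-1) - 1. */
    for (unsigned n = 1; n <= 16; n *= 2) {
        uint64_t count = (uint64_t)1 << n;   /* 2^n */
        printf("%2u bits -> %llu values (unsigned 0..%llu, signed %lld..%lld)\n",
               n,
               (unsigned long long)count,
               (unsigned long long)(count - 1),
               -(long long)(count / 2),
               (long long)(count / 2 - 1));
    }
    return 0;
}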
32-bit general-purpose processors provide 32-bit integers, along with 16-bit and 8-bit subsets, as well as 32-bit and 64-bit floating-point numbers. 64-bit general-purpose processors add 64-bit integers to this list.
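For illustration only (a minimal C sketch, assuming a conventional 32- or 64-bit platform with <stdint.h> and IEEE 754 floating point), these standard types correspond to the integer and floating-point widths just listed:

#include <stdio.h>
#include <stdint.h>
#include <float.h>

int main(void)
{
    /* Fixed-width integer types typically mapped to the hardware
       integer formats of a 32- or 64-bit general-purpose processor. */
    printf("int8_t : %zu bits\n", 8 * sizeof(int8_t));
    printf("int16_t: %zu bits\n", 8 * sizeof(int16_t));
    printf("int32_t: %zu bits\n", 8 * sizeof(int32_t));
    printf("int64_t: %zu bits\n", 8 * sizeof(int64_t));

    /* IEEE 754 binary32 and binary64 on virtually all such processors. */
    printf("float  : %zu bits, %d significant decimal digits\n",
           8 * sizeof(float), FLT_DIG);
    printf("double : %zu bits, %d significant decimal digits\n",
           8 * sizeof(double), DBL_DIG);
    return 0;
}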
To cope with constraints on performance, energy consumption and memory footprint, 16-bit and 8-bit...