Monday 11 August 2014

Decimal computation

Decimal computation was carried out in ancient times in many ways: typically with rod calculus and decimal multiplication tables in ancient China, with sand tables in India and the Middle East, or with a variety of abaci.

Modern computer hardware and software systems commonly use a binary representation internally (although many early computers, such as the ENIAC or the IBM 650, used decimal representation internally).[5] For external use by computer specialists, this binary representation is sometimes presented in the related octal or hexadecimal systems.
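
As a quick illustration, here is a short Python sketch (the choice of language is mine, not part of the original text) showing the same internally binary value rendered in binary, octal, and hexadecimal for human inspection:

    n = 0b11111010          # the value 250, written as a binary literal
    print(bin(n))           # 0b11111010
    print(oct(n))           # 0o372  -- each octal digit groups three bits
    print(hex(n))           # 0xfa   -- each hex digit groups four bits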

For most purposes, however, binary values are converted to or from the equivalent decimal values for presentation to or input from humans; computer programs express literals in decimal by default. (123.1, for example, is written as such in a computer program, even though many computer languages are unable to represent that number exactly.)
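
To see why, the following Python sketch (my own illustration) prints the binary floating-point value that the literal 123.1 actually produces; the stored value is close to, but not exactly, 123.1:

    from decimal import Decimal

    x = 123.1                # a decimal literal in source code
    print(Decimal(x))        # the exact binary value stored, slightly off from 123.1
    print(f"{x:.20f}")       # the same effect, shown with 20 digits after the point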

Both computer hardware and software also use internal representations which are effectively decimal for storing decimal values and doing arithmetic. Often this arithmetic is done on data that are encoded using some variant of binary-coded decimal,[6] especially in database implementations, but there are other decimal representations in use (such as the decimal formats in the newer IEEE 754 Standard for Floating-Point Arithmetic).[7]
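
As a rough sketch of the binary-coded-decimal idea (packed BCD, one decimal digit per 4-bit nibble), here is a small Python illustration; the helper names to_packed_bcd and from_packed_bcd are my own, not a standard API:

    # Packed BCD: each decimal digit occupies its own 4-bit nibble,
    # two digits per byte. Illustrative sketch only.
    def to_packed_bcd(number_string):
        digits = [int(d) for d in number_string]
        if len(digits) % 2:              # pad to an even digit count
            digits.insert(0, 0)
        return bytes((hi << 4) | lo for hi, lo in zip(digits[::2], digits[1::2]))

    def from_packed_bcd(data):
        return "".join(f"{b >> 4}{b & 0x0F}" for b in data)

    encoded = to_packed_bcd("1995")
    print(encoded.hex())             # '1995' -- the nibbles mirror the decimal digits
    print(from_packed_bcd(encoded))  # '1995'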

Decimal arithmetic is used in computers so that decimal fractional results can be computed exactly, which is not possible using a binary fractional representation. This is often important for financial and other calculations.[8]
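
A small Python comparison (again my own sketch) shows the difference: the binary float sum of 0.10 and 0.20 is not exactly 0.30, while the decimal module, which performs decimal rather than binary arithmetic, gives the exact result:

    from decimal import Decimal

    print(0.10 + 0.20)                        # 0.30000000000000004 with binary floats
    print(Decimal("0.10") + Decimal("0.20"))  # 0.30 exactly with decimal arithmetic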
