[This article is based on the C# language]
Understanding Decimal Precision in C#
C# has the following data types for handling mathematical decimal precision:
C# type | Approximate range | Precision | Literal suffix |
---|---|---|---|
float | ±1.5 × 10⁻⁴⁵ to ±3.4 × 10³⁸ | ~6–9 digits | The `f` or `F` suffix converts a literal to a float. |
double | ±5.0 × 10⁻³²⁴ to ±1.7 × 10³⁰⁸ | ~15–17 digits | The `d` or `D` suffix converts a literal to a double. |
decimal | ±1.0 × 10⁻²⁸ to ±7.9228 × 10²⁸ | 28–29 significant digits | The `m` or `M` suffix converts a literal to a decimal. |
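The suffixes in the table determine a literal's type at compile time; a minimal sketch:

```csharp
using System;

float   f = 1.5f; // 'f'/'F' suffix -> float
double  d = 1.5d; // 'd'/'D' suffix -> double; a plain 1.5 is already a double
decimal m = 1.5m; // 'm'/'M' suffix -> decimal

Console.WriteLine(f.GetType()); // System.Single
Console.WriteLine(d.GetType()); // System.Double
Console.WriteLine(m.GetType()); // System.Decimal
```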
Now, consider the following calculations involving each of these types:
```csharp
0.000123456789012345 + 1   // = 1.0001234567890123 (double)
0.000123456789012345m + 1m // = 1.000123456789012345m (decimal)
0.000123456789012345f + 1f // = 1.0001235f (float)
```

```csharp
1.0 + 1.0 / 9000.0 - 1.0     // = 0.00011111111111117289 (double)
1.0m + 1.0m / 9000.0m - 1.0m // = 0.0001111111111111111111111111m (decimal)
1.0f + 1.0f / 9000.0f - 1.0f // = 0.000111111112f (float)
```

```csharp
1.333 + 1.225 - 1.333 - 1.225     // = -0.00000000000000022204460492503131 (double)
1.333m + 1.225m - 1.333m - 1.225m // = 0m (decimal)
1.333f + 1.225f - 1.333f - 1.225f // = 0f (float)
```

```csharp
1 + 0.000000000000001665280326829110 - 1    // = 0.0000000000000015543122344752192 (double)
1m + 0.000000000000001665280326829110m - 1m // = 0.0000000000000016652803268291m (decimal)
1f + 0.000000000000001665280326829110f - 1f // = 0.00000000000000155431223f (float)
```

```csharp
(43.1 - 43.2) + 1    // = 0.89999999999999858 (double) **anomaly
(43.1m - 43.2m) + 1m // = 0.9m (decimal)
(43.1f - 43.2f) + 1f // = 0.8999977f (float) **anomaly
```
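The anomaly rows can be reproduced with a few lines of code; a sketch (the "R" format specifier asks a binary floating-point value for its full round-trip digits):

```csharp
using System;

double d = (43.1 - 43.2) + 1;    // binary types cannot store 43.1 or 43.2 exactly
float  f = (43.1f - 43.2f) + 1f;
decimal m = (43.1m - 43.2m) + 1m;

Console.WriteLine(d.ToString("R")); // shows the stray trailing digits, not 0.9
Console.WriteLine(f.ToString("R"));
Console.WriteLine(m);               // 0.9
```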
Although displayed results can extend to 30 or more digits, the sample calculations above show that digits beyond a given data type's precision range are superfluous.
Hence the RULE: in C#, apply Round() to discard precision errors beyond the type's precision range. This 'RULE' applies equally in other languages that use the same floating-point types, such as Java and JavaScript.
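Applying the rule to the anomaly above, a minimal sketch using Math.Round with a digit count safely inside double's ~15–17-digit precision range:

```csharp
using System;

double raw = (43.1 - 43.2) + 1;       // 0.89999999999999858... (superfluous trailing digits)
double rounded = Math.Round(raw, 12); // keep 12 decimal places, well within double's precision

Console.WriteLine(rounded);           // 0.9
Console.WriteLine(rounded == 0.9);    // True
```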
In C#, the Decimal data type gives 28–29 significant digits of precision without loss for base-10 values in its range. The trade-off is speed: decimal arithmetic runs in software rather than in the CPU's floating-point hardware, so it can be one to two orders of magnitude slower than the other two types.
The Double data type is designed to balance precision against speed, consuming the least CPU processing. When a Double computation shows digits beyond its designed precision range, that is superfluous precision build-up which has to be discarded by rounding off. Double is the default data type for decimal-point literals in C#.
The Float data type processes marginally faster than Double but has less precision. It too suffers from superfluous precision build-up.
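A common way to choose between the types follows from the above: decimal for base-10-exact work such as money, double for general numeric work. A sketch of the difference:

```csharp
using System;

decimal price  = 0.1m + 0.2m; // decimal stores base-10 fractions exactly
double  approx = 0.1 + 0.2;   // double stores the nearest binary fraction

Console.WriteLine(price == 0.3m); // True
Console.WriteLine(approx == 0.3); // False: 0.1 + 0.2 is 0.30000000000000004 in double
```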