How to read the %21x format, part 2
In my previous posting last week, I explained how computers store binary floating-point numbers, how Stata’s %21x display format displays those binary floating-point numbers with fidelity, how %21x can help you uncover bugs, and how %21x can help you understand behaviors that are not bugs even though they are surprising to us base-10 thinkers. The point is that it is sometimes useful to think in binary, and with %21x, thinking in binary is not difficult.
This week, I want to discuss double versus float precision.
Double (8-byte) precision provides 53 binary digits. Float (4-byte) precision provides 24. Let me show you what float precision looks like.
. display %21x sqrt(2) _newline %21x float(sqrt(2))
+1.6a09e667f3bcdX+000
+1.6a09e60000000X+000
All those zeros in the floating-point result are not really there;
%21x merely padded them on. The display would be more honest if it were
+1.6a09e6       X+000
Of course, +1.6a09e60000000X+000 is a perfectly valid way of writing +1.6a09e6X+000 — just as 1.000 is a valid way of writing 1 — but you must remember that float has fewer digits than double.
Hexadecimal 1.6a09e6 is a rounded version of 1.6a09e667f3bcd, and you can think of this in one of two ways:
         double = float   + extra precision
1.6a09e667f3bcd = 1.6a09e6 + 0.00000067f3bcd
or
  float = double          - lost precision
1.6a09e6 = 1.6a09e667f3bcd - 0.00000067f3bcd
Note that more digits are lost than appear in the float result! The float result provides six hexadecimal digits (ignoring the 1), yet seven digits appear under the heading lost precision. Double precision is more than twice float precision: double provides 53 binary digits and float provides 24, so double precision is really 53/24 = 2.208333 times float precision.
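You can verify the lost-precision arithmetic in Stata itself. Both operands below are doubles (float() rounds its argument but returns the result as a double), so the subtraction is exact. If my hand calculation is right, the difference is just 0.00000067f3bcdX+000 renormalized; the digits regroup because the new exponent, -26, is not a multiple of 4:

. display %21x sqrt(2) - float(sqrt(2))
+1.9fcef34000000X-01a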
The double in double precision refers to the total number of binary digits used to store the mantissa and the exponent of z=a*2^b, which is 64 versus 32. The precision, meaning the number of digits in a, is 53 versus 24.
In this case, we obtained the float-precision result from float(sqrt(2)), meaning that we rounded a more accurate double-precision result. One usually rounds when producing a less precise representation. The usual rounding rule in base-10 is to round up if the digits being omitted (with a decimal point in front of them) exceed 1/2, meaning 0.5 in decimal. The equivalent rule in base-16 is to round up if the digits being omitted (with a hexadecimal point in front of them) exceed 1/2, meaning 0.8 in base-16. The lost digits here were .67f3bcd, which is less than 0.8, and therefore the last digit of the rounded result was not adjusted.
Actually, rounding to float precision is more difficult than I make out, and seeing that numbers are rounded correctly when displayed in %21x can be difficult. These difficulties have to do with the relationship between base-2 — the base in which the computer works — and base-16 — a base similar but not identical to base-2 that we humans find more readable. The fact is that %21x was designed for double precision, so it only does an adequate job of showing single precision. When %21x displays a float-precision number, it shows you the exactly equal double-precision number, and that turns out to matter.
We use base-16 because it is easier to read. But why do we use base-16 and not base-15 or base-17? We use base-16 because it is an integer power of 2, the base the computer uses. One advantage of bases being powers of each other is that base conversion can be done more easily. In fact, conversion can be done almost digit by digit. Doing base conversion is usually a tedious process. Try converting 2394 (base-10) to base-11. Well, you say, 11^3=1331, and 2*1331 = 2662 > 2394, so the first digit is 1 and the remainder is 2394-1331 = 1063. Now, repeating the process with 1063, I observe that 11^2 = 121 and that 1063 is bounded by 8*121=968 and 9*121=1089, so the second digit is 8 and I have a remainder of …. And eventually you produce the answer 1887 (base-11).
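If you would rather let the computer do the tedious work, Mata’s inbase() function (see [M-5] inbase()) performs exactly this kind of conversion; a quick check of the example above should confirm the arithmetic:

. mata: inbase(11, 2394)
  1887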
Converting between bases when one base is a power of the other is not only easier, it is so easy that you can do it in your head. To convert from base-2 to base-16, group the binary digits into sets of four (because 2^4=16) and then translate each group individually.
For instance, to convert 011110100010, proceed as follows:
0111 1010 0010
--------------
   7    a    2
I’ve performed this process often enough that I hardly have to think. But here is how you should think: Divide the binary number into four-digit groups. The four columns of the binary number stand for 8, 4, 2, and 1. When you look at 0111, say to yourself 4+2+1 = 7. When you look at 1010, say to yourself 8+2 = 10, and remember that the digit for 10 in base-16 is a.
Converting back is nearly as easy:
   7    a    2
--------------
0111 1010 0010
Look at 7 and remember the binary columns 8-4-2-1. Though 7 does not contain an 8, it does contain a 4 (leaving 3), and 3 contains a 2 and a 1.
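You can also let Mata check both directions of the conversion. frombase() reads a string as a number in the given base, and inbase() writes one out (see [M-5] inbase()); note that inbase() drops the leading zero we wrote above:

. mata: inbase(16, frombase(2, "011110100010"))
  7a2

. mata: inbase(2, frombase(16, "7a2"))
  11110100010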
I admit that converting base-16 to base-2 is more tedious than converting base-2 to base-16, but eventually, you’ll have the four-digit binary table memorized; there are only 16 lines. Say 7 to me, and 0111 just pops into my head. Well, I’ve been doing this a long time, and anyway, I’m a geek. I suspect I carry the as-yet-undiscovered binary gene, which means I came into this world with the base-2-to-base-16 conversion table hardwired:
base-2 | base-16
-------|--------
0000   | 0
0001   | 1
0010   | 2
0011   | 3
0100   | 4
…      | …
1001   | 9
1010   | a
…      | …
1111   | f
Now that you can convert base-2 to base-16 — convert from binary to hexadecimal — and you can convert back again, let’s return to floating-point numbers.
Remember how floating-point numbers are stored:
z = a * 2^b, 1<=a<2 or a==0
For example,
    0.0 = 0.0000000000000000000000000000000000000000000000000000 * 2^-big
    0.5 = 1.0000000000000000000000000000000000000000000000000000 * 2^-1
    1.0 = 1.0000000000000000000000000000000000000000000000000000 * 2^0
sqrt(2) = 1.0110101000001001111001100110011111110011101111001101 * 2^0
    1.5 = 1.1000000000000000000000000000000000000000000000000000 * 2^0
    2.0 = 1.0000000000000000000000000000000000000000000000000000 * 2^1
    2.5 = 1.0100000000000000000000000000000000000000000000000000 * 2^1
    3.0 = 1.1000000000000000000000000000000000000000000000000000 * 2^1
    _pi = 1.1001001000011111101101010100010001000010110100011000 * 2^1
    etc.
In double precision, there are 53 binary digits of precision. One of the digits is written to the left of the binary point, and the remaining 52 are written to the right. Next observe that the 52 binary digits to the right of the binary point can be written as 52/4=13 hexadecimal digits. That is exactly what %21x does:
    0.0 = +0.0000000000000X-3ff
    0.5 = +1.0000000000000X-001
    1.0 = +1.0000000000000X+000
sqrt(2) = +1.6a09e667f3bcdX+000
    1.5 = +1.8000000000000X+000
    2.0 = +1.0000000000000X+001
    2.5 = +1.4000000000000X+001
    3.0 = +1.8000000000000X+001
    _pi = +1.921fb54442d18X+001
You could perform the binary-to-hexadecimal translation yourself. Consider _pi. The first group of four binary digits after the binary point is 1001, and 9 appears first after the hexadecimal point in the %21x result. The second group of four is 0010, and 2 appears next in the %21x result.
The %21x result is an exact representation of the underlying binary, and thus you are equally entitled to think in either base.
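In fact, you can make the computer perform the regrouping for _pi’s digits. Here I feed the 52 binary digits from the list above to Mata’s frombase() and ask inbase() for base-16; the value fits well below 2^53, so the conversion is exact, and the 13 hexadecimal digits that %21x reported should come back:

. mata: inbase(16, frombase(2, "1001001000011111101101010100010001000010110100011000"))
  921fb54442d18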
In single precision, the rule is the same:
z = a * 2^b, 1<=a<2 or a==0
But this time, only 24 binary digits are provided for a, and so we have
    0.0 = 0.00000000000000000000000 * 2^-big
    0.5 = 1.00000000000000000000000 * 2^-1
    1.0 = 1.00000000000000000000000 * 2^0
sqrt(2) = 1.01101010000010011110011 * 2^0
    1.5 = 1.10000000000000000000000 * 2^0
    2.0 = 1.00000000000000000000000 * 2^1
    2.5 = 1.01000000000000000000000 * 2^1
    3.0 = 1.10000000000000000000000 * 2^1
    _pi = 1.10010010000111111011011 * 2^1
    etc.
In single precision, there are 24-1=23 binary digits of precision to the right of the binary point, and 23 is not divisible by 4. If we try to convert to base-16, we end up with
sqrt(2) = 1.0110 1010 0000 1001 1110 011 * 2^0
          1.   6    a    0    9    e   ? * 2^0
To fill in the last digit, we can pad an extra 0 on the right because we are to the right of the binary point; 1.101 == 1.1010, for example. With the extra 0 padded on, we have
sqrt(2) = 1.0110 1010 0000 1001 1110 0110 * 2^0
          1.   6    a    0    9    e    6 * 2^0
That is precisely the result %21x shows us:
. display %21x float(sqrt(2))
+1.6a09e60000000X+000
although we might wish that %21x would omit the 0s that aren’t really there, and instead display this as +1.6a09e6X+000.
The problem with this solution is that it can be misleading because the last digit looks like it contains four binary digits when in fact it contains only three. To show how easily you can be misled, look at _pi in double and float precisions:
. display %21x _pi _newline %21x float(_pi)
+1.921fb54442d18X+001
+1.921fb60000000X+001
        ^
        digit incorrectly rounded?
The computer rounded the last digit up from 5 to 6. The digits after the rounded-up digit in the full-precision result, however, are 0.4442d18, and are clearly less than 0.8 (1/2). Shouldn’t the rounded result be 1.921fb5X+001? The answer is that yes, 1.921fb5X+001 would be a better result if we had 6*4=24 binary digits to the right of the binary point. But we have only 23 digits; correctly rounding to 23 binary digits and then translating into base-16 results in 1.921fb6X+001. Because of the missing binary digit, the last base-16 digit can only take on the values 0, 2, 4, 6, 8, a, c, and e.
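Here is an example you can try where the even-digit effect is easy to see. The binary expansion of 1/3 is 0101 repeating, so rounding to 23 binary digits forces the last kept digit up, and the final base-16 digit of the float result comes out 6, an even digit (my hand calculation; run it to check me):

. display %21x 1/3 _newline %21x float(1/3)
+1.5555555555555X-002
+1.5555560000000X-002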
The computer performs the rounding in binary. Look at the relevant piece of this double-precision number in binary:
+1.921f b    5    4    4    4    2d18X+001     number
        1011 0101 0100 0100 0100               expansion into binary
        1011 01?x xxxx xxxx xxxx xxxx          thinking about rounding
        1011 011x xxxx xxxx xxxx xxxx          performing rounding
+1.921f b    6                       X+001     convert to base-16
The part I converted to binary in the second line surrounds the digit to be rounded. In the third line, I have put x’s under the part we must discard to round this double into a float. The x’d out part, 10100…, is clearly greater than 1/2, so the last kept digit (where I put the question mark) must be rounded up. Thus, _pi in float precision rounds to 1.921fb6X+001, just as the computer said.
Float precision does not play much of a role in Stata, despite the fact that most users store their data as floats. Regardless of how data are stored, Stata makes all calculations in double precision, and float provides more than enough precision for most data applications.

The U.S. deficit in 2011 is projected to be $1.5 trillion. One hopes that a grand total of $26,624 (the error that would be introduced by storing this projected deficit in float precision) would not be a significant factor in any lawmaker’s decision concerning the issue.

People in the U.S. are said to work about 40 hours per week, or roughly 0.238 of the hours in a week. I doubt that number is accurate to 0.4 milliseconds, the error that float would introduce in recording the fraction. A cancer survivor might live 350.1 days after a treatment, but we would introduce an error of roughly 1/2 second if we recorded the number as a float. One might question whether the instant of death could even conceptually be determined that accurately.

The moon is said to be 384.401 thousand kilometers from the Earth. Record that in thousands of kilometers in float, and the error is almost 1 meter. At its closest and farthest, the moon is 356,400 and 406,700 kilometers away.

Most fundamental constants of the universe are known only to a few parts in a million, which is to say, to less than float precision, although we do know the speed of light in a vacuum to one decimal digit beyond float accuracy; it is 299,792.458 kilometers per second. Round that to float and you will be off by 0.01 km/s.
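That $26,624 figure, by the way, is easy to reproduce. float() rounds to the nearest single-precision value, and the subtraction is then exact in double precision:

. display float(1.5e12) - 1.5e12
26624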
The largest integer that can be recorded without rounding in float precision is 16,777,215, which is 2^24 - 1. The largest integer that can be recorded without rounding in double precision is 9,007,199,254,740,991, which is 2^53 - 1.
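You can watch the float limit bite: 16,777,215 survives the round trip through float() intact, but 16,777,217 comes back rounded to 16,777,216:

. display %15.0f float(16777215) _newline %15.0f float(16777217)
       16777215
       16777216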
People working with dollar-and-cent data in Stata usually find it best to use doubles, both to reduce rounding issues and in case the total exceeds $167,772.15. Rounding issues with amounts like 0.01 and 0.02 are inherent in binary floating point, regardless of precision, because such decimal fractions have no exact binary representation. To avoid all problems, record the amounts in pennies and store them as doubles. That approach has no difficulty with sums up to $90,071,992,547,409.91, which is to say, about $90 trillion. That’s nine quadrillion pennies. In my childhood, I thought a quadrillion just meant a lot, but it has a formal definition.
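To see why the penny amounts themselves are the problem, display one in %21x. The repeating 47ae1 pattern (the last digit is rounded) shows that 1/100, like 1/3, has no terminating binary representation, so no amount of precision stores $0.01 exactly:

. display %21x 0.01
+1.47ae147ae147bX-007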
All of which is a long way from where I started, but now you are an expert in understanding binary floating-point numbers the way a scientific programmer needs to understand them: z=a*2^b. You are nearly all the way to understanding the IEEE 754-2008 standard. That standard merely states how a and b are packed into 32 and 64 bits, and the entire point of %21x is to avoid those details because, packed together, the numbers are unreadable by humans.