Archive

Posts Tagged ‘format’

The Penultimate Guide to Precision

There have recently been occasional questions on precision and storage types on Statalist despite all that I have written on the subject, much of it posted in this blog. I take that as evidence that I have yet to produce a useful, readable piece that addresses all the questions researchers have.

So I want to try again. This time I’ll try to write the ultimate piece on the subject, making it as short and snappy as possible, and addressing every popular question of which I am aware — including some I haven’t addressed before — and doing all that without making you wade with me into all the messy details, which I know I have a tendency to do.

I am hopeful that from now on, every question that appears on Statalist that even remotely touches on the subject will be answered with a link back to this page. If I succeed, I will place this in the Stata manuals and get it indexed online in Stata so that users can find it the instant they have questions.

What follows is intended to provide everything scientific researchers need to know to judge the effect of storage precision on their work, to know what can go wrong, and to prevent that. I don’t want to raise expectations too much, however, so I will entitle it …

THE PENULTIMATE GUIDE TO PRECISION

  Contents

    1. Numeric types
    2. Floating-point types
    3. Integer types
    4. Integer precision
    5. Floating-point precision
    6. Advice concerning 0.1, 0.2, …
    7. Advice concerning exact data, such as currency data
    8. Advice for programmers
    9. How to interpret %21x format (if you care)
    10. Also see

  1. Numeric types

    1.1 Stata provides five numeric types for storing variables, three of them integer types and two of them floating point.

    1.2 The floating-point types are float and double.

    1.3 The integer types are byte, int, and long.

    1.4 Stata uses these five types for the storage of data.

    1.5 Stata makes all calculations in double precision (and sometimes quad precision) regardless of the type used to store the data.
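
    If you want to see the five types side by side, here is a minimal sketch you can run (the variable names are invented for illustration):

    . clear
    . set obs 1
    . generate byte   b = 1
    . generate int    i = 1
    . generate long   l = 1
    . generate float  f = 1
    . generate double d = 1
    . describe b i l f d

    describe reports the storage type of each variable.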

  2. Floating-point types

    2.1 Stata provides two IEEE 754-2008 floating-point types: float and double.

    2.2 float variables are stored in 4 bytes.

    2.3 double variables are stored in 8 bytes.

    2.4 The ranges of float and double variables are

         Storage
         type             minimum                maximum
         -----------------------------------------------------
         float     -1.70141173319e+38       1.70141173319e+38
         double    -8.98846567431e+307      8.98846567431e+307
         -----------------------------------------------------
         In addition, float and double can record missing values 
         ., .a, .b, ..., .z.

    The above values are approximations. For those familiar with %21x floating-point hexadecimal format, the exact values are

         Storage
         type                   minimum                maximum
         ------------------------------------------------------- 
         float   -1.fffffe0000000X+07e     +1.fffffe0000000X+07e 
         double  -1.fffffffffffffX+3fe     +1.fffffffffffffX+3fe
         -------------------------------------------------------

    Said differently, and less precisely, float values are in the open interval (-2^127, 2^127), and double values are in the open interval (-2^1023, 2^1023). This is less precise because the intervals shown in the tables are closed intervals.
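
    If you want to verify these limits in your own Stata, the c() system values record them; a quick sketch:

    . display %21x c(maxfloat)
    +1.fffffe0000000X+07e

    . display %21x c(maxdouble)
    +1.fffffffffffffX+3fe

    c(minfloat) and c(mindouble) report the corresponding minimums.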

  3. Integer types

    3.1 Stata provides three integer storage formats: byte, int, and long. They are 1 byte, 2 bytes, and 4 bytes, respectively.

    3.2 Integers may also be stored in Stata’s IEEE 754-2008 floating-point storage formats float and double.

    3.3 Integer values may be stored precisely over the ranges

         storage
         type                   minimum                 maximum
         ------------------------------------------------------
         byte                      -127                     100
         int                    -32,767                  32,740
         long            -2,147,483,647           2,147,483,620
         ------------------------------------------------------
         float              -16,777,216              16,777,216
         double  -9,007,199,254,740,992   9,007,199,254,740,992
         ------------------------------------------------------
         In addition, all storage types can record missing values
         ., .a, .b, ..., .z.

    The overall ranges of float and double were shown in (2.4) and are wider than the ranges for them shown here. The ranges shown here are the subsets of the overall ranges over which no rounding of integer values occurs.

  4. Integer precision

    4.1 (Automatic promotion.) For the integer storage types — for byte, int, and long — numbers outside the ranges listed in (3.3) would be stored as missing (.) except that storage types are promoted automatically. As necessary, Stata promotes bytes to ints, ints to longs, and longs to doubles. Even if a variable is a byte, the effective range is still [-9,007,199,254,740,992, 9,007,199,254,740,992] in the sense that you could change a value of a byte variable to a large value and that value would be stored correctly; the variable that was a byte would, as if by magic, change its type to int, long, or double if that were necessary.
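
    Here is promotion at work, a minimal sketch (the variable name is invented):

    . clear
    . set obs 1
    . generate byte x = 100

    . replace x = 16777216
    (1 real change made)

    . describe x

    describe now reports x as a long, not a byte.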

    4.2 (Data input.) Automatic promotion (4.1) applies after the data are input/read/imported/copied into Stata. When first reading, importing, copying, or creating data, it is your responsibility to choose appropriate storage types. Be aware that Stata’s default storage type is float, so if you have large integers, it is usually necessary to specify explicitly the types you wish to use.

    If you are unsure of the type to specify for your integer variables, specify double. After reading the data, you can use compress to demote storage types. compress never results in a loss of precision.

    4.3 Note that you can use the floating-point types float and double to store integer data.

    4.3.1 Integers outside the range [-2,147,483,647, 2,147,483,620] must be stored as doubles if they are to be precisely recorded.

    4.3.2 Integers can be stored as float, but avoid doing that unless you are certain they will be inside the range [-16,777,216, 16,777,216] not just when you initially read, import, or copy them into Stata, but subsequently as you make transformations.

    4.3.3 If you read your integer data as floats, and assuming they are within the allowed range, we recommend that you change them to an integer type. You can do that simply by typing compress. We make that recommendation so that your integer variables will benefit from the automatic promotion described in (4.1).

    4.4 Let us show what can go wrong if you do not follow our advice in (4.3). For the floating-point types — for float and double — integer values outside the ranges listed in (3.3) are rounded.

    Consider a float variable, and remember that the integer range for floats is [-16,777,216, 16,777,216]. If you tried to store a value outside the range in the variable — say, 16,777,221 — and if you checked afterward, you would discover that actually stored was 16,777,220! Here are some other examples of rounding:

         desired value                            stored (rounded)
         to store            true value             float value 
         ------------------------------------------------------
         maximum             16,777,216              16,777,216 
         maximum+1           16,777,217              16,777,216
         ------------------------------------------------------
         maximum+2           16,777,218              16,777,218
         ------------------------------------------------------
         maximum+3           16,777,219              16,777,220
         maximum+4           16,777,220              16,777,220
         maximum+5           16,777,221              16,777,220
         ------------------------------------------------------
         maximum+6           16,777,222              16,777,222
         ------------------------------------------------------
         maximum+7           16,777,223              16,777,224
         maximum+8           16,777,224              16,777,224
         maximum+9           16,777,225              16,777,224
         ------------------------------------------------------
         maximum+10          16,777,226              16,777,226
         ------------------------------------------------------

    When you store large integers in float variables, values will be rounded and no mention will be made of that fact.

    And that is why we say that if you have integer data that must be recorded precisely and if the values might be large — outside the range ±16,777,216 — do not use float. Use long or use double; or just use the compress command and let automatic promotion handle the problem for you.
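
    You can watch the rounding happen with the float() function, which rounds a double-precision value to float precision; a quick sketch:

    . display %12.0g float(16777217)
        16777216

    . display %12.0g float(16777218)
        16777218

    Just as in the table in (4.4), 16,777,217 is quietly rounded while 16,777,218 is stored exactly.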

    4.5 Unlike byte, int, and long, float and double variables are not promoted to preserve integer precision.

    Float values are not promoted because, well, they are not. Actually, there is a deep reason, but it has to do with the use of float variables for their real purpose, which is to store non-integer values.

    Double values are not promoted because there is nothing to promote them to. Double is Stata’s most precise storage type. The largest integer value Stata can store precisely is 9,007,199,254,740,992 and the smallest is -9,007,199,254,740,992.

    Integer values outside the range for doubles round in the same way that float values round, except at absolutely larger values.

  5. Floating-point precision

    5.1 The smallest, nonzero value that can be stored in float and double is

         Storage
         type      value          value in %21x         value in base 10
         -----------------------------------------------------------------
         float     ±2^-127    ±1.0000000000000X-07f   ±5.877471754111e-039
         double    ±2^-1022   ±1.0000000000000X-3fe   ±2.225073858507e-308
         -----------------------------------------------------------------

    We include the value shown in the third column, the value in %21x, for those who know how to read it. It is described in (9), but it is unimportant. We are merely emphasizing that these are the smallest values for properly normalized numbers.

    5.2 The smallest value of epsilon such that 1+epsilon ≠ 1 is

         Storage
         type      epsilon       epsilon in %21x        epsilon in base 10
         -----------------------------------------------------------------
         float      ±2^-23     ±1.0000000000000X-017    ±1.19209289551e-07
         double     ±2^-52     ±1.0000000000000X-034    ±2.22044604925e-16
         -----------------------------------------------------------------

    Epsilon is the distance from 1 to the next number on the floating-point number line. The corresponding unit roundoff error is u = ±epsilon/2. The unit roundoff error is the maximum relative roundoff error that is introduced by the floating-point number storage scheme.

    The smallest value of epsilon such that x+epsilon ≠ x is approximately |x|*epsilon, and the corresponding unit roundoff error is ±|x|*epsilon/2.
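
    Stata's epsfloat() and epsdouble() functions return these two epsilons, so you can check the definition directly; a small sketch:

    . display (1 + epsdouble()) != 1
    1

    . display (1 + epsdouble()/2) != 1
    0

    Adding the full epsilon budges 1; adding half of it does not.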

    5.3 The precision of the floating-point types is, depending on how you want to measure it,

         Measurement                           float              double
         ----------------------------------------------------------------
         # of binary digits                       24                  53
         # of base 10 digits (approximate)         7                  16 
    
         Relative precision                   ±2^-24              ±2^-53
         ... in base 10 (approximate)      ±5.96e-08           ±1.11e-16
         ----------------------------------------------------------------

    Relative precision is defined as

                            |x - x_as_stored|
                   ± max   -------------------
                      x           |x|

    performed using infinite precision arithmetic, x chosen from the subset of reals between the minimum and maximum values that can be stored. It is worth appreciating that relative precision is a worst-case relative error over all possible numbers that can be stored. Relative precision is identical to the unit roundoff error u defined in (5.2), but perhaps this definition is easier to appreciate.

    5.4 Stata never makes calculations in float precision, even if the data are stored as float.

    Stata makes double-precision calculations regardless of how the numeric data are stored. In some cases, Stata internally uses quad precision, which provides approximately 32 decimal digits of precision. If the result of the calculation is being stored back into a variable in the dataset, then the double (or quad) result is rounded as necessary to be stored.

    5.5 (False precision.) Double precision is 536,870,912 times more accurate than float precision. You may worry that float precision is inadequate to accurately record your data.

    Little in this world is measured to a relative accuracy of ±2^-24, the accuracy provided by float precision.

    Ms. Smith, it is reported, made $112,293 this year. Do you believe that is recorded to an accuracy of ±2^-24*112,293, or approximately ±0.7 cents?

    David was born on 21jan1952, so on 27mar2012 he was 21,981 days old, or 60.18 years old. Recorded in float precision, the precision is ±60.18*2^-24, or roughly ±1.89 minutes.

    Joe reported that he drives 12,234 miles per year. Do you believe that Joe’s report is accurate to ±12,234*2^-24, equivalent to ±3.85 feet?

    A sample of 102,400 people reported that they drove, in total, 1,252,761,600 miles last year. Is that accurate to ±74.7 miles (float precision)? If it is, each of them is reporting with an accuracy of roughly ±3.85 feet.

    The distance from the Earth to the moon is often reported as 384,401 kilometers. Recorded as a float, the precision is ±384,401*2^-24, or ±23 meters, or ±0.023 kilometers. Because the number was not reported as 384,401.000, one would assume float precision would be accurate to record that result. In fact, float precision is more than sufficiently accurate to record the distance because the distance from the Earth to the moon varies from 356,400 to 406,700 kilometers, some 50,300 kilometers. The distance would have been better reported as 384,401 ±25,150 kilometers. At best, the measurement 384,401 has relative accuracy of ±25,150/384,401 = ±0.065 (it is accurate to roughly one to two digits).

    Nonetheless, a few things have been measured with more than float accuracy, and they stand out as crowning accomplishments of mankind. Use double as required.

  6. Advice concerning 0.1, 0.2, …

    6.1 Stata uses base 2, binary. Popular numbers such as 0.1, 0.2, 100.21, and so on, have no exact binary representation in a finite number of binary digits. There are a few exceptions, such as 0.5 and 0.25, but not many.

    6.2 If you create a float variable containing 1.1 and list it, it will list as 1.1 but that is only because Stata’s default display format is %9.0g. If you changed that format to %16.0g, the result would appear as 1.1000000238419.

    This scares some users. If this scares you, go back and read (5.5) False Precision. The relative error is still a modest ±2^-24. The number 1.1000000238419 is likely a perfectly acceptable approximation to 1.1 because 1.1 was never measured to an accuracy of ±2^-24 anyway.

    6.3 One reason perfectly acceptable approximations to 1.1 such as 1.1000000238419 may bother you is that you cannot select observations containing 1.1 by typing if x==1.1 if x is a float variable. You cannot because the 1.1 on the right is interpreted as double precision 1.1. To select the observations, you have to type if x==float(1.1).
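
    Here is the phenomenon in runnable form, a sketch with an invented variable:

    . clear
    . set obs 5
    . generate float x = 1.1

    . count if x == 1.1
      0

    . count if x == float(1.1)
      5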

    6.4 If this bothers you, record the data as doubles. It is best to do this at the point when you read the original data or when you make the original calculation. The number will then appear to be 1.1. It will not really be 1.1, but it will have less relative error, namely, ±2^-53.

    6.5 If you originally read the data and stored them as floats, it is still sometimes possible to recover the double-precision accuracy just as if you had originally read the data into doubles. You can do this if you know how many decimal digits were recorded after the decimal point and if the values are within a certain range.

    If there was one digit after the decimal point and if the data are in the range [-1,048,576, 1,048,576], which means the values could be -1,048,576, -1,048,575.9, …, -1, 0, 1, …, 1,048,575.9, 1,048,576, then typing

    . gen double y = round(x*10)/10

    will recover the full double-precision result. Stored in y will be the number in double precision just as if you had originally read it that way.

    It is not possible, however, to recover the original result if x is outside the range ±1,048,576 because the float variable contains too little information.

    You can do something similar when there are two, three, or more decimal digits:

         # digits to
         right of 
         decimal pt.   range     command
         -----------------------------------------------------------------
             1      ±1,048,576   gen double y = round(x*10)/10
             2      ±  131,072   gen double y = round(x*100)/100
             3      ±   16,384   gen double y = round(x*1000)/1000
             4      ±    1,024   gen double y = round(x*10000)/10000
             5      ±      128   gen double y = round(x*100000)/100000
             6      ±       16   gen double y = round(x*1000000)/1000000
             7      ±        1   gen double y = round(x*10000000)/10000000
         -----------------------------------------------------------------

    Range is the range of x over which command will produce correct results. For instance, range = ±16 in the next-to-the-last line means that the values recorded in x must be -16 ≤ x ≤ 16.
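
    Here is the recovery at work, a sketch using a one-decimal-digit value inside the ±1,048,576 range:

    . clear
    . set obs 1
    . generate float  x = 987654.3
    . generate double y = round(x*10)/10

    . display y == 987654.3
    1

    The comparison is against the double-precision literal 987654.3, and it is true; had we compared x == 987654.3 instead, the result would have been 0.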

  7. Advice concerning exact data, such as currency data

    7.1 Yes, there are exact data in this world. Such data are usually counts of something or are currency data, which you can think of as counts of pennies ($0.01) or the smallest unit in whatever currency you are using.

    7.2 Just because the data are exact does not mean you need exact answers. It may still be that calculated answers are adequate if the data are recorded to a relative accuracy of ±2^-24 (float). For most analyses — even of currency data — this is often adequate. The U.S. deficit in 2011 was $1.5 trillion. Stored as a float, this amount has a (maximum) error of ±2^-24*1.5e+12 = ±$89,406.97. It would be difficult to imagine that ±$89,406.97 would affect any government decision maker dealing with the full $1.5 trillion.

    7.3 That said, you sometimes do need to make exact calculations. Banks tracking their accounts need exact amounts. It is not enough to say to account holders that we have your money within a few pennies, dollars, or hundreds of dollars.

    In that case, the currency data should be converted to integers (pennies) and stored as integers, and then processed as described in (4). Assuming the dollar-and-cent amounts were read into doubles, you can convert them into pennies by typing

    . replace x = x*100

    7.4 If you mistakenly read the currency data as a float, you do not have to re-read the data if the dollar amounts are between ±$131,072. You can type

    . gen double x_in_pennies = round(x*100)

    This works only if x is between ±131,072.
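
    A quick check of the arithmetic, assuming x held the float version of 1,234.56 dollars:

    . display round(float(1234.56)*100)
    123456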

  8. Advice for programmers

    8.1 Stata does all calculations in double (and sometimes quad) precision.

    Float precision may be adequate for recording most data, but float precision is inadequate for performing calculations. That is why Stata does all calculations in double precision. Float precision is also inadequate for storing the results of intermediate calculations.

    There is only one situation in which you need to exercise caution — if you create variables in the data containing intermediate results. Be sure to create all such variables as doubles.
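
    For example, a sketch with invented variable names:

    . generate double xsum = sum(x)
    . generate        xbad = sum(x)

    The first line keeps the running sum in double precision; the second, because the default type is float, silently rounds every intermediate sum to float precision.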

    8.2 The same quad-precision routines StataCorp uses are available to you in Mata; see the manual entries [M-5] mean, [M-5] sum, [M-5] runningsum, and [M-5] cross. Use them as you judge necessary.

  9. How to interpret %21x format (if you care)

    9.1 Stata has a display format that will display IEEE 754-2008 floating-point numbers in their full binary glory but in a readable way. You probably do not care; if so, skip this section.

    9.2 IEEE 754-2008 floating-point numbers are stored as a pair of numbers (a, b) that are given the interpretation

    z = a * 2^b

    where 1 <= |a| < 2 or a==0. In double precision, a is recorded with 53 binary digits: 1 to the left of the binary point and 52 to the right. In float precision, a is recorded with 24 binary digits: 1 to the left and 23 to the right. For example, the number 2 is recorded in double precision as

    a = +1.0000000000000000000000000000000000000000000000000000
    b = +1

    The value of pi is recorded as

    a = +1.1001001000011111101101010100010001000010110100011000
    b = +1

    9.3 %21x presents a and b in base 16. The double-precision value of 2 is shown in %21x format as

    +1.0000000000000X+001

    and the value of pi is shown as

    +1.921fb54442d18X+001

    In the case of pi, the interpretation is

    a = +1.921fb54442d18 (base 16)
    b = +001             (base 16)

    Reading this requires practice. It helps to remember that one-half corresponds to 0.8 (base 16). Thus, we can see that a is slightly larger than 1.5 (base 10) and b = 1 (base 10), so _pi is something over 1.5*2^1 = 3.

    The number 100,000 in %21x is

    +1.86a0000000000X+010

    which is to say

    a = +1.86a0000000000 (base 16)
    b = +010             (base 16)

    We see that a is slightly over 1.5 (base 10), and b is 16 (base 10), so 100,000 is something over 1.5*2^16 = 98,304.
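
    You can check that decoding with ordinary arithmetic; the hexadecimal digits 8, 6, and a stand for 8, 6, and 10:

    . display (1 + 8/16 + 6/16^2 + 10/16^3) * 2^16
    100000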

    9.4 %21x faithfully presents how the computer thinks of the number. For instance, we can easily see that the nice number 1.1 (base 10) is, in binary, a number with many digits to the right of the binary point:

    . display %21x 1.1
    +1.199999999999aX+000

    We can also see why 1.1 stored as a float is different from 1.1 stored as a double:

    . display %21x float(1.1)
    +1.19999a0000000X+000

    Float precision assigns fewer digits to the mantissa than does double precision, and 1.1 (base 10) in base 16 is a repeating hexadecimal.

    9.5 %21x can be used as an input format as well as an output format. For instance, Stata understands

    . gen x = 1.86ax+10

    Stored in x will be 100,000 (base 10).

    9.6 StataCorp has seen too many competent scientific programmers who, needing a perturbance for later use in their program, code something like

    epsilon = 1e-8

    It is worth examining that number:

    . display %21x 1e-8
    +1.5798ee2308c3aX-01b

    That is an ugly number that can only lead to the introduction of roundoff error in their program. A far better number would be

    epsilon = 1.0x-1b

    Stata and Mata understand the above statement because %21x may be used as input as well as output. Naturally, 1.0x-1b looks just like what it is,

    . display %21x 1.0x-1b
    +1.0000000000000X-01b

    and all those pretty zeros will reduce numerical roundoff error.

    In base 10, the pretty 1.0x-1b looks like

    . display %20.0g 1.0x-1b
    7.4505805969238e-09

    and that number may not look pretty to you, but you are not a base-2 digital computer.

    Perhaps the programmer feels that epsilon really needs to be closer to 1e-8. In %21x, we see that 1e-8 is +1.5798ee2308c3aX-01b, so if we want to get closer, perhaps we use

    epsilon = 1.6x-1b

    9.7 %21x was invented by StataCorp.

  10. Also see

    If you wish to learn more, see

    How to read the %21x format

    How to read the %21x format, part 2

    Precision (yet again), Part I

    Precision (yet again), Part II

How to read the %21x format, part 2

In my previous posting last week, I explained how computers store binary floating-point numbers, how Stata’s %21x display format displays with fidelity those binary floating-point numbers, how %21x can help you uncover bugs, and how %21x can help you understand behaviors that are not bugs even though they are surprising to us base-10 thinkers. The point is, it is sometimes useful to think in binary, and with %21x, thinking in binary is not difficult.

This week, I want to discuss double versus float precision.

Double (8-byte) precision provides 53 binary digits. Float (4-byte) precision provides 24. Let me show you what float precision looks like.

. display %21x sqrt(2) _newline %21x float(sqrt(2))
+1.6a09e667f3bcdX+000
+1.6a09e60000000X+000

All those zeros in the floating-point result are not really there;
%21x merely padded them on. The display would be more honest if it were

+1.6a09e6       X+000

Of course, +1.6a09e60000000X+000 is a perfectly valid way of writing +1.6a09e6X+000 — just as 1.000 is a valid way of writing 1 — but you must remember that float has fewer digits than double.

Hexadecimal 1.6a09e6 is a rounded version of 1.6a09e667f3bcd, and you can think of this in one of two ways:

     double     =  float   + extra precision
1.6a09e667f3bcd = 1.6a09e6 + 0.00000067f3bcd

or

  float   =      double     -  lost precision
1.6a09e6  = 1.6a09e667f3bcd - 0.00000067f3bcd

Note that more digits are lost than appear in the float result! The float result provides six hexadecimal digits (ignoring the 1), and seven digits appear under the heading lost precision. Double precision is more than twice float precision. To be accurate, double precision provides 53 binary digits and float provides 24, so double precision is really 53/24 = 2.208333 times float precision.

The “double” in double precision refers to the total number of binary digits used to store the mantissa and the exponent in z=a*2^b, which is 64 versus 32. The precision is 53 binary digits versus 24.

In this case, we obtained the float result from float(sqrt(2)), meaning that we rounded a more accurate double-precision result. One usually rounds when producing a less precise representation. One of the rounding rules is to round up if the digits being omitted (with a decimal point in front) exceed 1/2, meaning 0.5 in decimal. The equivalent rule in base-16 is to round up if the digits being omitted (with a hexadecimal point in front) exceed 1/2, meaning 0.8 (base-16). The lost digits were .67f3bcd, which are less than 0.8, and therefore, the last digit of the rounded result was not adjusted.

Actually, rounding to float precision is more difficult than I make out, and seeing that numbers are rounded correctly when displayed in %21x can be difficult. These difficulties have to do with the relationship between base-2 — the base in which the computer works — and base-16 — a base similar but not identical to base-2 that we humans find more readable. The fact is that %21x was designed for double precision, so it only does an adequate job of showing single precision. When %21x displays a float-precision number, it shows you the exactly equal double-precision number, and that turns out to matter.

We use base-16 because it is easier to read. But why do we use base-16 and not base-15 or base-17? We use base-16 because it is an integer power of 2, the base the computer uses. One advantage of bases being powers of each other is that base conversion can be done more easily. In fact, conversion can be done almost digit by digit. Doing base conversion is usually a tedious process. Try converting 2394 (base-10) to base-11. Well, you say, 11^3=1331, and 2*1331 = 2662 > 2394, so the first digit is 1 and the remainder is 2394-1331 = 1063. Now, repeating the process with 1063, I observe that 11^2 = 121 and that 1063 is bounded by 8*121=968 and 9*121=1089, so the second digit is 8 and I have a remainder of …. And eventually you produce the answer 1887 (base-11).
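
If you would rather not do that arithmetic by hand, Mata's inbase() function performs base conversion; here is a quick check of the example above:

. mata: inbase(11, 2394)
  1887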

Converting between bases when one is the power of another not only is easier but also is so easy you can do it in your head. To convert from base-2 to base-16, group the binary digits into groups of four (because 2^4=16) and then translate each group individually.

For instance, to convert 011110100010, proceed as follows:

0111 1010 0010
--------------
   7    a    2

I’ve performed this process often enough that I hardly have to think. But here is how you should think: Divide the binary number into four-digit groups. The four columns of the binary number stand for 8, 4, 2, and 1. When you look at 0111, say to yourself 4+2+1 = 7. When you look at 1010, say to yourself 8+2 = 10, and remember that the digit for 10 in base-16 is a.

Converting back is nearly as easy:

   7    a    2
--------------
0111 1010 0010

Look at 7 and remember the binary columns 8-4-2-1. Though 7 does not contain an 8, it does contain a 4 (leaving 3), and 3 contains a 2 and a 1.

I admit that converting base-16 to base-2 is more tedious than converting base-2 to base-16, but eventually, you’ll have the four-digit binary table memorized; there are only 16 lines. Say 7 to me, and 0111 just pops into my head. Well, I’ve been doing this a long time, and anyway, I’m a geek. I suspect I carry the as-yet-undiscovered binary gene, which means I came into this world with the base-2-to-base-16 conversion table hardwired:

base-2 base-16
0000 0
0001 1
0010 2
0011 3
0100 4
0101 5
0110 6
0111 7
1000 8
1001 9
1010 a
1011 b
1100 c
1101 d
1110 e
1111 f

Now that you can convert base-2 to base-16 — convert from binary to hexadecimal — and you can convert back again, let’s return to floating-point numbers.

Remember how floating-point numbers are stored:

z = a * 2^b, 1<=a<2 or a==0

For example,

    0.0 = 0.0000000000000000000000000000000000000000000000000000 * 2^-big
    0.5 = 1.0000000000000000000000000000000000000000000000000000 * 2^-1
    1.0 = 1.0000000000000000000000000000000000000000000000000000 * 2^0
sqrt(2) = 1.0110101000001001111001100110011111110011101111001101 * 2^0
    1.5 = 1.1000000000000000000000000000000000000000000000000000 * 2^0
    2.0 = 1.0000000000000000000000000000000000000000000000000000 * 2^1
    2.5 = 1.0100000000000000000000000000000000000000000000000000 * 2^1
    3.0 = 1.1000000000000000000000000000000000000000000000000000 * 2^1
    _pi = 1.1001001000011111101101010100010001000010110100011000 * 2^1
    etc.

In double precision, there are 53 binary digits of precision. One of the digits is written to the left of the binary point, and the remaining 52 are written to the right. Next observe that the 52 binary digits to the right of the binary point can be written in 52/4=13 hexadecimal digits. That is exactly what %21x does:

    0.0 = +0.0000000000000X-3ff
    0.5 = +1.0000000000000X-001
    1.0 = +1.0000000000000X+000
sqrt(2) = +1.6a09e667f3bcdX+000
    1.5 = +1.8000000000000X+000
    2.0 = +1.0000000000000X+001
    2.5 = +1.4000000000000X+001
    3.0 = +1.8000000000000X+001
    _pi = +1.921fb54442d18X+001

You could perform the binary-to-hexadecimal translation for yourself. Consider _pi. The first group of four binary digits after the binary point are 1001, and 9 appears after the binary point in the %21x result. The second group of four are 0010, and 2 appears in the %21x result.
The %21x result is an exact representation of the underlying binary, and thus you are equally entitled to think in either base.

In single precision, the rule is the same:

z = a * 2^b, 1<=a<2 or a==0

But this time, only 24 binary digits are provided for a, and so we have

    0.0 = 0.00000000000000000000000 * 2^-big
    0.5 = 1.00000000000000000000000 * 2^-1
    1.0 = 1.00000000000000000000000 * 2^0
sqrt(2) = 1.01101010000010011110011 * 2^0
    1.5 = 1.10000000000000000000000 * 2^0
    2.0 = 1.00000000000000000000000 * 2^1 
    2.5 = 1.01000000000000000000000 * 2^1
    3.0 = 1.10000000000000000000000 * 2^1
    _pi = 1.10010010000111111011011 * 2^1
    etc.

In single precision, there are 24-1=23 binary digits of precision to the right of the binary point, and 23 is not divisible by 4. If we try to convert to base-16, we end up with

sqrt(2) = 1.0110 1010 0000 1001 1110 011   * 2^0
          1.   6    a    0    9    e    ?  * 2^0

To fill in the last digit, we could recognize that we can pad on an extra 0 because we are to the right of the binary point. For example, 1.101 == 1.1010. If we padded on the extra 0, we have

sqrt(2) = 1.0110 1010 0000 1001 1110 0110  * 2^0
          1.   6    a    0    9    e    6  * 2^0

That is precisely the result %21x shows us:

. display %21x float(sqrt(2))
+1.6a09e60000000X+000

although we might wish that %21x would omit the 0s that aren’t really there, and instead display this as +1.6a09e6X+000.

The problem with this solution is that it can be misleading because the last digit looks like it contains four binary digits when in fact it contains only three. To show how easily you can be misled, look at _pi in double and float precisions:

. display %21x _pi _newline %21x float(_pi)
+1.921fb54442d18X+001
+1.921fb60000000X+001
        ^
  digit incorrectly rounded?

The computer rounded the last digit up from 5 to 6. The digits after the rounded-up digit in the full-precision result, however, are 0.4442d18, and are clearly less than 0.8 (1/2). Shouldn’t the rounded result be 1.921fb5X+001? The answer is that yes, 1.921fb5X+001 would be a better result if we had 6*4=24 binary digits to the right of the binary point. But we have only 23 digits; correctly rounding to 23 binary digits and then translating into base-16 results in 1.921fb6X+001. Because of the missing binary digit, the last base-16 digit can only take on the values 0, 2, 4, 6, 8, a, c, and e.

The computer performs the rounding in binary. Look at the relevant piece of this double-precision number in binary:

+1.921f   b    5    4    4    42d18X+001      number
       1011 0101 0100 0100 0100               expansion into binary
       1011 01?x xxxx xxxx xxxxxxxx           thinking about rounding
       1011 011x xxxx xxxx xxxxxxxx           performing rounding
+1.921f   b    6                   X+001      convert to base-16

The part I have converted to binary in the second line is around the part to be rounded. In the third line, I’ve put x’s under the part we will have to discard to round this double into a float. The x’d out part — 10100… — is clearly greater than 1/2, so the last digit (where I put a question mark) must be rounded up. Thus, _pi in float precision rounds to 1.921fb6+X001, just as the computer said.

Float precision does not play much of a role in Stata despite the fact that most users store their data as floats. Regardless of how data are stored, Stata makes all calculations in double precision, and float provides more than enough precision for most data applications. The U.S. deficit in 2011 is projected to be $1.5 trillion. One hopes that a grand total of $26,624 — the error that would be introduced by storing this projected deficit in float precision — would not be a significant factor in any lawmaker’s decision concerning the issue. People in the U.S. are said to work about 40 hours per week, or roughly 0.238 of the hours in a week. I doubt that number is accurate to 0.4 milliseconds, the error that float would introduce in recording the fraction. A cancer survivor might live 350.1 days after a treatment, but we would introduce an error of roughly 1/2 second if we record the number as a float. One might question whether the instant of death could even conceptually be determined that accurately. The moon is said to be 384.401 thousand kilometers from the Earth. Record in 1,000s of kilometers in float, and the error is almost 1 meter. At its closest and farthest, the moon is 356,400 and 406,700 kilometers away. Most fundamental constants of the universe are known only to a few parts in a million, which is to say, to less than float precision, although we do know the speed of light in a vacuum to one decimal digit beyond float accuracy; it’s 299,792.458 kilometers per second. Round that to float and you’ll be off by 0.01 km/s.

All integers up to ±16,777,216 can be recorded without rounding in float precision; all integers up to ±9,007,199,254,740,992 can be recorded without rounding in double precision.

People working with dollar-and-cent data in Stata usually find it best to use doubles both to avoid rounding issues and in case the total exceeds $167,772.16. Rounding issues of 0.01, 0.02, etc., are inherent when working with binary floating point, regardless of precision. To avoid all problems, these people should use doubles and record amounts in pennies. That will have no difficulty with sums up to $90,071,992,547,409.92, which is to say, about $90 trillion. That’s nine quadrillion pennies. In my childhood, I thought a quadrillion just meant a lot, but it has a formal definition.

All of which is a long way from where I started, but now you are an expert in understanding binary floating-point numbers the way a scientific programmer needs to understand them: z=a*2^b. You are nearly all the way to understanding the IEEE 754-2008 standard. That standard merely states how a and b are packed into 32 and 64 bits, and the entire point of %21x is to avoid those details because, packed together, the numbers are unreadable by humans.

References

Cox, N. J. 2006. Stata tip 33: Sweet sixteen: Hexadecimal formats and precision problems. Stata Journal 6: 282-283.

Gould, W. 2006. Mata matters: Precision. Stata Journal 6: 550-560.

Linhart, J. M. 2008. Mata matters: Overflow and IEEE floating-point format. Stata Journal 8: 255-268.

How to read the %21x format

%21x is a Stata display format, just as are %f, %g, %9.2f, %td, and so on. You could put %21x on any variable in your dataset, but that is not its purpose. Rather, %21x is for use with Stata’s display command for those wanting to better understand the accuracy of the calculations they make. We use %21x frequently in developing Stata.

%21x produces output that looks like this:

. display %21x 1
+1.0000000000000X+000

. display %21x 2
+1.0000000000000X+001

. display %21x 10
+1.4000000000000X+003

. display %21x sqrt(2)
+1.6a09e667f3bcdX+000

All right, I admit that the result is pretty unreadable to the uninitiated. The purpose of %21x is to show floating-point numbers exactly as the computer stores them and thinks about them. In %21x’s defense, it is more readable than how the computer really records floating-point numbers, yet it loses none of the mathematical essence. Computers really record floating-point numbers like this:

      1 = 3ff0000000000000
      2 = 4000000000000000
     10 = 4024000000000000
sqrt(2) = 3ff6a09e667f3bcd

Or more correctly, they record floating-point numbers in binary, like this:

      1 = 0011111111110000000000000000000000000000000000000000000000000000
      2 = 0100000000000000000000000000000000000000000000000000000000000000
     10 = 0100000000100100000000000000000000000000000000000000000000000000
sqrt(2) = 0011111111110110101000001001111001100110011111110011101111001101

By comparison, %21x is a model of clarity.

The above numbers are 8-byte floating point, also known as double precision, encoded in binary64 IEEE 754-2008 format and written here in big endian order. Big endian means that the bytes are ordered, left to right, from most significant to least significant. Some computers store floating-point numbers in little endian format — bytes ordered from least significant to most significant — and then numbers look like this:

      1 = 000000000000f03f
      2 = 0000000000000040
     10 = 0000000000004024
sqrt(2) = cd3b7f669ea0f63f

or:

      1 = 0000000000000000000000000000000000000000000000001111000000111111
      2 = 0000000000000000000000000000000000000000000000000000000001000000
     10 = 0000000000000000000000000000000000000000000000000100000000100100
sqrt(2) = 1100110100111011011111110110011010011110101000001111011000111111

Regardless of that, %21x produces the same output:

. display %21x 1
+1.0000000000000X+000

. display %21x 2
+1.0000000000000X+001

. display %21x 10
+1.4000000000000X+003

. display %21x sqrt(2)
+1.6a09e667f3bcdX+000

Binary computers store floating-point numbers as a number pair, (a, b); the desired number z is encoded

z = a * 2^b

For example,

 1 = 1.00 * 2^0
 2 = 1.00 * 2^1
10 = 1.25 * 2^3

The number pairs are encoded in the bit patterns, such as 00111111…01, above.

I’ve written the components a and b in decimal, but for reasons that will become clear, we need to preserve the essential binaryness of the computer’s number. We could write the numbers in binary, but they will be more readable if we represent them in base-16:

       base-10              base-16
    floating point       floating point
    --------------       --------------
     1 = 1.00 * 2^0       1 = 1.00 * 2^0
     2 = 1.00 * 2^1       2 = 1.00 * 2^1
    10 = 1.25 * 2^3      10 = 1.40 * 2^3

“1.40?”, you ask, looking at the last row, which indicates 1.40*2^3 for decimal 10.

The period in 1.40 is not a decimal point; it is a hexadecimal point. The first digit after the hexadecimal point is the number for 1/16ths, the next is for 1/(16^2)=1/256ths, and so on. Thus, 1.40 hexadecimal equals 1 + 4*(1/16) + 0*(1/256) = 1.25 in decimal.
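
You can confirm that arithmetic in Stata; here is 1.40 (base-16) times 2^3:

. display (1 + 4/16) * 2^3
10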

And that is how you read %21x values +1.0000000000000X+000, +1.0000000000000X+001, and +1.4000000000000X+003. To wit,

       base-10              base-16
    floating point       floating point              %21x
    --------------       --------------      ---------------------
     1 = 1.00 * 2^0       1 = 1.00 * 2^0     +1.0000000000000X+000
     2 = 1.00 * 2^1       2 = 1.00 * 2^1     +1.0000000000000X+001
    10 = 1.25 * 2^3      10 = 1.40 * 2^3     +1.4000000000000X+003

The mantissa is shown to the left of the X, and, to the right of the X, the exponent for the 2. %21x is nothing more than a binary variation of the %e format with which we are all familiar, for example, 12 = 1.20000e+01 = 1.2*10^1. It’s such an obvious generalization that one would guess it has existed for a long time, so excuse me when I mention that we invented it at StataCorp. If I weren’t so humble, I would emphasize that this human-readable way of representing binary floating-point numbers preserves nearly every aspect of the IEEE floating-point number. Being humble, I will merely observe that 1.40x+003 is more readable than 4024000000000000.

Now that you know how to read %21x, let me show you how you might use it. %21x is particularly useful for examining precision issues.

For instance, the cube root of 8 is 2; 2*2*2 = 8. And yet, in Stata, 8^(1/3) is not equal to 2:

. display 8^(1/3)
2

. assert 8^(1/3) == 2
assertion is false
r(9);

. display %20.0g 8^(1/3)
1.99999999999999978

I blogged about that previously; see How Stata calculates powers. The error is not much:

. display 8^(1/3)-2
-2.220e-16

In %21x format, however, we can see that the error is only one bit:

. display %21x 8^(1/3)
+1.fffffffffffffX+000

. display %21x 2    
+1.0000000000000X+001

I wish the answer for 8^(1/3) had been +1.0000000000001X+001, because then the one-bit error would have been obvious to you. Instead, rather than being a bit too large, the actual answer is a bit too small — one bit too small to be exact — so we end up with +1.fffffffffffffX+000.

One bit off means being off by 2^(-52), which is 2.220e-16, and which is the number we saw when we displayed 8^(1/3)-2 in base 10. So %21x did not reveal anything we could not have figured out in other ways. The nature of the error, however, is more obvious in %21x format than it is in a base-10 format.

On Statalist, the point often comes up that 0.1, 0.2, …, 0.4, 0.6, …, 0.9, 0.11, 0.12, … have no exact representation in the binary base that computers use. That becomes obvious with %21x format:

. display %21x 0.1
+1.999999999999aX-004

. display %21x 0.2
+1.999999999999aX-003

. ...

0.5 does have an exact representation, of course, as do all the negative powers of 2:

. display %21x 0.5           //  1/2
+1.0000000000000X-001

. display %21x 0.25          //  1/4
+1.0000000000000X-002

. display %21x 0.125         //  1/8
+1.0000000000000X-003

. display %21x 0.0625        // 1/16
+1.0000000000000X-004

. ...

Integers have exact representations, too:

. display %21x 1
+1.0000000000000X+000

. display %21x 2
+1.0000000000000X+001

. display %21x 3
+1.8000000000000X+001

. ...

. display %21x 10
+1.4000000000000X+003

. ...

. display %21x 10786204
+1.492b380000000X+017

. ...

%21x is a great way of becoming familiar with base-16 (equivalently, base-2), which is worth doing if you program base-16 (equivalently, base-2) computers.

Let me show you something useful that can be done with %21x.

A programmer at StataCorp has implemented a new statistical command. In four examples, the program produces the following results:

41.8479499816895
 6.7744922637939
 0.1928647905588
 1.6006311178207

Without any additional information, I can tell you that the program has a bug, and that StataCorp will not be publishing the code until the bug is fixed!

How can I know that this program has a bug without even knowing what is being calculated? Let me show you the above results in %21x format:

+1.4ec89a0000000X+005
+1.b191480000000X+002
+1.8afcb20000000X-003
+1.99c2f60000000X+000

Do you see what I see? It’s all those zeros. In randomly drawn problems, it would be unlikely that there would be all zeros at the end of each result. What is likely is that the results were somehow rounded, and indeed they were. The rounding in this case was due to using float (4-byte) precision inadvertently. The programmer forgot to include a double in the ado-file.
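
The failure mode is easy to reproduce; here is a sketch (the variable names are invented):

. clear
. set obs 1
. generate        x = sqrt(2)
. generate double y = sqrt(2)

. display %21x x
+1.6a09e60000000X+000

. display %21x y
+1.6a09e667f3bcdX+000

The first generate omits double, so x is a float, and the trailing zeros give it away.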

And that’s one way %21x is used.

I am continually harping on programmers at StataCorp that if they are going to program binary computers, they need to think in binary. I go ballistic when I see a comparison that’s coded as “if (abs(x-y)<1e-8) …” in an attempt to deal with numerical inaccuracy. What kind of number is 1e-8? Well, it’s this kind of number:

. display %21x 1e-8
+1.5798ee2308c3aX-01b

Why put the computer to all that work, and exactly how many digits are you, the programmer, trying to ignore? Rather than 1e-8, why not use the “nice” numbers 7.451e-09 or 3.725e-09, which is to say, 1.0x-1b or 1.0x-1c? If you do that, then I can see exactly how many digits you are ignoring. If you code 1.0x-1b, I can see you are ignoring 1b=27 binary digits. If you code 1.0x-1c, I can see you are ignoring 1c=28 binary digits. Now, how many digits do you need to ignore? How imprecise do you really think your calculation is? By the way, Stata understands numbers such as 1.0x-1b and 1.0x-1c as input, so you can type the precise number you want.

As another example of thinking in binary, a StataCorp programmer once described a calculation he was making. At one point, the programmer needed to normalize a number in a particular way, and so calculated x/10^trunc(log10(x)), and held onto the 10^trunc(log10(x)) for denormalization later. Dividing by 10, 100, etc., may be easy for us humans, but it’s not easy in binary, and it can result in very small amounts of dreaded round-off error. And why even bother to calculate the log, which is an expensive operation? “Remember,” I said, “how floating-point numbers are recorded on a computer: z = a*2^b, where 1 <= |a| < 2 or a==0. Writing in C, it’s easy to extract the components. In fact, isn’t a number normalized to be between 1 and 2 even better for your purposes?” Yes, it turned out it was.

Even I sometimes forget to think in binary. Just last week I was working on a problem and Alan Riley suggested a solution. I thought a while. “Very clever,” I said. “Recasting the problem in powers of 2 will get rid of that divide that caused half the problem. Even so, there’s still the pesky subtraction.” Alan looked at me, imitating a look I so often give others. “In binary,” Alan patiently explained to me, “the difference you need is the last 19 bits of the original number. Just mask out the other digits.”

At this point, many of you may want to stop reading and go off and play with %21x. If you play with %21x long enough, you’ll eventually examine the relationship between numbers recorded as Stata floats and as Stata doubles, and you may discover something you think to be an error. I will discuss that next week in my next blog posting.