
 
 15.8 Summary
 ============
 
    * Most computer arithmetic is done using either integers or
      floating-point values.  Standard 'awk' uses double-precision
      floating-point values.
 
    * In the early 1990s Barbie mistakenly said, "Math class is tough!"
      Although math isn't tough, floating-point arithmetic isn't the same
      as pencil-and-paper math, and care must be taken:
 
         - Not all numbers can be represented exactly.
 
         - Comparisons should use a delta, rather than being done
           directly with '==' and '!=' (see the example after this
           list).
 
         - Errors accumulate.
 
         - Operations are not always truly associative or distributive.
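
      For example, 0.1 + 0.2 is not exactly 0.3 in binary floating
      point, but it is well within a small delta of it.  (This is a
      minimal sketch; the tolerance of 1e-9 is an arbitrary choice
      that depends on the application.)

           $ gawk 'BEGIN {
           >     x = 0.1 + 0.2
           >     delta = 1e-9
           >     if (x != 0.3 && x - 0.3 < delta && 0.3 - x < delta)
           >         print "not exactly 0.3, but within delta of it"
           > }'
           -| not exactly 0.3, but within delta of it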
 
    * Increasing the precision can help, but it is not a panacea.
 
    * Often, increasing the precision and then rounding to the desired
      number of digits produces reasonable results.
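
      For instance, printing a result first with enough digits to
      expose the representation error and then with a rounded format
      (a minimal sketch; the formats chosen here are arbitrary):

           $ gawk 'BEGIN {
           >     x = 0.1 + 0.2
           >     printf("%0.17g\n", x)    # all the digits
           >     printf("%.2f\n", x)      # rounded for display
           > }'
           -| 0.30000000000000004
           -| 0.30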
 
    * Use '-M' (or '--bignum') to enable MPFR arithmetic.  Use 'PREC' to
      set the precision in bits, and 'ROUNDMODE' to set the IEEE 754
      rounding mode.
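
      For example (PREC = 113 selects the precision of IEEE 754
      quadruple precision; the value here is only an illustration),
      compare 1/3 computed with ordinary doubles and with MPFR
      arithmetic:

           $ gawk 'BEGIN { printf("%0.25f\n", 1 / 3) }'
           -| 0.3333333333333333148296163
           $ gawk -M -v PREC=113 'BEGIN { printf("%0.25f\n", 1 / 3) }'
           -| 0.3333333333333333333333333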
 
    * With '-M', 'gawk' performs arbitrary-precision integer arithmetic
      using the GMP library.  This is faster and more space-efficient
      than using MPFR for the same calculations.
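
      For example, 2^100 + 1 cannot be represented exactly as a
      double, but with '-M' the result is an exact integer:

           $ gawk -M 'BEGIN { print 2 ^ 100 + 1 }'
           -| 1267650600228229401496703205377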
 
    * There are several areas in which 'gawk' disagrees with the POSIX
      standard regarding floating-point numbers.  It pays to be aware
      of them.
 
    * Overall, there is no need to be unduly suspicious about the results
      from floating-point arithmetic.  The lesson to remember is that
      floating-point arithmetic is always more complex than arithmetic
      using pencil and paper.  In order to take advantage of the power of
      floating-point arithmetic, you need to know its limitations and
      work within them.  For most casual use of floating-point
      arithmetic, you will often get the expected result if you simply
      round the display of your final results to the correct number of
      significant decimal digits.
 
    * As general advice, avoid presenting numerical data in a manner that
      implies better precision than is actually the case.