Floating-point numbers are intended to represent real numbers, and
most of the "interesting" reasoning about floating-point arithmetic is
carried out at the level of reals. Indeed, it is at the real number
level that the fundamentals of high-level operations are specified in
the IEEE 754 Standard governing floating-point arithmetic.
However, not all aspects of floating-point operations can be fully
specified just in terms of the represented real numbers. IEEE
floating-point numbers include additional values that do not
represent real numbers (infinities and "Not a Number"s), and there are
two different "signed" representations of zero. Full correctness
requires not only that the real value be right, but also that the
signs of zeros be correct and that infinities be handled
appropriately, using rules that are not always obvious.
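As a concrete illustration (our own sketch in Python, assuming IEEE 754 binary64 arithmetic, not part of the verification work described here), the two zeros are equal as real values yet differ in their bit encodings, and the non-real values follow their own rules:

```python
import math
import struct

# The signed zeros compare equal at the real-number level...
pos_zero, neg_zero = 0.0, -0.0
assert pos_zero == neg_zero

# ...but their binary64 encodings differ in the sign bit.
def bits(x):
    return struct.unpack("<Q", struct.pack("<d", x))[0]

assert bits(pos_zero) == 0x0000000000000000
assert bits(neg_zero) == 0x8000000000000000

# copysign observes the sign that equality ignores.
assert math.copysign(1.0, neg_zero) == -1.0

# NaN represents no real number: it is not even equal to itself.
assert math.nan != math.nan

# Infinities obey fixed rules, e.g. inf + 1 = inf, inf - inf = NaN.
assert math.inf + 1.0 == math.inf
assert math.isnan(math.inf - math.inf)
```

Equality thus conflates the two zeros, which is exactly why specifications stated purely in terms of represented real values are incomplete.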
Moreover, some floating-point architectures go beyond the IEEE 754
Standard and bring further complications such as "pseudo-zeros".
Finally, the new decimal floating-point formats in the draft revision
of the IEEE 754 Standard also feature multiple representations of the
same real value with distinct "scales". For example, "2.0" (20 × 10^-1)
and "2.00" (200 × 10^-2) are considered distinct even though they
represent the same value, and the scales of results must also be
correct.
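Python's decimal module, which follows the same decimal arithmetic model, makes the distinction observable; the following is a minimal sketch, not taken from the verification work itself:

```python
from decimal import Decimal

# "2.0" and "2.00" carry the same value but distinct scales.
a = Decimal("2.0")    # coefficient 20,  exponent -1
b = Decimal("2.00")   # coefficient 200, exponent -2
assert a == b                       # same real value
assert str(a) != str(b)             # distinct representations
assert a.as_tuple().exponent == -1
assert b.as_tuple().exponent == -2

# The scale of a result is fixed by rules on the operands' scales:
# for addition, the smaller (more negative) exponent is used.
assert str(Decimal("1.0") + Decimal("1.00")) == "2.00"
```

So a correct decimal operation must deliver not just the right value but the right member of the "cohort" of representations of that value.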
All this means that in floating-point verification, the interplay
between the real values and the bit encodings is important. We will
explain some of the subtleties involved and describe how this issue
has been handled in our verification work so far.