One interesting aspect of floating-point is that the chosen
rounding mode can affect the result of a computation if the exact
result cannot be precisely represented. SBV exports the functions
fpAdd,
fpSub,
fpMul,
fpDiv,
fpFMA
and
fpSqrt, all of which allow users to specify the IEEE-supported
RoundingMode for the operation. This example illustrates how
SBV can be used to find rounding modes where, for instance, addition
can produce different results. We have:
>>> roundingAdd
Satisfiable. Model:
rm = RoundTowardPositive :: RoundingMode
x = -4.0039067 :: Float
y = 131076.0 :: Float
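One plausible way to phrase such a query is sketched below. This is illustrative and may differ from the actual definition of roundingAdd in the source: we ask the solver for a non-default rounding mode and two floats whose sum under that mode differs from the sum under the default mode.

```haskell
import Data.SBV

-- A sketch of a 'roundingAdd'-style query (illustrative; the real
-- definition may differ): find a rounding mode rm and floats x, y
-- such that adding under rm disagrees with default addition.
roundingAdd :: IO SatResult
roundingAdd = sat $ do
  rm <- free "rm" :: Symbolic SRoundingMode
  -- exclude the default mode, which would trivially agree with (+)
  constrain $ rm ./= literal RoundNearestTiesToEven
  x <- sFloat "x"
  y <- sFloat "y"
  let lhs = fpAdd rm x y   -- addition under the chosen mode
      rhs = x + y          -- addition under the default mode
  -- rule out NaN/infinite results, so the disagreement is purely
  -- about rounding, not about exceptional values
  constrain $ fpIsPoint lhs
  constrain $ fpIsPoint rhs
  pure $ lhs ./= rhs
```

Note the constraint excluding RoundNearestTiesToEven: without it, the solver could pick the default mode itself, making the query trivially unsatisfiable.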
(Note that depending on your version of Z3, you might get a different
result.) Unfortunately, Haskell floats do not allow computation with
arbitrary rounding modes, but SBV's
SFloatingPoint type does.
We have:
>>> sat $ \x -> x .== (fpAdd sRoundTowardPositive (-4.0039067) 131076.0 :: SFloat)
Satisfiable. Model:
s0 = 131072.0 :: Float
>>> (-4.0039067) + 131076.0 :: Float
131071.99
We can see why these two results are indeed different:
'RoundTowardPositive (which rounds towards positive infinity)
produces a larger result. Computing the same sum at Double
precision:
>>> (-4.0039067) + 131076.0 :: Double
131071.9960933
We see that the "more precise" result is larger than the
Float value, justifying the larger result obtained with
'RoundTowardPositive. A more detailed study is beyond our current
scope, so we'll merely note that floating-point representation and
semantics are indeed a thorny subject, and point to
http://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/02Numerics/Double/paper.pdf
as an excellent guide.
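As a quick sanity check that needs no solver, the arithmetic above can be replayed in plain Haskell: the default (round-to-nearest) Float sum falls below the more precise Double sum, which is why rounding toward positive infinity lands on the next Float up.

```haskell
-- Plain-Haskell replay of the sums above, no SBV required.
main :: IO ()
main = do
  let f = (-4.0039067) + 131076.0 :: Float
      d = (-4.0039067) + 131076.0 :: Double
  print f   -- the round-to-nearest Float result shown above
  print d   -- the more precise Double result shown above
  -- the exact sum lies above the Float result, so rounding
  -- toward positive infinity picks the next representable Float
  print (realToFrac f < d)
```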