Does "epsilon" really guarantees anything in floating-point computations? -
To make the problem short, let's say I want to compute the expression a / (b - c) on floats.

To make sure the result is meaningful, I can check whether b and c are equal:
    float eps = std::numeric_limits<float>::epsilon();
    if ((b - c) > eps || (c - b) > eps)
    {
        return a / (b - c);
    }
But my tests show that this is not enough to guarantee either meaningful results, or not failing to provide a result where one is possible.
Case 1:

    a = 1.0f; b = 0.00000003f; c = 0.00000002f;

Result: the if condition is not met, yet the expression would produce the correct result 100000008 (correct to the floats' precision).
Case 2:

    a = 1e33f; b = 0.000003; c = 0.000002;

Result: the if condition is met, but the expression produces the meaningless result +1.#INF00.
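For the record, here is a minimal self-contained sketch that reproduces both cases (the exact digits and the spelling of infinity, such as +1.#INF00 on older MSVC, vary by compiler):

    #include <iostream>
    #include <limits>

    int main()
    {
        const float eps = std::numeric_limits<float>::epsilon();

        // Case 1: the epsilon test rejects a division that would be fine.
        float a = 1.0f, b = 0.00000003f, c = 0.00000002f;
        std::cout << ((b - c) > eps || (c - b) > eps)  // 0: condition not met
                  << " " << a / (b - c) << '\n';       // yet result is ~1e8

        // Case 2: the epsilon test accepts a division that overflows.
        a = 1e33f; b = 0.000003f; c = 0.000002f;
        std::cout << ((b - c) > eps || (c - b) > eps)  // 1: condition met
                  << " " << a / (b - c) << '\n';       // yet result is inf
    }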
I found it far more reliable to check the result, not the arguments:
    const float inf = std::numeric_limits<float>::infinity();
    float x = a / (b - c);
    if (-inf < x && x < inf)
    {
        return x;
    }
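Equivalently, a sketch assuming C++11: std::isfinite from <cmath> expresses the same check and also rejects NaN, which the open-interval comparison above happens to reject as well, since every comparison with NaN is false:

    #include <cmath>   // std::isfinite, C++11

    // Hypothetical helper: a / (b - c) if representable, else a fallback.
    float safe_ratio(float a, float b, float c, float fallback)
    {
        float x = a / (b - c);
        // std::isfinite is false for +inf, -inf and NaN (e.g. 0.0f / 0.0f)
        return std::isfinite(x) ? x : fallback;
    }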
But then what is epsilon for? Why is everyone saying "use epsilon"?
"you must use epsilon when dealing floats" knee-jerk reaction of programmers superficial understanding of floating-point computations, comparisons in general (not zero).
this unhelpful because doesn't tell how minimize propagation of rounding errors, doesn't tell how avoid cancellation or absorption problems, , when problem indeed related comparison of 2 floats, it doesn't tell value of epsilon right doing.
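For example, a "right" epsilon usually has to scale with the operands; a fixed absolute epsilon is only meaningful near magnitude 1. Here is a sketch of a relative comparison (nearly_equal is a made-up helper, and the correct tolerance still depends on how x and y were computed):

    #include <cmath>
    #include <limits>

    // Relative test: scales the tolerance by the operands' magnitude.
    bool nearly_equal(float x, float y)
    {
        const float eps = std::numeric_limits<float>::epsilon();
        return std::fabs(x - y) <= eps * std::fmax(std::fabs(x), std::fabs(y));
    }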
If you have not read What Every Computer Scientist Should Know About Floating-Point Arithmetic, it is a good starting point. Beyond that, if you are interested in the precision of the result of the division in your example, you have to estimate how imprecise b - c was made by previous rounding errors, because indeed if b - c is small, a small absolute error on it corresponds to a large absolute error on the result. If your concern is only that the division should not overflow, then your test (on the result) is right. There is no reason to test for a null divisor with floating-point numbers: testing for overflow of the result captures both the case where the divisor is null and the case where the divisor is so small that it makes the result unrepresentable with any precision.
Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a very tedious thing to do by hand.