
I believe that your comment is incorrect.
I demonstrated that internally Firebird stores, and uses for calculations/comparisons, 17 significant digits (e.g. 1.2345678901234567 and 1.2345678901234569 are different values), and that the problem is that isql shows only 16 digits for both values, hiding the last significant digit and making different values look the same (1.234567890123457). And that is confusing: ISQL shows a decimal representation with only 16 significant digits, while the API returns the correct binary representation as it is stored in the DB (thus applications can show all 17 significant digits when needed).

Here is another example demonstrating the problem:

create table tmp(dp1 double precision, dp2 double precision);
commit;
insert into tmp values(1.2345678901234567, 1.2345678901234569);
commit;
select * from tmp;
/* isql incorrectly shows them as the same values */
/* but internally they are different: */
select * from tmp where dp1=dp2;
/* according to you we should see something here, but we don't, because the values are actually different */
select * from tmp where dp1<>dp2;
/* according to you we should not see anything here, but we do, because the values are actually different */
select cast(dp1 as numeric(18,16)) dp1, cast(dp2 as numeric(18,16)) dp2 from tmp;
/* you need to cast every field to see the real values */
drop table tmp;
commit;

Please reconsider your decision. While the 16th digit may be unreliable in double precision, that doesn't mean it always is. The real precision is ~7.2 decimal digits for float and ~15.9 decimal digits for double, so there may be cases when the last digit is significant.
Just for reference, these "~1.0" values are shown as different if forcibly printed in exponential form, so this really smells like an ISQL output bug to me. In fact, ISQL supports 8 decimal digits of precision for floats and 16 decimal digits of precision for doubles. However, floats are printed using the "f" conversion specifier of printf, while doubles are printed using the "g" specifier. But let's look at the docs regarding the specified decimal precision:
For the a, A, e, E, f and F specifiers, the precision is the number of digits to be printed after the decimal point (6 by default). For the g and G specifiers, it is the maximum number of significant digits to be printed.

I suppose this difference explains the lost last digit in the output: with "g", a precision of 16 caps the total significant digits at 16, whereas "f" and "e" count digits after the decimal point. If ISQL is modified to print doubles using either the "f" or the "e" specifier, the issue disappears.

Reopened based on Dmitry's analysis/comments.

ISQL is returning 16 digits, which is the correct level of precision, so there is no error.