
make it possible to see all 17 significant digits for double precision values in isql [CORE3960] #4293

Open
firebird-automations opened this issue Oct 25, 2012 · 8 comments


Submitted by: Paulius Pazera (ppazera)

When we have slightly different double precision values in the database, e.g.

1 (stored as the 64-bit double precision value 3ff0 0000 0000 0000₁₆, see http://en.wikipedia.org/wiki/Double_precision)
≈ 1.0000000000000002 (3ff0 0000 0000 0001₁₆)
≈ 1.0000000000000004 (3ff0 0000 0000 0002₁₆; a more precise decimal representation can be found via http://www.binaryconvert.com)

they are correctly treated as different values in code, but isql currently has no way to display all 17 significant digits, so they look the same:

select cast(1.0000000000000000 as double precision) firstValue,
/* isql incorrectly shows following two values being the same as first one */
cast(1.0000000000000002 as double precision) secondValue,
cast(1.0000000000000004 as double precision) thirdValue,
/* following is just to show that those two double precision values are really different */
cast(1.0000000000000004 as double precision)-cast(1.0000000000000002 as double precision) difference,
/* as opposed to following two values which have the same 64-bit binary representation */
cast(1.00000000000000021 as double precision)-cast(1.00000000000000024 as double precision) no_difference,
/* following is to show that comparison works as expected on those values */
(select 'different' from rdb$database where cast(1.0000000000000004 as double precision)<>cast(1.0000000000000002 as double precision)) different,
(select 'equal' from rdb$database where cast(1.0000000000000002 as double precision)=cast(1.0000000000000002 as double precision)) equal
from rdb$database;

             FIRSTVALUE             SECONDVALUE              THIRDVALUE              DIFFERENCE           NO_DIFFERENCE DIFFERENT EQUAL
======================= ======================= ======================= ======================= ======================= ========= ======
      1.000000000000000       1.000000000000000       1.000000000000000   2.220446049250313e-16      0.0000000000000000 different equal
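
For illustration, here is a minimal standalone C sketch (my own, not part of isql or the Firebird sources) that prints the three values above alongside their 64-bit patterns; rendered with 16 significant digits they collapse to the same text, while 17 digits keep them distinct:

#include <stdio.h>
#include <string.h>
#include <stdint.h>

int main(void)
{
    double values[] = { 1.0, 1.0000000000000002, 1.0000000000000004 };
    for (int i = 0; i < 3; i++) {
        uint64_t bits;
        memcpy(&bits, &values[i], sizeof bits);  /* reinterpret the double as its 64-bit pattern */
        printf("%016llx  %.16g  %.17g\n", (unsigned long long) bits, values[i], values[i]);
    }
    return 0;
}

The output shows bit patterns 3ff0000000000000, 3ff0000000000001 and 3ff0000000000002; the %.16g column prints "1" for all three values, while the %.17g column distinguishes them.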

That confuses the user: it makes him think the values in the DB are identical and that the code should produce a single result, yet he sees completely different results, and he may even start to suspect that the code isn't working correctly or that the comparison operator doesn't handle floating point values properly. Basically, it makes code debugging difficult.

If isql had a way to display all 17 significant digits, the user would see that all three values are different, e.g.

             FIRSTVALUE             SECONDVALUE              THIRDVALUE
======================= ======================= =======================
     1.0000000000000000      1.0000000000000002      1.0000000000000004

and then he would see which code branches are really executed and would more easily understand why he gets the results he sees.

Please note that this request is not about decimal-to-binary conversion issues that introduce some discrepancy (i.e. not about casting a string to double precision), and not about calculations that can increase that discrepancy significantly, making e.g. the 13th significant digit unreliable (i.e. not about how those values were derived, whether they can be trusted up to the last bit/digit, or which digit is unreliable). It is simply about the ability to see what is already stored in the database: if two values differ only in the 17th significant digit, we want to be able to see that using isql.
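
One supporting fact for the number 17 (my addition, not from the original report): 17 significant decimal digits are always sufficient to recover an IEEE 754 double exactly, and C11 exposes this constant as DBL_DECIMAL_DIG in <float.h>. A minimal sketch of the round-trip:

#include <stdio.h>
#include <stdlib.h>
#include <float.h>

int main(void)
{
    double original = 1.0000000000000004;
    char buf[32];

    /* print with DBL_DECIMAL_DIG (17) significant digits, then parse back */
    snprintf(buf, sizeof buf, "%.*g", DBL_DECIMAL_DIG, original);
    double restored = strtod(buf, NULL);

    printf("%s -> %s\n", buf, original == restored ? "round-trips exactly" : "lost precision");
    return 0;
}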

Other users may want to see only 2 or 4 digits after the decimal point, so it would probably be a good idea to extend this request and allow the user to specify a double precision formatting mask for isql, e.g. via a command line switch/option or something else (a rough sketch of what such masks could do follows below).
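
Purely as an illustration of the formatting-mask idea (the printf-style masks below are assumptions for the sketch, not an existing or proposed isql syntax):

#include <stdio.h>

int main(void)
{
    double v = 1.0000000000000002;

    printf("%.2f\n",  v);  /* 1.00               - hypothetical "2 digits after the point" mask */
    printf("%.4f\n",  v);  /* 1.0000             - hypothetical "4 digits" mask */
    printf("%.17g\n", v);  /* 1.0000000000000002 - all 17 significant digits */
    return 0;
}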


Commented by: Sean Leyne (seanleyne)

The Firebird implementation of DOUBLE PRECISION supports only **15** digits of precision (16 digits in total).

ISQL is returning 16 digits, which is the correct level of precision, so there is no error.


Modified by: Sean Leyne (seanleyne)

status: Open [ 1 ] => Resolved [ 5 ]

resolution: Won't Fix [ 2 ]


Commented by: Paulius Pazera (ppazera)

I believe your comment is incorrect.

I demonstrated that internally Firebird stores, and uses for calculations/comparisons, 17 significant digits (e.g. 1.2345678901234567 and 1.2345678901234569 are different values). The problem is that isql shows only 16 digits for both values, hiding the last significant digit and making different values look the same (1.234567890123457). And that's confusing.

isql shows the decimal representation with only 16 significant digits. The API returns the correct binary representation as stored in the DB (thus applications can show all 17 significant digits when needed).

here is another example demonstrating the problem:

create table tmp(dp1 double precision, dp2 double precision);
commit;
insert into tmp values(1.2345678901234567, 1.2345678901234569);
commit;
select * from tmp; /* isql incorrectly shows them as the same values */
/* but internally they are different: */
select * from tmp where dp1=dp2; /* according to you we should see something here, but we don't because values are actually different */
select * from tmp where dp1<>dp2; /* according to you we should not see anything here, but we do because values are actually different */
select cast(dp1 as numeric(18,16)) dp1, cast(dp2 as numeric(18,16)) dp2 from tmp; /* you need to cast every field to see real values */
drop table tmp;
commit;

Please reconsider your decision.


Modified by: @pcisar

status: Resolved [ 5 ] => Closed [ 6 ]


Commented by: @dyemanov

While the 16th digit may be unreliable in double precision, that doesn't mean it always is. The real precision is ~7.2 decimal digits for float and ~15.9 decimal digits for double (24·log10(2) ≈ 7.22 and 53·log10(2) ≈ 15.95, from the 24-bit and 53-bit significands), so there may be cases where the last digit is significant.

Just for reference, these "~1.0" values are shown as different if forcibly printed in exponential form. So this really smells like an ISQL output bug to me.


Commented by: @dyemanov

In fact, ISQL supports 8 decimal digits of precision for floats and 16 decimal digits of precision for doubles. However, floats are printed using the "f" conversion specifier of printf, while doubles are printed using the "g" conversion specifier. But let's look at the docs regarding the specified decimal precision:

For a, A, e, E, f and F specifiers: this is the number of digits to be printed after the decimal point (by default, this is 6).
For g and G specifiers: This is the maximum number of significant digits to be printed.

I suppose the difference explains the lost 17th digit in the output. If ISQL is modified to print doubles using either the "f" or "e" specifier, the issue disappears.
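
A small C sketch of the behaviour described above (mine, assuming the precision values Dmitry quotes): with "g" the precision is the total number of significant digits, so %.16g collapses the two values, while %.17g or the exponential "e" form keeps them apart:

#include <stdio.h>

int main(void)
{
    double a = 1.0000000000000002;
    double b = 1.0000000000000004;

    printf("%.16g  %.16g\n", a, b);  /* prints "1  1" - the values look equal */
    printf("%.17g  %.17g\n", a, b);  /* 17 significant digits keep them distinct */
    printf("%.16e  %.16e\n", a, b);  /* exponential form also distinguishes them */
    return 0;
}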


Commented by: Sean Leyne (seanleyne)

Re-opened based on Dmitry's analysis/comments.


Modified by: Sean Leyne (seanleyne)

status: Closed [ 6 ] => Reopened [ 4 ]

resolution: Won't Fix [ 2 ] =>
