
GSTAT output is incorrect for tables with more than 2 billion records [CORE2519] #2929

Closed
firebird-automations opened this issue Jun 21, 2009 · 8 comments

Comments


Submitted by: @ibaseru

It seems that the output (or the counter) for the table record count uses a 4-byte signed integer instead of an 8-byte integer. For example, statistics for a table with ~3.7 billion records show a negative value:

ORDER_LINE (136)
Primary pointer page: 156, Index root page: 157
Average record length: 60.09, total records: -574915500
Average version length: 0.00, total versions: 0, max versions: 0
Data pages: 22958125, data page slots: 22958125, average fill: 76%
Fill distribution:
0 - 19% = 0
20 - 39% = 0
40 - 59% = 1
60 - 79% = 22958124
80 - 99% = 0

If this value is converted to hex and then back to decimal as unsigned, it becomes 3720051796, which is close to the real record count. In any case, a 4-byte integer, even unsigned, is not enough, because Firebird 2.x has a 48-bit record number limit, not a 32-bit one.

P.S. Please also check GSTAT for other places where overflow can happen. For example, if an index is created on such a table, the number of nodes (keys) may also be shown incorrectly if the wrong variable types are used for those numbers.

Commits: 1b1b603 b0d1a53

====== Test Details ======

I've understood "more than 2 billion records" as more than 2'000'000'000.
Such a database would occupy too much disk space, so the test cannot currently be implemented.


Modified by: @hvlad

assignee: Vlad Khorsun [ hvlad ]


Modified by: @hvlad

status: Open [ 1 ] => Resolved [ 5 ]

resolution: Fixed [ 1 ]

Fix Version: 2.1.4 [ 10361 ]

Fix Version: 2.0.6 [ 10303 ]


Commented by: Claudio Valderrama C. (robocop)

Changed the affected and fixed versions.


Modified by: Claudio Valderrama C. (robocop)

Version: 2.5 Beta 1 [ 10251 ]

Version: 2.0.5 [ 10222 ]

Version: 2.1.1 [ 10223 ]

Version: 2.5 Alpha 1 [ 10224 ]

Version: 2.0.4 [ 10211 ]

Version: 2.1.0 [ 10041 ]

Version: 2.0.3 [ 10200 ]

Version: 2.0.2 [ 10130 ]

Version: 2.0.1 [ 10090 ]

Version: 2.0.0 [ 10091 ]

Version: 2.1.3 [ 10302 ]

Fix Version: 2.5 RC1 [ 10300 ]

Fix Version: 2.0.6 [ 10303 ] =>


Commented by: Sergey Mereutsa (green_dq)

Just curious: what hardware handles this database?


Modified by: @pcisar

status: Resolved [ 5 ] => Closed [ 6 ]


Modified by: @pavel-zotov

QA Status: No test


Modified by: @pavel-zotov

status: Closed [ 6 ] => Closed [ 6 ]

QA Status: No test => Cannot be tested

Test Details: see the "Test Details" section above.
