GSTAT output is incorrect for tables with more than 2 billion records [CORE2519] #2929
Labels
affect-version: 2.0.0
affect-version: 2.0.1
affect-version: 2.0.2
affect-version: 2.0.3
affect-version: 2.0.4
affect-version: 2.0.5
affect-version: 2.1.0
affect-version: 2.1.1
affect-version: 2.1.2
affect-version: 2.1.3
affect-version: 2.5 Alpha 1
affect-version: 2.5 Beta 1
component: gstat
fix-version: 2.1.4
fix-version: 2.5 Beta 2
priority: major
qa: cannot be tested
type: bug
Submitted by: @ibaseru
It seems that the output (or the counter) for the table record count uses a 4-byte signed integer instead of an 8-byte one.
For example, the statistics for a table containing ~3.7 billion records show the record count as a negative value:
ORDER_LINE (136)
    Primary pointer page: 156, Index root page: 157
    Average record length: 60.09, total records: -574915500
    Average version length: 0.00, total versions: 0, max versions: 0
    Data pages: 22958125, data page slots: 22958125, average fill: 76%
    Fill distribution:
         0 - 19% = 0
        20 - 39% = 0
        40 - 59% = 1
        60 - 79% = 22958124
        80 - 99% = 0
If this value is converted to hex and then back to decimal as unsigned, it becomes 3720051796, which is close to the real record count. In any case, a 4-byte integer, even an unsigned one, is not enough, because Firebird 2.x has a 48-bit record number limit, not a 32-bit one.
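To make the arithmetic concrete, here is a minimal standalone C sketch (an illustration only, not code from the GSTAT sources; the values are taken from the report above). It shows how a count near 3.7 billion wraps to -574915500 when stored in a signed 32-bit variable, how reinterpreting the same bits as unsigned recovers 3720051796, and why a 64-bit counter is needed:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        /* The real record count exceeds INT32_MAX (2147483647),
           so it cannot be represented in a signed 32-bit counter. */
        uint64_t real_count = 3720051796u;

        /* What a wrapped 32-bit counter prints (two's complement wrap). */
        int32_t as_signed = (int32_t)real_count;
        printf("signed 32-bit:   %" PRId32 "\n", as_signed);   /* -574915500 */

        /* The hex round-trip described above: same bits, read as unsigned. */
        uint32_t as_unsigned = (uint32_t)as_signed;
        printf("unsigned 32-bit: %" PRIu32 "\n", as_unsigned); /* 3720051796 */

        /* A 64-bit counter covers the full 48-bit record number range. */
        printf("64-bit counter:  %" PRIu64 "\n", real_count);  /* 3720051796 */
        return 0;
    }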
P.S. Please also check GSTAT for other places where overflow can happen: for example, if an index is created on such a table, the number of nodes (keys) may also be reported incorrectly if the wrong variable types are used for those counters. A related pitfall is sketched below.
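As an illustration of the kind of latent bug the P.S. warns about (the variable name is hypothetical, not taken from the GSTAT sources): even after a counter is widened to 64 bits, a leftover 32-bit printf conversion specifier still prints the value incorrectly, so every output site has to be checked along with the variable declarations:

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        uint64_t node_count = 3720051796u;   /* hypothetical index node count */

        /* Wrong: "%d" expects an int; passing a 64-bit argument here is
           undefined behavior and typically prints a truncated value. */
        /* printf("total nodes: %d\n", node_count); */

        /* Right: the conversion specifier must match the widened type. */
        printf("total nodes: %" PRIu64 "\n", node_count);
        return 0;
    }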
Commits: 1b1b603 b0d1a53
====== Test Details ======
I've interpreted "more than 2 billion records" as more than 2,000,000,000 records.
Such a database would occupy too much disk space, so at present the test cannot be implemented.