Issue Details

Key: CORE-6216
Type: Bug
Status: Resolved
Resolution: Fixed
Priority: Major
Assignee: Dmitry Yemanov
Reporter: Alex Kurov
Votes: 0
Watchers: 3
Project: Firebird Core

request size limit exceeded when selecting from a table with a large number of indices

Created: 29/Dec/19 11:08 AM   Updated: 07/Aug/20 11:40 AM
Component/s: Engine
Affects Version/s: 3.0.4
Fix Version/s: 4.0 Beta 2, 3.0.6

File Attachments: core-6216.sql (27 kB)

Environment: Win10 SuperServer

QA Status: Done successfully


Description
Error message

request size limit exceeded
GDSCODE = 335544382
SQLCODE = -901

We came across this message when using 2.5 and thought it was caused by the SQL statement size limit.
But we still see it in 3.0 (http://www.firebirdfaq.org/faq197/),
and we get this message when executing statements shorter than 64k.
We tried to reproduce this behaviour in a test example and came to the conclusion
that the error is raised when you combine the IN operator with a select from a table with a large number of indices.
It is very strange, so we created a test example with one table and 20 indices and got the error when selecting a non-indexed field from the table.
If we decrease the number of indices, the error disappears.
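By way of illustration, a minimal reproduction along the lines described above could look roughly like this (the actual test case is in the attached core-6216.sql; the table, column and index names below are invented):

CREATE TABLE T (
    ID      INTEGER,
    F01     INTEGER,
    F02     INTEGER,
    /* ... more columns, F03 .. F20 ... */
    PAYLOAD VARCHAR(100)  /* not indexed */
);

/* around 20 single-column indices on the table */
CREATE INDEX IDX_T_F01 ON T (F01);
CREATE INDEX IDX_T_F02 ON T (F02);
/* ... IDX_T_F03 .. IDX_T_F20 ... */

/* selecting a non-indexed column with a long IN list raises
   "request size limit exceeded" once the index count is high enough */
SELECT PAYLOAD
FROM T
WHERE PAYLOAD IN ('v0001', 'v0002', 'v0003' /* , ... hundreds more values ... */);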


Comments
Dmitry Yemanov added a comment - 29/Dec/19 11:54 AM
This error means that the compiled statement requires more than 10 MB of memory. It is a kind of protection against misuse; it's worth thinking about re-writing the query in some other way.

Karol Bieniaszewski added a comment - 29/Dec/19 12:39 PM
You should avoid such queries.
Use a temporary table to store the values and simply join with it instead of using a huge number of IN values.
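For illustration, a sketch of that workaround using a Firebird global temporary table might look like this (table and column names are invented):

CREATE GLOBAL TEMPORARY TABLE GTT_FILTER (VAL VARCHAR(100))
    ON COMMIT DELETE ROWS;

/* the application fills the table with the would-be IN-list values ... */
INSERT INTO GTT_FILTER (VAL) VALUES ('v0001');
INSERT INTO GTT_FILTER (VAL) VALUES ('v0002');
/* ... */

/* ... and joins against it instead of using a huge IN list */
SELECT T.PAYLOAD
FROM T
JOIN GTT_FILTER G ON G.VAL = T.PAYLOAD;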

Alex Kurov added a comment - 29/Dec/19 01:03 PM
Of course I understand that such statements are bad practice, but this is auto-generated SQL from a report builder tool.
The report tool needs refactoring, of course, but at the moment I'm just wondering why the compiled SQL is so big.
Why does the number of indices (as far as I can see, only the number, because the table is empty) affect the size?
If I decrease the number of indices to 19 and execute this statement, I can see the plan, and there are no index reads:

PLAN (T NATURAL)
PLAN (T NATURAL)
PLAN (T NATURAL)
PLAN (RDB$DATABASE NATURAL)

Sean Leyne added a comment - 29/Dec/19 11:01 PM
@Alex,

Please post the PLAN for the original query (with indexes)

Sean Leyne added a comment - 29/Dec/19 11:04 PM
@Dmitry,

1 - It would seem appropriate to update the message in two regards: (i) provide context that the size of the compiled request is the problem, and (ii) state that the limit is 10 MB. No?

2 - IMO, given the amount of memory in modern systems, 10 MB seems like a really low setting.

Alex Kurov added a comment - 30/Dec/19 08:28 AM
@Sean
> Please post the PLAN for the original query (with indexes)
I would do so, but it does not work.

Database: C:\temp\TEST_DATABASE.FDB, User: SYSDBA
SQL> set explain;
SQL> set planonly;
SQL> input C:\temp\test.txt;
Statement failed, SQLSTATE = HY000
request size limit exceeded
After line 0 in file C:\temp\test.txt

The plan (in the previous comment) was taken when there were 19 indices, because I get the error with 20 indices in this example.
But I should note again that this statement does not reference any indexed fields, so it is very strange that the number of indices affects the result.

Dmitry Yemanov added a comment - 30/Dec/19 01:03 PM
I've moved the test case to an attachment to make the ticket readable.

Dmitry Yemanov added a comment - 30/Dec/19 02:09 PM
The problem here is that the optimizer adds hidden casts for better index matching, and this is done for every available index. But if matching fails, these casts remain alive and consume the request's static memory, even though they are not going to be used during query execution. This approach should be reworked, as it currently explodes the request's memory consumption from ~100 KB to ~10 MB.

Alex Kurov added a comment - 31/Jan/20 10:22 AM
Do you have any plans for reworking this feature?

Dmitry Yemanov added a comment - 31/Jan/20 10:41 AM
Yes, it's already in the works.

Dmitry Yemanov added a comment - 07/Feb/20 05:00 AM
Please try the fresh snapshot build (33242).