Add support for Binary constants longer than 64K [CORE5821] #6082
Comments
Commented by: @asfernandes This should not be done in the engine. What should be done is to teach ISQL to allow bind variables and to load data (inline or from other files) into bound blob variables. |
Commented by: Nick (nick) Why should this not be done? It is useful. Even concatenating parts, as in the example, is more usable than juggling two files. |
Modified by: Sean Leyne (seanleyne) summary: Binary constants longer than 64K => Add support for Binary constants longer than 64K |
Commented by: @cincuranet Adriano, even if you teach ISQL to do it, there are plenty of other tools that will not. Yes, these can be improved as well, but even then you have plain SQL scripts executed by custom code (i.e. via fbclient.dll), and this will fail again. Yes, one can parse the SQL manually and extract the values into parameters, but as we both agree (hopefully), this is beyond what an average developer should have to do while using Firebird. I'm not saying the limit should be "unlimited". But some reasonable value of, say, 10 MB per statement/binary literal would make sense in most cases. |
Commented by: @asfernandes Jiri, 10 MB is still a low value for many things. The worst situation one can be in is to hit an unexpected problem that is difficult to fix urgently. Say, today we have this problem too, but it's clear that literals are for small strings. If we increase the limit to 10 MB, what happens when someone uses this to insert a 9.5 MB file, then the file grows to 10.5 MB and the thing stops working? |
Commented by: @cincuranet I agree with you. I was talking about 10 MB because I feel that's doable given the current state and limitations inside Firebird, and it might unblock use cases for some (hopefully significant) portion of users. Of course I'd like to see unlimited SQL size, more than 255 contexts in SPs, more than 32K for a plan (or is it 64K nowadays), etc. But that's not going to happen anytime soon (though I'd like to be proven wrong ;)). And it's easier to argue that inserting 10 MB+ as a literal isn't really a standard use case and one should use params instead, than to argue the same for 64K, which is "small" by today's numbers. |
Commented by: @cincuranet Or let me approach this differently. Is this a question of making "some buffer bigger" and that's it (assuming there's no regression, etc.)? Or does it need some major re-architecting? If the latter, bummer. But if it's just about some buffer, isn't this simply a low hanging fruit? |
Commented by: @asfernandes A CHAR/VARCHAR is by design (USHORT) limited to 64K. |
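The 64K ceiling mentioned above follows directly from the field width used for string lengths: an unsigned 16-bit value (USHORT) cannot represent a length above 65535. A trivial illustration (not Firebird code, just the arithmetic):

```python
# A VARCHAR/CHAR length is carried in an unsigned 16-bit field (USHORT),
# so the largest representable length in bytes is 2**16 - 1.
USHORT_MAX = 2**16 - 1

print(USHORT_MAX)  # 65535 — hence the ~64K limit on binary literals
```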
Commented by: @sim1984 Jiri, why does EF require exactly a literal in this place? Why can't it use a prepared query with a parameter to which the long string is sent? |
Commented by: @cincuranet Because the migration result is a string (or strings). These are either executed by EF directly, or saved as a script. |
Commented by: @sim1984 It's about __MigrationHistory when updating a large model. An error occurs: |
Submitted by: Nick (nick)
Votes: 3
Example:
insert into T(BinaryBlob) values(x'DF******');
This works today, but the length of the constant is limited (< 64K).
This would allow exporting long blob data to a script.
The workaround is to split the data into parts (~64K per part):
insert into T(BinaryBlob) values(
cast('' as blob sub_type binary) ||
x'AF******' ||
x'D7******' ||
x'B8******'
);
This ugly workaround allows inserting a blob of large size (up to ~5 MB, because the maximum SQL statement size is restricted to 10 MB).
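The workaround above is mechanical enough to script. A minimal sketch that generates such a chunked INSERT from a byte string, assuming the table `T` and column `BinaryBlob` from the example (the 32 KiB chunk size is an assumption chosen to stay conservatively under the 64K literal limit):

```python
def blob_insert_sql(data: bytes, chunk_size: int = 32 * 1024) -> str:
    """Build an INSERT that works around the 64K binary-literal limit
    by concatenating x'...' chunks onto an empty binary blob."""
    # Start from an empty blob cast so the concatenation yields a blob,
    # as in the workaround shown above.
    parts = ["cast('' as blob sub_type binary)"]
    for i in range(0, len(data), chunk_size):
        parts.append("x'%s'" % data[i:i + chunk_size].hex().upper())
    return ("insert into T(BinaryBlob) values(\n    "
            + " ||\n    ".join(parts)
            + "\n);")

print(blob_insert_sql(b"\xAF\xD7\xB8"))
```

Each generated statement still has to fit within the overall SQL size limit, so very large blobs would need to be split across several UPDATE-style appends or loaded via a parameter instead.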