Submitted by: Petr Kristan (epospro)
Attachments: a
Database: ODS = 11.1
Default character set: WIN1250
Table column: BLOB segment 80, subtype TEXT, Nullable
Client charset: UTF-8
I'm writing a UTF-8 text blob (in the attached file), size 2172 bytes, with 3 calls:
isc_put_segment(status, &to_blob, 1024, buffer)
isc_put_segment(status, &to_blob, 1024, buffer)
isc_put_segment(status, &to_blob, 124, buffer)
When the blob_id is updated into the table, I get the error: "Cannot transliterate character between character sets".
If the blob is written with a single large buffer:
isc_put_segment(status, &to_blob, 2172, buffer)
everything is OK.
Commented by: Petr Kristan (epospro)
UTF-8 example in the Czech language.
Attachment: a [ 11907 ]
Commented by: @ibprovider
CORE2122
?
I use the UTF-8 client charset, not UNICODE_FSS, and the inserted text is valid UTF-8. But your problem may have the same base.
>I use the UTF-8 client charset, not UNICODE_FSS, and the inserted text is valid UTF-8.
And what?
This is a common (old) issue with transliteration of BLOB data between different charsets.
It was resolved in FB 2.5.
I have done some additional tests. If I shrink the buffer to 1 byte, it is not possible to insert any multibyte-encoded UTF-8 code point!
// Not a compilable example
buffer = "á"; // U+00E1, UTF-8: c3 a1
isc_put_segment(status, &to_blob, 1, buffer);
isc_put_segment(status, &to_blob, 1, buffer + 1);