Issue Details

Key: DNET-944
Type: Bug
Status: Resolved
Resolution: Fixed
Priority: Major
Assignee: Jiri Cincura
Reporter: tonim
Votes: 0
Watchers: 0
.NET Data provider

Resize compression buffer as needed in decompression

Created: 30/Jul/20 01:24 PM   Updated: 19/Aug/20 09:10 AM
Component/s: ADO.NET Provider
Affects Version/s: 7.5.0.0
Fix Version/s: vNext

Environment: Firebird .NET provider selecting rows, most often rows containing blobs, for example SELECT * FROM RDB$PROCEDURES on a database with some procedures. You can reproduce the bug easily by setting CompressionBufferSize to 8192, for example.


Description
Any environment, when Compression=true is set in the connection string; more common when PacketSize=32000 is also used.
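For illustration, a connection string enabling these options might be built like this (a hedged sketch; the server, database path and credentials are placeholders):

using FirebirdSql.Data.FirebirdClient;

// Hypothetical settings; only Compression and PacketSize matter here.
var csb = new FbConnectionStringBuilder
{
    DataSource = "localhost",
    Database = @"C:\data\test.fdb",
    UserID = "SYSDBA",
    Password = "masterkey",
    Compression = true,  // enable zlib wire compression
    PacketSize = 32000,  // larger packets make the overflow easier to hit
};
using var connection = new FbConnection(csb.ToString());
connection.Open();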

A big constant decompression buffer size is defined in FirebirdNetworkStream:
const int CompressionBufferSize = 1 * 1024 * 1024;

Any decompression larger than this size throws an exception in the HandleDecompression function.

I provide a tested fix: the buffer grows dynamically, depending on the uncompressed size.

// There is no need to define a big buffer size; it will grow as needed
const int CompressionBufferSize = 32000;



int HandleDecompression(byte[] buffer, int count)
{
    _decompressor.InputBuffer = buffer;
    _decompressor.NextOut = 0;
    _decompressor.NextIn = 0;
    _decompressor.AvailableBytesIn = count;
    do
    {
        // The output buffer may have been replaced by a bigger one in
        // the previous iteration, so reassign it on every pass.
        _decompressor.OutputBuffer = _compressionBuffer;
        _decompressor.AvailableBytesOut = _compressionBuffer.Length - _decompressor.NextOut;
        var rc = _decompressor.Inflate(Ionic.Zlib.FlushType.None);
        if (rc != Ionic.Zlib.ZlibConstants.Z_OK)
            throw new IOException($"Error '{rc}' while decompressing the data.");
        if (_decompressor.AvailableBytesIn != 0)
        {
            // Double the buffer size until the decompressed data fits,
            // preserving the bytes already written.
            byte[] newCompressionBuffer = new byte[_compressionBuffer.Length * 2];
            Array.Copy(_compressionBuffer, newCompressionBuffer, _decompressor.NextOut);
            _compressionBuffer = newCompressionBuffer;
        }
    } while (_decompressor.AvailableBytesIn != 0);
    return _decompressor.NextOut;
}




Comments
Jiri Cincura added a comment - 17/Aug/20 11:20 AM - edited
Thanks. I changed the code to resize the buffer as needed (using Array.Resize for easier/better code). Now I think I have to handle the compression side as well, because with a smaller buffer it is easier to not have enough room to compress the complete `buffer`.
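For reference, the growth step expressed with Array.Resize might look roughly like this (a sketch of the idea only, not the committed change; the field names follow the snippet in the description):

// Sketch: Array.Resize allocates a bigger array and copies the old
// contents over, replacing the manual new[] + Array.Copy in the patch
// (copying the whole old buffer instead of NextOut bytes is harmless).
if (_decompressor.AvailableBytesIn != 0)
{
    Array.Resize(ref _compressionBuffer, _compressionBuffer.Length * 2);
}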

tonim added a comment - 03/Aug/20 07:46 AM
Here is a link to a very simple project that opens a connection and reads data with a DataReader, reproducing the "decompression buffer too small" bug.
It also contains the database (metadata only); copy the database to a path and update the path in the connection string.
It uses the latest NuGet package of the Firebird client.

https://www.dropbox.com/s/e8k3i0tymjq2bo7/repos.zip?dl=0
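For readers without access to the archive, the reader part might look roughly like this (a hypothetical sketch; database path and credentials are placeholders):

using FirebirdSql.Data.FirebirdClient;

const string connectionString =
    "DataSource=localhost;Database=C:\\data\\repro.fdb;" +
    "User=SYSDBA;Password=masterkey;Compression=true;PacketSize=32000";

using var connection = new FbConnection(connectionString);
connection.Open();
using var command = new FbCommand("SELECT * FROM RDB$PROCEDURES", connection);
using var reader = command.ExecuteReader();
while (reader.Read())
{
    // Blob columns such as RDB$PROCEDURE_SOURCE produce large
    // decompressed packets and trigger the buffer overflow.
    _ = reader[0];
}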


Jiri Cincura added a comment - 03/Aug/20 04:57 AM
> You can reproduce the bug easily in any database by setting a default decompressed buffer size of 8000, for example (and of course the Compression flag in the connection string)

Sure. One can make the buffer 1 byte and it will fail.

> If you need a sample project including a database reproducing the bug with the 1 megabyte buffer size, I will provide one.

Yeah, that would be interesting to see.

tonim added a comment - 01/Aug/20 04:58 PM
Not exactly; I also altered some connection string parameters, like setting the packet size to 32000.

Anyway, you can't expect variable decompressed data to always fit in a fixed buffer, so in my opinion the current implementation of HandleDecompression is wrong. I think there is no maximum compression ratio (only typical compression ratios for some types of content).
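As a quick illustration of that point (a standalone sketch, not provider code), a run of zero bytes deflates to a tiny fraction of its size, so a small compressed packet can legitimately inflate far beyond any fixed buffer:

using System;
using System.IO;
using System.IO.Compression;

// Compress 10 MB of zeros; the deflated output is only a few KB,
// i.e. a compression ratio well beyond 1000:1.
var data = new byte[10 * 1024 * 1024];
using var output = new MemoryStream();
using (var deflate = new DeflateStream(output, CompressionLevel.Optimal, leaveOpen: true))
{
    deflate.Write(data, 0, data.Length);
}
Console.WriteLine($"{data.Length} bytes -> {output.Length} bytes compressed");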

You can reproduce the bug easily in any database by setting a default decompressed buffer size of 8000, for example (and of course the Compression flag in the connection string).

If you need a sample project including a database reproducing the bug with the 1 megabyte buffer size, I will provide one.

Thanks.



Jiri Cincura added a comment - 31/Jul/20 05:57 AM
Do I understand correctly that you overrun the default buffer size when simply running SELECT * FROM RDB$PROCEDURES?