FbBulkCopy class for faster import [DNET1041] #952

Closed
firebird-automations opened this issue Apr 15, 2021 · 5 comments

Comments

@firebird-automations

Submitted by: Baldur Fürchau (bfuerchau)

Attachments:
FbBulkCopy.cs

I have developed a class FbBulkCopy, similar to SqlBulkCopy. I can't reach the speed of SQL Server, but with few columns and small records the speed is good.

I create an EXECUTE BLOCK (p1 ... pn) whose body contains many INSERT INTO table VALUES (:p1 ... :pn); statements, followed by END;.

Restrictions: the total size of the parameters must stay within the record size of 64 KB, and the size of the block must be less than 64 KB.
But, for example, an insert with 8 columns speeds up from 8,000 to 22,500.
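
For illustration, here is a minimal sketch of the idea (not the attached class itself; table, column, and method names are made up): one EXECUTE BLOCK is generated whose input parameters carry the values for several rows, and whose body contains one INSERT per row.

```csharp
using System.Text;
using FirebirdSql.Data.FirebirdClient;

static class BulkInsertSketch
{
    // Builds one EXECUTE BLOCK that inserts several rows of (A INTEGER, B VARCHAR(50))
    // into DEMO_TABLE, passing all values as block input parameters.
    public static void InsertBatch(FbConnection connection, FbTransaction transaction,
        (int A, string B)[] rows)
    {
        var header = new StringBuilder("EXECUTE BLOCK (");
        var body = new StringBuilder(" AS BEGIN ");
        for (var i = 0; i < rows.Length; i++)
        {
            if (i > 0) header.Append(", ");
            header.Append($"a{i} INTEGER = @a{i}, b{i} VARCHAR(50) = @b{i}");
            body.Append($"INSERT INTO DEMO_TABLE (A, B) VALUES (:a{i}, :b{i}); ");
        }
        header.Append(')').Append(body.ToString()).Append("END");

        using var cmd = new FbCommand(header.ToString(), connection, transaction);
        for (var i = 0; i < rows.Length; i++)
        {
            // One pair of parameters per row, bound to the block inputs above.
            cmd.Parameters.Add(new FbParameter($"@a{i}", rows[i].A));
            cmd.Parameters.Add(new FbParameter($"@b{i}", rows[i].B));
        }
        cmd.ExecuteNonQuery();
    }
}
```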

Where can I send/upload my class so you can check whether it can be used for the Firebird client?
Some field types still need to be added.

@firebird-automations

Commented by: Baldur Fürchau (bfuerchau)

FbBulkCopy class with basic implementation.

@firebird-automations

Modified by: Baldur Fürchau (bfuerchau)

Attachment: FbBulkCopy.cs [ 13590 ]

@firebird-automations

Commented by: André Ziegler (andre.ziegler)

Fork the code on GitHub (https://github.com/cincuranet/FirebirdSql.Data.FirebirdClient), commit the changes to a new branch, and once you have added everything, create a pull request so that it can be checked and merged.

@firebird-automations

Commented by: @cincuranet

It's also good to discuss this on the mailing list first, because other people might give valuable feedback on the problems they are trying to solve, so the implementation ends up useful for many and not only for one.

@firebird-automations

Commented by: Baldur Fürchau (bfuerchau)

After some further investigation I found that Firebird has too many limits to do a bulk copy with standard features.

1) An EXECUTE BLOCK can have only a restricted number of parameters. Their total size in bytes must not be bigger than the current row size (64 KB). If you work with UTF8, the usable buffer becomes 4 times smaller.
2) An EXECUTE BLOCK can contain at most 255 INSERT statements (see the chunking sketch after this list).
3) In Firebird 2.5 the SQL text can be 64 KB, with UTF8 only 16 KB. If the SQL is even a little bit bigger, I get a network error.
4) In Firebird 3.0 the SQL text can be 10 MB, or 2.5 MB with UTF8, but the limit of 255 inserts still exists.
5) In many examples, multiple inserts can be done only with constants to avoid the row-size limit.
6) The benefit of sending more inserts in one block is consumed by the time needed to convert all values to strings to build the inserts.
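
A rough sketch, purely for illustration, of how the 255-insert limit alone would force splitting a bulk load into blocks (the attached class may handle this differently):

```csharp
using System.Collections.Generic;
using System.Linq;

static class BulkChunking
{
    // Hard cap from limit 2) above; a real implementation would also have to
    // shrink the block when the parameter buffer or statement text gets too big.
    public const int MaxInsertsPerBlock = 255;

    public static IEnumerable<T[]> Chunk<T>(IReadOnlyList<T> rows, int blockSize = MaxInsertsPerBlock)
    {
        for (var offset = 0; offset < rows.Count; offset += blockSize)
            yield return rows.Skip(offset).Take(blockSize).ToArray();
    }
}

// Usage, together with InsertBatch from the earlier sketch:
//   foreach (var block in BulkChunking.Chunk(allRows))
//       BulkInsertSketch.InsertBatch(connection, transaction, block);
```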

I didn't check the break-even point, but I measured the following times on an i7 at 2.6 GHz.

Copying a table from DB1 to DB2 with only 8 columns, the bulk copy is 3 times faster than single inserts.
Copying a table with 33 columns is 15% slower than single inserts.
Copying a table with 72 columns is 28% slower than single inserts.

Conclusion:
I think that at the moment BulkCopy isn't useful.
The workaround of importing from a CSV file directly with Firebird/isql also makes no sense if the source data must first be extracted as CSV. That takes more time than a direct read/insert from DB to DB.
