Optimized disk write sequence with forced writes [CORE754] #1129
Comments
Commented by: Alice F. Bird (firebirds) Date: 2005-07-24 16:54 There were no replies, so I consider this tracker item closed. |
Commented by: Alice F. Bird (firebirds) Date: 2005-03-23 13:04 Since v1.5, async writes are controlled by the engine. The |
Commented by: Alice F. Bird (firebirds) Date: 2005-03-23 10:23 Not quite true. On Win32 we currently have two options: 1. Forced writes with disk thrashing. 2. No forced writes with potentially indefinite delays. What I'm asking is something in between. Option 2 is What does IB's group commit do? Could that be implemented in FB? |
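For context, "group commit" in InterBase (and in most engines that use the term) generally means that several transactions committing at about the same time share a single forced write, so the cost of one flush is amortized over all of them. Below is a minimal sketch of that batching idea in C++; it is not InterBase's or Firebird's actual code, and all names (CommitRequest, flush_database_file, flusher_loop) are hypothetical.

```cpp
#include <condition_variable>
#include <mutex>
#include <vector>

struct CommitRequest { bool done = false; };

std::mutex                  mtx;
std::condition_variable     work_cv;    // wakes the flusher thread
std::condition_variable     done_cv;    // wakes committers whose batch is flushed
std::vector<CommitRequest*> pending;    // commits waiting for the next flush

// Hypothetical stand-in for the one forced write (fsync/FlushFileBuffers).
void flush_database_file() {}

// Called by each committing transaction: enqueue and wait for the batch flush.
void commit(CommitRequest& req)
{
    std::unique_lock<std::mutex> lock(mtx);
    pending.push_back(&req);
    work_cv.notify_one();
    done_cv.wait(lock, [&] { return req.done; });
}

// Dedicated flusher thread: one forced write per batch, not one per transaction.
void flusher_loop()
{
    for (;;) {
        std::vector<CommitRequest*> batch;
        {
            std::unique_lock<std::mutex> lock(mtx);
            work_cv.wait(lock, [] { return !pending.empty(); });
            batch.swap(pending);            // grab everything queued so far
        }
        flush_database_file();              // a single flush covers the whole batch
        {
            std::lock_guard<std::mutex> lock(mtx);
            for (CommitRequest* r : batch)
                r->done = true;
        }
        done_cv.notify_all();
    }
}
```

The point of the design is that the number of forced writes grows with the number of batches, not with the number of committing transactions.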
Commented by: Alice F. Bird (firebirds) Date: 2005-03-23 10:08 Pages are written in special order to ensure database |
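Assuming the truncated comment above refers to Firebird's careful-write strategy, the "special order" means a page is written only after the pages it depends on are already on disk, so the on-disk structures stay consistent at every moment. A rough illustration of that precedence rule follows; the names (Page, flush_page, write_page_to_disk) are hypothetical and this is not the engine's actual code.

```cpp
#include <set>
#include <vector>

struct Page {
    int                page_no;
    std::vector<Page*> must_write_first;   // precedence: pages this one points to
    bool               written = false;
};

// Hypothetical stand-in for the low-level page write.
void write_page_to_disk(const Page&) {}

// Flush a page only after every page it depends on has been flushed.
void flush_page(Page& p, std::set<int>& in_progress)
{
    if (p.written || in_progress.count(p.page_no))
        return;                             // already on disk, or cycle guard
    in_progress.insert(p.page_no);
    for (Page* dep : p.must_write_first)
        flush_page(*dep, in_progress);      // satisfy precedence first
    write_page_to_disk(p);
    p.written = true;
    in_progress.erase(p.page_no);
}
```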
Modified by: @pcisar Workflow: jira [ 10778 ] => Firebird [ 15172 ] |
Submitted by: @krilbe
SFID: 1091805#
With forced writes, on every transaction commit, the
hard disk can clearly be heard to do a large number of
seeks, i.e. the disk head vibrates for a short period
of time. We're talking a tenth of a second or so, but
still clearly audible.
I suspect that these seeks are one of the biggest
bottlenecks in Firebird write performance. In a batch I
ran it sounded as if about 1/10-1/3 of the time was
spent on hard disk writes when committing a couple of
times/second. Trying a few different commit frequencies
seemed to confirm this.
So, would it be possible to reorder the disk operations
on transaction commit to reduce drive seeks? I.e., first
collect (read) all the info required for the commit, put
all writes in a queue, sort the entries of the write
queue by DB file offset or something, and then write?
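What the report asks for could be sketched roughly as below, assuming a simple in-memory queue of pending page writes that is sorted by file offset before being flushed; all names (PendingWrite, write_at_offset, flush_database_file) are hypothetical, and this is not Firebird's actual I/O path.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct PendingWrite {
    std::uint64_t             offset;   // byte offset within the database file
    std::vector<std::uint8_t> data;     // page image to be written there
};

// Hypothetical stand-ins for the real I/O calls (pwrite/WriteFile, fsync/...).
void write_at_offset(std::uint64_t, const std::vector<std::uint8_t>&) {}
void flush_database_file() {}

// Issue all writes needed for a commit in ascending file-offset order,
// then force them to stable storage with a single flush.
void commit_writes(std::vector<PendingWrite>& queue)
{
    std::sort(queue.begin(), queue.end(),
              [](const PendingWrite& a, const PendingWrite& b) {
                  return a.offset < b.offset;
              });

    for (const PendingWrite& w : queue)
        write_at_offset(w.offset, w.data);   // one sweep of the disk head

    flush_database_file();                   // the single forced write at the end
    queue.clear();
}
```

Any such reordering would still have to respect the careful-write page ordering mentioned in the comments above, so offsets could only be sorted among writes that have no precedence constraint between them.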