
Can no longer connect to database when using isc_dpb_num_buffers parameter [CORE2518] #2928

Closed
firebird-automations opened this issue Jun 20, 2009 · 11 comments

Comments

@firebird-automations

Submitted by: Mark Jones (mjnz)

The connection fails with error isc_bad_dpb_content. It worked in all versions up to and including 2.5 Alpha 1. I believe that the engine should silently ignore this attempt to set buffers for SuperServer, because the client has no idea whether it is connecting to SuperServer or Classic.

The problem is in jrd/jrd.cpp at around line 4389:

case isc_dpb_num_buffers:
    dpb_buffers = rdr.getInt();
#ifndef SUPERSERVER
    if (dpb_buffers < 10)
#endif
    {
        ERR_post(Arg::Gds(isc_bad_dpb_content));
    }
    break;

Commits: eb92d79
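
For context, here is a minimal sketch of the kind of client-side attach that hits the error, using the classic ibase.h C API (the connection string, credentials and cache size are placeholders, not taken from the report):

#include <ibase.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    ISC_STATUS status[20];
    isc_db_handle db = 0;

    /* Build a DPB: version byte, then one clumplet per parameter
       (tag byte, length byte, value bytes). */
    char dpb[64];
    char* p = dpb;
    *p++ = isc_dpb_version1;

    *p++ = isc_dpb_user_name;            /* placeholder credentials */
    *p++ = 6;
    memcpy(p, "SYSDBA", 6); p += 6;

    *p++ = isc_dpb_password;
    *p++ = 9;
    memcpy(p, "masterkey", 9); p += 9;

    *p++ = isc_dpb_num_buffers;          /* request a 90-page cache */
    *p++ = 1;
    *p++ = 90;

    /* On 2.5 Alpha 1 SuperServer this attach fails with isc_bad_dpb_content. */
    if (isc_attach_database(status, 0, "localhost:employee.fdb",
                            &db, (short) (p - dpb), dpb))
    {
        fprintf(stderr, "attach failed, status[1] = %ld\n", (long) status[1]);
        return 1;
    }

    isc_detach_database(status, &db);
    return 0;
}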

@firebird-automations

Commented by: @dyemanov

Probably you're right. But, on the other hand, if the client application specifies a concrete cache size, it does so intentionally. It doesn't seem good to silently ignore explicit client intentions. I don't have any strong opinion on this subject.

@firebird-automations

Commented by: @hvlad

If a client application depends on the cache size, it is better to give it the ability to query the effective cache size and let the application decide what to do.
I don't think we must reject an attempt to set the cache size via the DPB.
Also, I don't think SS/CS-dependent code is a good thing.

Just my 0.02 UAH ;)
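
A sketch of the query side Vlad suggests, assuming the isc_database_info item isc_info_num_buffers (the cache page count reported for the current attachment); db is an already attached handle:

#include <ibase.h>

/* Return the page-cache size in effect for an attachment, or -1 on error. */
static long effective_num_buffers(isc_db_handle* db)
{
    ISC_STATUS status[20];
    const char items[] = { isc_info_num_buffers, isc_info_end };
    char result[32];

    if (isc_database_info(status, db, sizeof(items), items,
                          sizeof(result), result))
        return -1;

    /* Result format: item tag, 2-byte little-endian length, value bytes. */
    if (result[0] == isc_info_num_buffers)
    {
        const short len = (short) isc_vax_integer(&result[1], 2);
        return isc_vax_integer(&result[3], len);
    }
    return -1;
}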

@firebird-automations

Commented by: Mark Jones (mjnz)

Yes, a difficult one... I do wonder how often this option is used during connection; it might be rare and thus the problem small. (See below for why we were doing this.)
Although I am starting to think that specifying the value on SuperServer is potentially incorrect behaviour and the default should be to raise an error, perhaps a config setting could override this and choose whether to error, ignore, or allow it. I am going to change our application to avoid this error (also see below).

The application I am working with has multiple databases, each used for different things, so different caching strategies are required for each one.
I expect that since SuperServer has a shared buffer pool which gets created on the first connection to the database, whichever client connects first will be the one controlling the buffer size. That might not be desirable if it is a remote connection with no idea of the server's Firebird version/capabilities/memory etc. (or did I read that somewhere?)

I think a better option for our application is to set the default buffers in the database header (i.e. gfix -buffers) and then adjust them if we need to. That also means we can detect which version is running on the machine that hosts the database when we set the value, and set it either to a high shared value or a lower per-connection value. So that will solve my problem.
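
For reference, the header default described above can be changed with gfix along these lines (buffer count, credentials and database path are placeholders):

gfix -buffers 2048 -user SYSDBA -password masterkey localhost:employee.fdb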

@firebird-automations

Commented by: Sean Leyne (seanleyne)

Mark et al,

Is this functionality we should continue to support, or should we deprecate the parameter and silently ignore it if passed?

Personally, I don't see why the functionality is required. The ability to set the database cache size via the API has been around for a while; I suspect this is a carry-forward from a time when that wasn't the case.

@firebird-automations

Modified by: @AlexPeshkoff

assignee: Alexander Peshkov [ alexpeshkoff ]

@firebird-automations

Commented by: @AlexPeshkoff

Sean, I suppose you are mixing up isc_dpb_set_page_buffers and isc_dpb_num_buffers. The latter does not change the DB header; it is a runtime setting. This functionality makes good sense for CS: it may be useful to have a different cache size per connection (i.e. per process). If a connection is known to be long-lived but have little database activity, it's very good practice in CS to ask for a small cache, and when activity is known to be high, to ask for a big one.

IMO this option makes no sense for SS with its shared cache, but the code contained a bug whereby the cache size could be altered by the first connection to the database (which might be a non-SYSDBA connection).

To my mind we should now decide whether to ignore this parameter silently for SS or raise an error. The second option is the correct one, but it raises backward compatibility problems. We should make a decision sooner rather than later, because the code was backported to the 2.1 codebase and is present in 2.1.3 RC1. Maybe something should be changed before 2.1.3 RC2?

PS. Ignoring DPB parameters is not as awful as it may seem. Per the standard, the server can ignore DPB parameters unknown to it, and we may treat this parameter as 'unknown for SS'.
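
To make the distinction concrete, a sketch of how both parameters can be encoded in a DPB (the add_numeric helper is hypothetical and the values are arbitrary):

#include <ibase.h>

/* Hypothetical helper: append a 4-byte numeric clumplet
   (tag, length, little-endian value) to a DPB under construction. */
static char* add_numeric(char* p, char tag, ISC_LONG value)
{
    *p++ = tag;
    *p++ = 4;
    *p++ = (char) (value & 0xFF);
    *p++ = (char) ((value >> 8) & 0xFF);
    *p++ = (char) ((value >> 16) & 0xFF);
    *p++ = (char) ((value >> 24) & 0xFF);
    return p;
}

void build_dpb_example(void)
{
    char dpb[32];
    char* p = dpb;
    *p++ = isc_dpb_version1;

    /* Persistent: stores the default cache size in the database header
       (the same value "gfix -buffers" changes). */
    p = add_numeric(p, isc_dpb_set_page_buffers, 2048);

    /* Runtime only: requests a cache size for this attachment; useful for
       Classic, rejected by 2.5 Alpha SuperServer before this fix. */
    p = add_numeric(p, isc_dpb_num_buffers, 256);

    /* The range dpb..p would then be passed to isc_attach_database(). */
    (void) (p - dpb);
}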

@firebird-automations

Commented by: @dyemanov

Well, I have no objections to silently ignoring this option in SuperServer.

@firebird-automations

Commented by: @AlexPeshkoff

Firebird does ignore unknown DPB parameters, and the mentioned one is IMO unknown for SS. Therefore let's ignore it.
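
One possible shape of such a change in jrd.cpp, shown only as a sketch of the agreed behaviour (not necessarily what commit eb92d79 actually does): the SuperServer build still reads the clumplet but discards the value instead of posting an error.

case isc_dpb_num_buffers:
    dpb_buffers = rdr.getInt();
#ifndef SUPERSERVER
    // Classic: honour the per-attachment cache size, rejecting absurd values
    if (dpb_buffers < 10)
    {
        ERR_post(Arg::Gds(isc_bad_dpb_content));
    }
#else
    // SuperServer: the cache is shared, so treat the parameter as unknown
    // and silently ignore the requested value
    dpb_buffers = 0;
#endif
    break;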

@firebird-automations

Modified by: @AlexPeshkoff

status: Open [ 1 ] => Resolved [ 5 ]

resolution: Fixed [ 1 ]

Fix Version: 2.5 RC1 [ 10300 ]

@firebird-automations

Modified by: @pcisar

status: Resolved [ 5 ] => Closed [ 6 ]

@firebird-automations

Modified by: @pavel-zotov

QA Status: No test
