Nbackup as online dump [CORE2216] #2644
Comments
Modified by: @AlexPeshkoff. Assignee: Alexander Peshkov [ alexpeshkoff ]
Commented by: Sean Leyne (seanleyne) I do not understand the details of this case; could you please elaborate?
Commented by: @AlexPeshkoff I've understood that it's desired to add a switch making it possible to create a level-0 backup exactly in the format of a Firebird database file. This can seriously speed up restore, because the level-0 backup is typically a big file, and restoring it takes much more time compared with applying level 1, 2, etc. increments over it.
Commented by: @asfernandes I suggest you read the fb-architect thread named NBAK, started on 11-apr-2008, before extending nbackup with simple new switches. In fact, what the user suggested is a way to merge increments with the initial backup. As you can see in that discussion, a similar approach may be used for much more interesting things.
Commented by: Smirnoff Serg (wildsery) Yes, I was asking for something like "page-level online replication (dump)". Adriano, can I read that thread, or is access to fb-architect denied? Please give me the link.
Commented by: @asfernandes Just subscribe to the group and read the archives: http://groups.yahoo.com/group/Firebird-Architect/
Commented by: @samofatov We discussed this issue with Vlad and Dmitry before. One way to do it would be to convert the incremental backup to the .delta file format, and then make the engine do an online merge of the delta into the RO database. The merge would appear instantaneous and atomic to the online readers of the database, and consistency issues should not arise.
Commented by: @hvlad > We discussed this issue with Vlad and Dmitry before. One way to do it would be to convert the incremental backup to the .delta file format, and then make the engine do an online merge of the delta into the RO database. The merge would appear instantaneous and atomic to the online readers of the database, and consistency issues should not arise. Or just copy the .delta file into the specified location when the merge process on the backed-up database is done.
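[Editorial note] The "instantaneous and atomic" merge described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the page size, the representation of a delta as a page-number-to-bytes map, and all names are invented; this is not the engine's actual .delta format or merge implementation.

```python
import os
import shutil

PAGE_SIZE = 4096  # illustrative page size, not a Firebird constant


def merge_delta_atomically(db_path, delta_pages):
    """Merge a delta into a read-only database image.

    delta_pages: dict mapping page number -> page bytes (a toy
    stand-in for a .delta file).  The patched image is built in a
    temporary copy and swapped in with os.replace(), so any process
    opening the file sees either the old image or the new one,
    never a half-merged state.
    """
    tmp = db_path + ".merge"
    shutil.copyfile(db_path, tmp)
    with open(tmp, "r+b") as f:
        for page_no, data in sorted(delta_pages.items()):
            f.seek(page_no * PAGE_SIZE)
            f.write(data)
        f.flush()
        os.fsync(f.fileno())
    os.replace(tmp, db_path)  # atomic rename on POSIX filesystems
```

The copy-patch-rename pattern is only one way to get the "atomic to readers" property; the engine could equally do the merge in place under its own page-level synchronization.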
Commented by: @asfernandes I think nested BEGIN BACKUPs should be allowed. In this case, it will be a really incremental backup, and the user can use previous deltas as binlogs. And of course, a way to merge deltas into the database, and maybe delta+delta files.
Commented by: @AlexPeshkoff It seems to me that what you can get here will, at least for 95%, duplicate the database SHADOW feature. The main difference is that in the case of using shadows we have no delay in replication, i.e. when COMMIT finishes, data is stored in both the primary database file and its shadow.
Commented by: @hvlad Adriano, I don't understand what you wrote about nested BEGIN BACKUPs. Probably it's time to move the discussion to a more appropriate place (fb-architect). Alex, are you able to place a shadow on another machine *and* attach it from another Firebird instance running on that machine?
Commented by: @AlexPeshkoff > are you able to place a shadow on another machine *and* attach it from another Firebird instance running on that machine Certainly, for task 2 incremental backup is preferred. But I must mention that in such a case deltas should be merged by the engine on that second machine. I.e. all we need is to learn to merge deltas while the database (certainly, RO) is online?
Commented by: @hvlad --------
Commented by: @asfernandes The first begin backup creates a delta file. The second begin backup creates another delta file. The first one, as well as the database, is not touched by the engine anymore. It (the first delta) can be manually transferred to another machine and used with the original backed-up database. Or we can have built-in replication.
Commented by: @hvlad Alex> Certainly, for task 2 incremental backup is preferred. But I must mention that in such a case deltas should be merged by the engine on that second machine. I.e. all we need is to learn to merge deltas while the database (certainly, RO) is online? Exactly, as Nickolay wrote. And we need to convert the backup file into the delta format, transparently for the user.
Commented by: @hvlad Adriano, ---- The second begin backup creates another delta file. The first one, as well as the database, is not touched by the engine anymore. It (the first delta) can be manually transferred to another machine and used with the original backed-up database. Or we can have built-in replication. The delta file has no pages changed since the last backup! It has only pages changed during the backup process, i.e. the delta itself is useless on the target machine. We need to convert the incremental backup file into the .delta format, put it on the target machine and start the merge process there. Or teach the merge process to work with a "delta" in nbackup format.
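[Editorial note] Vlad's point is that an incremental backup and a .delta file both boil down to a set of changed pages, just in different containers, so the conversion he proposes is a repackaging step. A minimal sketch under stated assumptions: the record layout (page count followed by <page number, page data> pairs) is invented here, and neither side matches the real nbackup or .delta on-disk formats.

```python
import struct

PAGE_SIZE = 4096  # illustrative only


def convert_increment_to_delta(increment_records, delta_path):
    """Repackage incremental-backup records into a delta-style file.

    increment_records: iterable of (page_number, page_bytes) pairs,
    i.e. the pages changed since the previous backup.  A real
    converter would parse the nbackup file header and emit the
    engine's .delta format; here both sides are toy structures.
    Returns the number of distinct pages written.
    """
    pages = {}
    for page_no, data in increment_records:
        if len(data) != PAGE_SIZE:
            raise ValueError("unexpected page size")
        pages[page_no] = data  # last write wins, as in a page map
    with open(delta_path, "wb") as f:
        f.write(struct.pack("<I", len(pages)))
        for page_no in sorted(pages):
            f.write(struct.pack("<I", page_no))
            f.write(pages[page_no])
    return len(pages)
```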
Commented by: @asfernandes Nbackup's primary backup is a locked database. What I described already works, if you leave the database locked. But there is no safe way to freeze a delta to transfer or archive it, without stopping the server. PS: I do not know what the overhead would be of leaving a db locked, writing changes to deltas.
Commented by: @hvlad Adriano> What I described already works, if you leave the database locked. Adriano> But there is no safe way to freeze a delta to transfer or archive it, without stopping the server. Adriano> PS: I do not know what the overhead would be of leaving a db locked, writing changes to deltas.
Commented by: @asfernandes Vlad, I don't understand what you mean... I guess my idea is totally wrong, as always. :-) Anyway, what I describe improves nbackup (which currently can't do really incremental backups, I mean, from the last backup, unless you increment the backup level each time), and allows built-in or manual replication as well as the other exotic things that we discussed in fb-architect (db branches, flashback). Moreover, with simple changes.
Commented by: @hvlad Adriano> Vlad, I don't understand what you mean... Adriano> I guess my idea is totally wrong, as always. :-) Adriano> Anyway, what I describe improves nbackup (which currently can't do really incremental backups, I mean, from the last backup, unless you increment the backup level each time), and allows built-in or manual replication as well as the other exotic things that we discussed in fb-architect (db branches, flashback). I see nothing wrong with a continuously incremented backup level. As for branches and flashback, I already told my opinion in fb-architect: it is not trivial (if possible at all) to implement branches, and impossible to implement flashback queries using nbackup.
Commented by: @asfernandes > I see nothing wrong with a continuously incremented backup level. This does not sound good (the best way) to me when you think about many levels (per hour, for example, or even more often) for replication. > As for branches and flashback, I already told my opinion in fb-architect: it is not trivial (if possible at all) to implement branches, and impossible to implement flashback queries using nbackup If you have a set of frozen files, you can always read all old page versions (flashback) or some old page versions (branches) from them. The trunk is the individual last delta. Branches are very useful, for example, in the case where you have a big development database (perhaps copied from production) but want many developers to have their own sandbox to work in.
Commented by: @hvlad > > I see nothing wrong with a continuously incremented backup level. This ("does not sound good") is not an argument ;) Do you have something more concrete? > > As for branches and flashback, I already told my opinion in fb-architect: it is not trivial (if possible at all) to implement branches, and impossible to implement flashback queries using nbackup > If you have a set of frozen files, you can always read all old page versions (flashback) or some old page versions (branches) from them. The trunk is the individual last delta. Excuse me, but I don't want to repeat here what was already said in another place. Better to continue the discussion in fb-architect, if you wish.
Modified by: @dyemanov. Assignee: Alexander Peshkov [ alexpeshkoff ] => Vlad Khorsun [ hvlad ]. Fix Version: 4.0 Alpha 1 [ 10731 ]
Commented by: @hvlad Let's document the changes done for v4:
1. GUID-based physical backup: uses the backup GUID of the target database as the GUID of the previous physical backup (instead of a backup level):
   nbackup -B[ACKUP] <level>|<GUID> <source database> [<backup file>]
2. In-place restore: applies a backup file to the target database:
   nbackup -I[NPLACE] -R[ESTORE] <target database> <backup file>
Example:
a) get the backup GUID of the target database:
b) produce an incremental backup using the backup GUID:
   nbackup -B {8C519E3A-FC64-4414-72A8-1B456C91D82C} <source database> <backup file>
c) apply the increment to the target database:
   nbackup -I -R <target database> <backup file>
Modified by: @pavel-zotov. Status: Resolved [ 5 ] => Closed [ 6 ]
Submitted by: Smirnoff Serg (wildsery)
Relates to CORE2990
Votes: 2
Borrow from the experience of IB2007 and enhance the nbackup functionality.
For now, we can't take the "level 0" backup and put the "level 1" increment on it without copying both into a new file. And after that we can't put the "level 2" increment on the result.
I suggest creating the level-0 backup as an RO database, and allowing the level-1 increment to be applied to the same file.
Of course, when the DBA turns off RO, incremental backup is denied for that database file.
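[Editorial note] The request above, applying a level-1 increment directly onto the level-0 backup file instead of copying everything into a new file, amounts to patching changed pages in place, at a cost proportional to the increment rather than to the whole level-0 image. A minimal sketch; the page size and the increment layout as (page number, page bytes) pairs are assumptions, not the actual nbackup file format.

```python
PAGE_SIZE = 4096  # illustrative, not nbackup's actual page size


def apply_increment_in_place(level0_path, increment):
    """Patch changed pages directly into a level-0 backup image.

    increment: iterable of (page_number, page_bytes) pairs.  Only
    the changed pages are written, so the work done is proportional
    to the size of the increment, not to the size of the whole
    level-0 file.
    """
    with open(level0_path, "r+b") as f:
        for page_no, data in increment:
            if len(data) != PAGE_SIZE:
                raise ValueError("unexpected page size")
            f.seek(page_no * PAGE_SIZE)
            f.write(data)
```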