Issue Details

Key: CORE-2216
Type: Improvement
Status: Closed
Resolution: Fixed
Priority: Major
Assignee: Vlad Khorsun
Reporter: Smirnoff Serg
Votes: 2
Watchers: 5
Firebird Core

Nbackup as online dump

Created: 28/Nov/08 05:40 AM   Updated: 24/Jun/17 08:38 PM
Component/s: NBACKUP
Affects Version/s: None
Fix Version/s: 4.0 Alpha 1

Issue Links:
Relate

Target: 3.0 RC2
QA Status: Done successfully


Description
Borrow from the experience of IB2007 and enhance the nbackup functionality.

Currently we can't take the "level 0" backup and apply the "level 1" increment to it without copying both into a new file; and after that we can't apply the "level 2" increment to the result.
I suggest creating the level 0 backup as a read-only (RO) database, and allowing the level 1 increment to be applied to that same file.

Of course, once the DBA turns off the RO flag, incremental backups would be denied for that database file.
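
For context, here is a minimal sketch of the limitation being described (paths and file names are hypothetical): with level-based nbackup, an increment can only be combined with the level 0 backup by restoring the whole chain into a brand-new database file.

# Take a full (level 0) backup and, later, a level 1 increment
nbackup -B 0 /data/db.fdb /backups/db.0.nbk
nbackup -B 1 /data/db.fdb /backups/db.1.nbk
# The increment cannot be applied to db.0.nbk in place; restoring
# the chain always materializes a new database file:
nbackup -R /restore/db.new.fdb /backups/db.0.nbk /backups/db.1.nbk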

Comments
Sean Leyne added a comment - 28/Nov/08 01:14 PM
I do not understand the details of this case; could you please elaborate?

Alexander Peshkov added a comment - 28/Nov/08 01:20 PM
I've understood that the desire is to add a switch making it possible to create the level-0 backup exactly in the format of a Firebird database file. This could seriously speed up restores, because the level 0 backup is typically a big file, and restoring it takes much more time compared with applying levels 1, 2, etc. over it.
What I also do not understand is why it should be RO.

Adriano dos Santos Fernandes added a comment - 28/Nov/08 01:30 PM
I suggest you read the fb-architect thread named NBAK, started on 11-Apr-2008, before extending nbackup with simple new switches.

In fact, what the user suggested is a way to merge increments with the initial backup. As you can see in that discussion, a similar approach may be used for much more interesting things.

Smirnoff Serg added a comment - 28/Nov/08 01:37 PM
Yes, I was asking for something like "page-level online replication (dump)".

Adriano, can I read that thread, or is access to fb-architect denied? Please give me the link.

Adriano dos Santos Fernandes added a comment - 28/Nov/08 01:46 PM
Just subscribe to the group and read the archives: http://groups.yahoo.com/group/Firebird-Architect/

Nickolay Samofatov added a comment - 03/Dec/08 10:20 PM
We discussed this issue with Vlad and Dmitry before. One way to do it would be to convert the incremental backup to the .delta file format, and then make the engine do an online merge of the delta into the RO database. The merge would appear instantaneous and atomic to the online readers of the database, and consistency issues should not arise.

Vlad Khorsun added a comment - 04/Dec/08 06:45 AM
> We discussed this issue with Vlad and Dmitry before. One way to do it would be to convert the incremental backup to the .delta file format, and then make the engine do an online merge of the delta into the RO database. The merge would appear instantaneous and atomic to the online readers of the database, and consistency issues should not arise.

Or just copy the .delta file into a specified location when the merge process on the backed-up database is done.
For example, we could extend the ALTER DATABASE END BACKUP syntax with an optional clause SAVE DELTA TO <file_name>.
On the target read-only database we could run something like ALTER DATABASE MERGE DELTA <file_name>, which would check whether the specified .delta file can be applied to this database and merge its contents into the main database file.

Adriano dos Santos Fernandes added a comment - 04/Dec/08 06:51 AM
I think nested BEGIN BACKUPs should be allowed. In that case it would be a truly incremental backup, and the user could use the previous deltas as binlogs.

And of course, a way to merge deltas into the database, and maybe delta+delta files.

Alexander Peshkov added a comment - 04/Dec/08 07:46 AM
It seems to me that at least 95% of what you get here would duplicate the database SHADOW feature. The main difference is that with shadows we have no delay in replication, i.e. when COMMIT finishes, the data is stored in both the primary database file and its shadow.
Therefore: why duplicate an old, working feature?
I understand very well that the ability to have an incremental set of backups is a very useful thing. But as soon as we start to merge them as early as possible, the whole idea of incremental backup becomes broken: we can't recover from logical failures that happened during the last day, because we no longer have yesterday's state of the database. Exactly like with shadows.

Vlad Khorsun added a comment - 04/Dec/08 07:59 AM
Adriano, I don't understand what you wrote about nested BEGIN BACKUPs. It's probably time to move the discussion to a more appropriate place (fb-architect).

Alex, are you able to place a shadow on another machine *and* attach it with another Firebird instance running on that machine?

Alexander Peshkov added a comment - 04/Dec/08 08:09 AM
> are you able to place a shadow on another machine
Yes.

> *and* attach it with another Firebird instance running on that machine
No.

Certainly, for the second task incremental backup is preferable. But I must mention that in such a case the deltas should be merged by the engine on that second machine. I.e., all we need is to teach the engine to merge deltas while the database (certainly RO) is online?

Vlad Khorsun added a comment - 04/Dec/08 08:09 AM
--------
Or just copy the .delta file into a specified location when the merge process on the backed-up database is done.
For example, we could extend the ALTER DATABASE END BACKUP syntax with an optional clause SAVE DELTA TO <file_name>.
--------
It's stupid, forget about it.

Adriano dos Santos Fernandes added a comment - 04/Dec/08 08:10 AM
The first BEGIN BACKUP creates a delta file.

A second BEGIN BACKUP creates another delta file. The first one, as well as the database, is not touched by the engine anymore. It (the first delta) can be manually transferred to another machine and used with the original backed-up database. Or we could have built-in replication.

Vlad Khorsun added a comment - 04/Dec/08 08:12 AM
Alex> Certainly, for the second task incremental backup is preferable. But I must mention that in such a case the deltas should be merged by the engine on that second machine. I.e., all we need is to teach the engine to merge deltas while the database (certainly RO) is online?

Exactly, as Nickolay wrote. And we need to convert the backup file into the delta format, transparently for the user.

Vlad Khorsun added a comment - 04/Dec/08 08:15 AM
Adriano,

----
The first BEGIN BACKUP creates a delta file.

A second BEGIN BACKUP creates another delta file. The first one, as well as the database, is not touched by the engine anymore. It (the first delta) can be manually transferred to another machine and used with the original backed-up database. Or we could have built-in replication.
----

The delta file does not contain the pages changed since the last backup! It contains only the pages changed during the backup process. I.e., the delta itself is useless on the target machine. We need to convert the incremental backup file into the .delta format, put it on the target machine and start the merge process there. Or teach the merge process to work with a "delta" in nbackup format.

Adriano dos Santos Fernandes added a comment - 04/Dec/08 08:48 AM
Nbackup's primary backup is a locked database.

What I described already works, if you leave the database locked. But there is no safe way to freeze a delta in order to transfer or archive it without stopping the server.

PS: I do not know what the overhead of leaving a DB locked, writing changes to deltas, would be.
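
For reference, the "leave the database locked" workflow mentioned here, as a minimal sketch (paths are hypothetical): nbackup -L freezes the main file so that page changes accumulate in the .delta file, and nbackup -N merges the delta back in.

# Lock the main database file; subsequent page changes go to db.fdb.delta
nbackup -L /data/db.fdb
# While locked, the main file is frozen and safe to copy
cp /data/db.fdb /backups/db.copy.fdb
# Unlock: the engine merges the accumulated delta back into the main file
nbackup -N /data/db.fdb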

Vlad Khorsun added a comment - 04/Dec/08 08:56 AM
Adriano> What I described already works, if you leave the database locked.
Oh...

Adriano> But there is no safe way to freeze a delta in order to transfer or archive it without stopping the server.
So, you abandon this idea? ;)

Adriano> PS: I do not know what the overhead of leaving a DB locked, writing changes to deltas, would be.
At first, while the .delta is small, writes into it will be faster than into the database file.

Adriano dos Santos Fernandes added a comment - 04/Dec/08 09:09 AM
Vlad, I don't understand what you mean... I guess my idea is totally wrong, as always. :-)

Anyway, what I describe improves nbackup (which currently can't do truly incremental backups, I mean incremental from the last backup, unless you increment the backup level each time), and allows built-in or manual replication, as well as the other exotic things that we discussed in fb-architect (DB branches, flashback).

Moreover, with simple changes.

Vlad Khorsun added a comment - 04/Dec/08 09:30 AM
Adriano> Vlad, I don't understand what you mean...
:(

Adriano> I guess my idea is totally wrong, as always. :-)
We are trying to find the best possible solution, so every idea is welcome. Maybe it's me who doesn't understand something ;)

Adriano> Anyway, what I describe improves nbackup (which currently can't do truly incremental backups, I mean incremental from the last backup, unless you increment the backup level each time), and allows built-in or manual replication, as well as the other exotic things that we discussed in fb-architect (DB branches, flashback).

I see nothing wrong with a continuously incremented backup level.

As for branches and flashback, I already gave my opinion in fb-architect: it is not trivial (if possible at all) to implement branches, and impossible to implement flashback queries, using nbackup.

Adriano dos Santos Fernandes added a comment - 04/Dec/08 09:52 AM
> I see nothing wrong with a continuously incremented backup level.

This does not sound good (like the best way) to me when you think about many levels (one per hour, for example, or even shorter intervals) for replication.

> As for branches and flashback, I already gave my opinion in fb-architect: it is not trivial (if possible at all) to implement branches, and impossible to implement flashback queries, using nbackup.

If you have a set of frozen files, you can always read all old page versions (flashback) or some old page versions (branches) from them. The trunk is the individual last delta.

Branches are very useful, for example, when you have a big development database (perhaps copied from production) but want many developers to each have their own sandbox to work in.

Vlad Khorsun added a comment - 04/Dec/08 10:11 AM
> > I see nothing wrong with a continuously incremented backup level.
>
> This does not sound good (like the best way) to me when you think about many levels (one per hour, for example, or even shorter intervals) for replication.

This ("does not sound good") is not an argument ;) Do you have something more concrete?

> > As for branches and flashback, I already gave my opinion in fb-architect: it is not trivial (if possible at all) to implement branches, and impossible to implement flashback queries, using nbackup.

> If you have a set of frozen files, you can always read all old page versions (flashback) or some old page versions (branches) from them. The trunk is the individual last delta.
>
> Branches are very useful, for example, when you have a big development database (perhaps copied from production) but want many developers to each have their own sandbox to work in.

Excuse me, but I don't want to repeat here what was already said elsewhere. Better to continue the discussion in fb-architect, if you wish.

Vlad Khorsun added a comment - 11/Jun/17 10:38 PM
Let's document the changes done for v4:

1. GUID-based physical backup: uses the backup GUID of the target database as the GUID of the previous physical backup (instead of the backup level):

nbackup -B[ACKUP] <level>|<GUID> <source database> [<backup file>]

2. In-place restore: applies a backup file to the target database:

nbackup -I[NPLACE] -R[ESTORE] <target database> <backup file>


Example:

a) get the backup GUID of the target database:
gstat -h <target database>
...
    Variable header data:
        Database backup GUID: {8C519E3A-FC64-4414-72A8-1B456C91D82C}


b) produce an incremental backup using the backup GUID:

nbackup -B {8C519E3A-FC64-4414-72A8-1B456C91D82C} <source database> <backup file>


c) apply the increment to the target database:

nbackup -I -R <target database> <backup file>
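
Putting the three steps together, a consolidated sketch (paths are hypothetical, and it assumes gstat's header output matches the sample above):

# Read the backup GUID from the standby (target) database header
GUID=$(gstat -h /standby/db.fdb | grep 'Database backup GUID' | grep -o '{[^}]*}')
# Take an incremental backup of the production database relative to that GUID
nbackup -B "$GUID" /prod/db.fdb /backups/inc.nbk
# Apply the increment in place to the standby database
nbackup -I -R /standby/db.fdb /backups/inc.nbk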