More robust gbak [CORE2338] #2762
Comments
Modified by: Don Young (clxbase) — revised the description: reworded the opening ("a high frequency data updating system" → "a database system") and the closing request ("port the duplicated data" → "port the problematic data ... when met errors"). The final description appears in full below.
Commented by: @AlexPeshkoff Did you try the -I switch (deactivate indexes during restore)?
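Alex's suggestion translates to a restore invocation along these lines (a sketch only — the file names and credentials are placeholders, not taken from the report):

```sh
# Restore with all indexes left inactive, so a duplicate primary key
# cannot abort the restore. The duplicates must then be found and
# removed manually before re-activating the primary key index.
gbak -c -i -v -user SYSDBA -password masterkey backup.fbk restored.fdb
```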
Commented by: @WarmBooter The IB BackupSurgeon utility, from IBSurgeon, can help with this too.
Commented by: Sean Leyne (seanleyne) In normal operation it is not possible for a duplicate primary key to be added to a database. This suggests that you are not performing your backup/restore cycle correctly and are allowing the restored database to be accessed while the restore process is still running. That is the only way for duplicate primary keys to be created within the database (the constraint must be disabled, and a primary key constraint is only disabled during a restore cycle). This case seems to be a "won't fix", as you are not following reasonable processes.
Commented by: @AlexPeshkoff Sean, you are almost correct - except cases called
Commented by: Cosmin Apreutesei (cosmin_ap2) There's another request somewhere around here to make gbak validate data on backup, not restore -- you may want to vote for that :) I got my fair share of trash backups myself -- learned my lesson, and now I only do SQL dumps with Firebird. Nothing beats plain-text data.
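Cosmin's plain-text approach can be sketched with a small helper that renders rows as INSERT statements. This is an illustration, not a Firebird utility: the function name, the tuple-based row format, and the (deliberately simplified) quoting rules are all assumptions.

```python
def rows_to_inserts(table, columns, rows):
    """Render rows as INSERT statements (very simplified quoting).

    Numbers are emitted as-is, None becomes NULL, and everything else
    is single-quoted with embedded quotes doubled. A real dump tool
    would also need to handle dates, BLOBs, character sets, etc.
    """
    stmts = []
    collist = ", ".join(columns)
    for row in rows:
        vals = ", ".join(
            "NULL" if v is None
            else str(v) if isinstance(v, (int, float))
            else "'" + str(v).replace("'", "''") + "'"
            for v in row
        )
        stmts.append(f"INSERT INTO {table} ({collist}) VALUES ({vals});")
    return stmts

print(rows_to_inserts("T", ["ID", "NAME"], [(1, "O'Hara")])[0])
# -> INSERT INTO T (ID, NAME) VALUES (1, 'O''Hara');
```

A dump in this form can be reloaded with isql, and duplicate keys show up as individual statement failures instead of aborting the whole restore.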
Commented by: Claudio Valderrama C. (robocop) Did you try the -ONE switch in gbak when restoring?
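As I understand Claudio's suggestion, the -o (one table at a time) switch makes gbak commit the restore after each table rather than in a single transaction at the end, so a failure in one table need not lose the others. A sketch with placeholder names and credentials:

```sh
# Commit the restore per table instead of all at once.
gbak -c -o -v -user SYSDBA -password masterkey backup.fbk restored.fdb
```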
Commented by: @pmakowski I agree with Alex. For example, if you are using a PLAN referencing indexes in a stored procedure, you can't use the -I switch :(
Submitted by: Don Young (clxbase)
Votes: 4
I developed a database system based on Firebird 2.0.4. This system has a very high frequency of data inserts and updates. The database file grew rapidly; however, that is not the real trouble. I back up and restore the database periodically, using gbak, which keeps the database fast. Then one day, during a restore, gbak reported that it had found duplicate primary keys and refused to continue, and all the data in the table was lost. We have tried all kinds of methods and still cannot get the data back, only because it has duplicate keys. Why not make gbak more fault-tolerant? I think it would be much better if gbak wrote the problematic data to an external log file when it meets errors and kept going, rather than just stopping there and refusing to do anything else.
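The behaviour the report asks for can be sketched as follows. This is not gbak code: `restore_rows`, the tuple-based row format, and the in-memory "log" are hypothetical names used only to illustrate the log-and-continue idea.

```python
import io


def restore_rows(rows, key_index, reject_log):
    """Insert rows keyed on rows[key_index], logging duplicates.

    Instead of aborting on the first duplicate primary key (the
    behaviour complained about in the report), the offending row is
    written to reject_log and the restore continues.
    """
    table = {}
    rejected = []
    for row in rows:
        key = row[key_index]
        if key in table:
            rejected.append(row)
            reject_log.write(f"duplicate key {key!r}: {row!r}\n")
        else:
            table[key] = row
    return table, rejected


# Usage: one duplicate key is logged, the other two rows survive.
log = io.StringIO()
rows = [(1, "a"), (2, "b"), (1, "a-dup")]
table, rejected = restore_rows(rows, 0, log)
print(len(table), len(rejected))  # -> 2 1
```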