Make it possible to restore compressed .nbk files without explicitly decompressing them [CORE4462] #4782
Comments
Modified by: @AlexPeshkoff. assignee: Alexander Peshkov [ alexpeshkoff ]
Commented by: @AlexPeshkoff Added switch -DEcompress with a parameter containing a command line that decompresses a .nbk file to stdout. The symbol @ in that command line is replaced with the name of the file to be decompressed. If that symbol is not present in the command line, the file name (preceded by a space) is appended to the end of the command line. Naturally, if the command line contains more than one word, it should be quoted according to your shell's rules. Samples of how the backups mentioned in the description can be restored:
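The substitution rule above can be sketched as follows. This is an illustrative sketch only, not nbackup's actual implementation; the function name build_command is hypothetical:

```c
/* Illustrative sketch (not nbackup's actual code) of the rule described
 * above: if the decompress command line contains '@', replace it with
 * the file name; otherwise append the file name, preceded by a space. */
#include <stdio.h>
#include <string.h>

static void build_command(char *out, size_t outsz,
                          const char *cmd, const char *file)
{
    const char *at = strchr(cmd, '@');
    if (at)
        /* Emit the part before '@', the file name, then the rest. */
        snprintf(out, outsz, "%.*s%s%s",
                 (int)(at - cmd), cmd, file, at + 1);
    else
        /* No placeholder: append " <filename>" at the end. */
        snprintf(out, outsz, "%s %s", cmd, file);
}
```

For example, the command line 'gzip -dc' with file test.nbk.0.gz would yield 'gzip -dc test.nbk.0.gz'.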
Commented by: Sean Leyne (seanleyne) Wouldn't it make sense to also add a new switch so that the 'backup' step generates a compressed file directly, without the need to pipe the output?
Commented by: @AlexPeshkoff I will check whether that is possible. If a database is open at the time the backup is created, that causes some problems when forking.
Commented by: Damyan Ivanov (dam) Hi, while piped decompression seems to be implemented in the 3.0.x series (on POSIX systems), it has a bug which leads to an error when more than one file is given to -restore (e.g. when a multi-level restore is performed): $ nbackup -user sysdba -decompress 'gzip -dc' -restore test-restored.fdb test.nbk.0.gz test.nbk.1.gz The reason is that for any incremental file after the level 0 file, nbackup tries to skip a page-size block at the start of the file using seek() (nbackup.cpp line 1430), but because of the -decompress involved, the file handle is a pipe, which leads to an error in seek(). Here's a patch which emulates the seek with a series of read()s, so that the file pointer ends up one page size past the start of the file.
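The idea behind the patch can be sketched like this: on POSIX, lseek() fails with ESPIPE on a pipe, so the skip has to be emulated by reading and discarding bytes. A minimal sketch under that assumption (skip_bytes is an illustrative name, not taken from the actual patch):

```c
/* Minimal sketch (illustrative, not the actual patch): skip 'count'
 * bytes on a file descriptor that may be a pipe, where lseek() would
 * fail with ESPIPE, by reading and discarding the data instead. */
#include <unistd.h>

static int skip_bytes(int fd, size_t count)
{
    char buf[4096];
    while (count > 0)
    {
        size_t chunk = count < sizeof buf ? count : sizeof buf;
        ssize_t got = read(fd, buf, chunk);
        if (got <= 0)
            return -1; /* read error or unexpected end of stream */
        count -= (size_t)got;
    }
    return 0; /* exactly the requested number of bytes consumed */
}
```

After a successful call with count equal to the page size, the stream position is one page past the start, matching what the original seek() achieved on a regular file.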
Modified by: Damyan Ivanov (dam). Attachment: nbackup-decompress-level1.patch [ 13201 ]
Commented by: @dyemanov Should this ticket be closed now? AFAIU, v3.0.5 contains this improvement for both POSIX and Windows.
Commented by: Sean Leyne (seanleyne) Dmitry, I would make one suggestion: define default switches (to decompress the files) for the 7z and zstd methods, so that only "-de <algorithm>" needs to be specified, i.e. "nbackup -r -de 7z {db filename} {nbackup filename1} ..." or "nbackup -r -de zstd {db filename} {nbackup filename1} ...". Extra decompression options/details should only be required for special cases.
Commented by: @AlexPeshkoff Sean, in the current state that's a bad suggestion. Why these two particular utilities? Others may prefer other tools.
Commented by: @AlexPeshkoff @dmitry - yes, we can close it now.
Modified by: @dyemanov. status: Open [ 1 ] => Resolved [ 5 ]; resolution: Fixed [ 1 ]; Fix Version: 3.0.5 [ 10885 ]; Fix Version: 4.0 Beta 2 [ 10888 ]
Modified by: @pavel-zotov. status: Resolved [ 5 ] => Resolved [ 5 ]; QA Status: Done successfully; Test Specifics: [Platform (Windows/Linux) specific]
Modified by: @pavel-zotov. status: Resolved [ 5 ] => Closed [ 6 ]
Submitted by: @AlexPeshkoff
Jira_subtask_outward CORE4463
Attachments:
nbackup-decompress-level1.patch
Votes: 2
The ability to compress nbackup output on the fly helps to save backup time and avoids the need for disk space to hold intermediate uncompressed files:
nbackup -b 0 employee stdout | bzip2 >e.b0.bz2
nbackup -b 1 employee stdout | bzip2 >e.b1.bz2
Unfortunately, that trick does not work when a restore is needed, because the utility requires a set of uncompressed files:
nbackup -r e.fdb e.b0 e.b1
It would be great to make nbackup decompress the files on the fly, one by one, without wasting resources on intermediate files.
Commits: a79117c 3b43bc9 f165f6f f219283 81c4800 685b5f1 FirebirdSQL/fbt-repository@4bf1f13 FirebirdSQL/fbt-repository@a95c4f2