Index corruption. Validation put "missing entries" message into firebird.log [CORE3515] #3873
Comments
Modified by: @hvlad | assignee: Vlad Khorsun [ hvlad ]
Modified by: @hvlad | status: Open [ 1 ] => Resolved [ 5 ] | resolution: Fixed [ 1 ] | Fix Version: 2.5.1 [ 10333 ] | Fix Version: 3.0 Alpha 1 [ 10331 ]
Commented by: Sean Leyne (seanleyne): Edit for formatting/readability
Modified by: Sean Leyne (seanleyne) | description edited
Commented by: @hvlad: Backported into v2.1.5
Modified by: @hvlad | Fix Version: 2.1.5 [ 10420 ]
Modified by: @pcisar | status: Resolved [ 5 ] => Closed [ 6 ]
Modified by: @pavel-zotov | QA Status: No test
Modified by: @pavel-zotov | status: Closed [ 6 ] => Closed [ 6 ] | QA Status: No test => Cannot be tested
Submitted by: @hvlad
Related to CORE3921
Imagine a table T with two indices: PK (unique, which is important) and IDX2 (in my case a very poorly selective index with only 2 distinct values, but this is not essential).
So we have:
CREATE TABLE T (
ID INT NOT NULL,
VAL INT
);
ALTER TABLE T ADD CONSTRAINT PK PRIMARY KEY (ID);
CREATE INDEX IDX2 ON T (VAL);
INSERT INTO T VALUES (1, 0);
COMMIT;
The sequence of actions is as follows:
1. tx1: insert into T values (1, 0)
2. tx1: VIO_store
returns OK, new record has recno = 1
3. tx1: IDX_store
4. tx1: insert_key
index == PK, key == 1, recno == 1
returns duplicate error
i.e. we have a unique violation in index PK
note that there was no attempt to insert a key into index IDX2
5. tx1: VIO_backout
6. tx1: delete_record ... OK
7. tx2: insert into T values (2, 0)
8. tx2: VIO_store ...
returns OK, new record has recno = 1, yes, the same recno!
9. tx2: IDX_store
10. tx2: insert_key
index == PK, key == 2, recno == 1
returns OK
11. tx2: insert_key
index == IDX2, key == 0, recno == 1
returns OK
12. tx2: commit
13. tx1: IDX_garbage_collect
14. tx1: BTR_remove
index == PK, key == 1, recno == 1
15. tx1: BTR_remove
index == IDX2, key == 0, recno == 1
here we removed an index entry that is not ours!
after (4) : recno = 1, PK : entry {key = 1, rec = 1}, IDX2 : no entries
after (6) : no records, PK : entry {1, 1}, IDX2 : no entries
after (12) : recno = 1, PK : entries {1, 1} and {2, 1}, IDX2 : entry {0, 1}
after (14) : recno = 1, PK : entry {2, 1}, IDX2 : entry {0, 1}
after (15) : recno = 1, PK : entry {2, 1}, IDX2 : no entries
and finally we have a missing entry in index IDX2 for record 1.
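To make the mechanism concrete, here is a toy C++ model of steps (1)-(15); it is only an illustration, not Firebird code. Index entries carry nothing but a (key, recno) pair, so tx1's deferred cleanup of IDX2 matches, and removes, the entry tx2 just created for the reused record number. The pre-existing committed row (ID = 1 at another recno), which causes tx1's duplicate-key error, is omitted here just as in the state listing above.

```cpp
// Toy model of the race (illustration only, not Firebird code): index
// entries are identified solely by the (key, recno) pair, so a deferred
// cleanup pass cannot tell its own stale entry apart from an entry another
// transaction created for a reused recno.
#include <cstdio>
#include <set>
#include <utility>

using Entry = std::pair<int, int>;  // (key, recno)

int main()
{
    std::set<Entry> pk, idx2;

    // (1)-(4): tx1 stores recno 1 and inserts its PK key; the duplicate
    // error fires before any IDX2 entry is written.
    pk.insert({1, 1});

    // (5)-(6): tx1 backs out: the record is deleted from the data page,
    // freeing recno 1, but the PK entry {1, 1} is left for later cleanup.

    // (8)-(11): tx2 reuses recno 1 and inserts entries into both indices.
    pk.insert({2, 1});
    idx2.insert({0, 1});

    // (13)-(15): tx1's deferred IDX_garbage_collect removes by (key, recno).
    pk.erase({1, 1});    // its own stale PK entry: correct
    idx2.erase({0, 1});  // tx2's live IDX2 entry: index corruption!

    std::printf("PK entries: %zu, IDX2 entries: %zu (record 1 lost from IDX2)\n",
                pk.size(), idx2.size());  // prints: PK entries: 1, IDX2 entries: 0
}
```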
The issue happens when all of these conditions are met at the same time:
a) the first insert violates an indexed constraint (unique or foreign key)
b) this indexed constraint is not the physically last index, so some indices have no entries for the failed record
c) VIO_backout deletes the record and, at the same time, a new record is inserted into the same slot on the data page and is assigned the same record number
d) at least one index from the second group in (b) has the same key value in the new record as in the failed record
e) the second insert completes before the backout starts to remove the index entries of the failed record
I think the problem could happen not only with two inserts, but with two updates too (the first update failed, backout started, and the second update came at an inappropriate moment). It also seems possible to get "blob not found" errors for the same reason.
To fix the issue I propose to delay the physical record removal (in the case of backout) until all of its index entries have been removed. This will prevent a concurrent insert from creating a new record with the same record number until the backout has maintained the indices.
To do this, I propose splitting backout into two phases. In the first phase, do not remove the record from disk, but mark it with the current transaction number and the rpb_gc_active flag. Then clean up the indices, and afterwards remove the backed-out record version completely (see the sketch after the list below).
This seems safe because:
a) marking the record with the current transaction number prevents concurrent backouts
b) marking the record with the rpb_gc_active flag allows readers to skip this record version and read the previous one (which will become the primary record version after the backout completes)
c) if our process dies during backout, the next process will see our transaction as dead and will start all over again
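Below is a minimal C++ sketch of the proposed two-phase backout. It only illustrates the ordering of the phases; the record_param layout, the rpb_gc_active flag value, and the two helper functions are simplified placeholders, not the actual Firebird structures or signatures.

```cpp
// Simplified sketch of the two-phase backout; record_param and the helpers
// below are placeholders, not Firebird's real structures or signatures.
struct record_param {
    int rpb_transaction_nr;  // transaction number stamped on this version
    int rpb_flags;           // version flags
};

const int rpb_gc_active = 0x1000;  // illustrative flag value only

// Stand-ins for the real engine routines (the BTR_remove loop and
// delete_record from the trace above).
static void remove_index_entries(record_param*) { /* BTR_remove per index */ }
static void delete_record_version(record_param*) { /* free slot on data page */ }

void backout_two_phase(record_param* rpb, int our_tx)
{
    // Phase 1: do NOT delete the record yet. Stamping it with our
    // transaction number prevents concurrent backouts, and rpb_gc_active
    // lets readers skip this version and use the previous one.
    rpb->rpb_transaction_nr = our_tx;
    rpb->rpb_flags |= rpb_gc_active;

    // While the record still occupies its slot, the recno cannot be
    // reused by a concurrent insert, so index cleanup removes only
    // entries that really belong to the backed-out version.
    remove_index_entries(rpb);

    // Phase 2: now it is safe to remove the record version physically.
    // If the process dies before this point, the next process sees our
    // transaction as dead and restarts the whole backout.
    delete_record_version(rpb);
}
```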
The patch passed my tests and has also been running in production for over a month at the site of Pavel Zotov, who reported the bug and greatly helped to investigate it.
Commits: af4fab8 5ac9733 945a1bd