More problems with transaction numbers overflowing 32-bit signed integer and corrupting database [CORE2348] #2771
Comments
Modified by: Ertan Altekin (altekin). priority: Major [ 3 ] => Critical [ 2 ]; Version: 2.1.1 [ 10223 ]; Component: Engine [ 10000 ]
Commented by: @hvlad Why did you clone an old, closed ticket?
Commented by: Ertan Altekin (altekin) Should I open a new ticket for the same bug?
Commented by: @hvlad Do you have a reproducible test case? Since you provided zero information, I don't see what to do with this.
Commented by: @hvlad Additional fixes for TPC were committed.
Modified by: @hvlad. status: Open [ 1 ] => Resolved [ 5 ]; resolution: Fixed [ 1 ]; Fix Version: 2.5 Beta 1 [ 10251 ]; Fix Version: 2.0.0 [ 10091 ] =>; Fix Version: 1.5.4 [ 10100 ] =>
Modified by: @dyemanov. summary: CLONE - Transaction numbers can overflow 32-bit signed integer and corrupt database => More problems with transaction numbers overflowing 32-bit signed integer and corrupting database
Commented by: Ertan Altekin (altekin) I tested the fix (2.5 Beta 1); it works as a workaround, but once the transaction limit is exceeded, backup is not possible.
Commented by: @hvlad Did you read the error message and make the database read-only before backing it up?
Commented by: Ertan Altekin (altekin) OK, my mistake. It works with a read-only database. Thanks.
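For anyone hitting the same wall, the read-only-then-backup sequence discussed above can be sketched with Firebird's standard gfix/gbak tools. The database paths, backup file name, and credentials below are placeholders, not values from this ticket:

```shell
# Put the database into read-only mode so no further transaction
# numbers are consumed while it is being backed up.
gfix -user SYSDBA -password masterkey -mode read_only /data/mydb.fdb

# Take the backup; restoring from a backup resets the transaction counters.
gbak -b -user SYSDBA -password masterkey /data/mydb.fdb /backup/mydb.fbk

# Restore into a fresh database file, then switch it back to read-write.
gbak -c -user SYSDBA -password masterkey /backup/mydb.fbk /data/mydb_new.fdb
gfix -user SYSDBA -password masterkey -mode read_write /data/mydb_new.fdb
```

These are operational commands against a live server, shown only to illustrate the sequence, not to be run as-is.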
Commented by: @dyemanov I would prefer to be able to wrap the 32-bit value and reuse the values.
Commented by: @hvlad
Ertan Altekin> Is it possible to implement the tx number as Int64 (to avoid backup/restore)?
Dmitry Yemanov> I would rather prefer to be able to wrap the 32-bit value and reuse the values.
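The "wrap and reuse" idea Dmitry describes amounts to serial-number arithmetic, where counters keep comparing correctly across the wrap point as long as two live values are never more than half the range apart. A minimal Python sketch of that technique (an illustration only, not Firebird's implementation):

```python
MASK = 0xFFFFFFFF  # work modulo 2**32

def next_txn(n):
    """Advance a 32-bit transaction counter, wrapping at the top."""
    return (n + 1) & MASK

def is_newer(a, b):
    """True if counter a was issued after counter b, assuming the two
    are fewer than 2**31 steps apart (RFC 1982-style serial arithmetic)."""
    return 0 < ((a - b) & MASK) < (1 << 31)

print(next_txn(0xFFFFFFFF))     # 0: the counter wraps instead of overflowing
print(is_newer(1, 0xFFFFFFFE))  # True: 1 is "after" the wrap point
```

The catch, and the reason wrapping is harder than it looks, is that every stored record version carries a transaction number, so old versions must be garbage-collected before their numbers can be reissued.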
Commented by: @hvlad Backported into 2.1.3 |
Modified by: @hvlad. Fix Version: 2.1.3 [ 10302 ]
Commented by: @livius2 What does this fix do? Are transaction numbers reused, as Dmitry Yemanov posted? I asked about this on the support group, but they told me they don't know exactly what this fix does.
Commented by: @hvlad No, transaction numbers are not reused. This fix was about correctly handling the case when tx numbers are close to the limit.
Commented by: @livius2 Is there any plan to solve this at all? I have a system with ~18,000,000 transactions per day, and the counter reaches the limit after 4 months of continuous work.
Commented by: @hvlad Currently we intend to make transaction numbers unsigned in FB3. That will make the maximum transaction number twice as large as it is now. As for your system: you were already advised on the support list to start fewer transactions. That is a much, much better solution for your system's performance. The amount of data handled by a modern system has no correlation with the number of transactions necessary to handle it.
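The numbers in this exchange check out. A quick sanity check of the signed and unsigned 32-bit limits against the reported load:

```python
signed_max = 2**31 - 1    # current limit: 2,147,483,647
unsigned_max = 2**32 - 1  # unsigned 32-bit: 4,294,967,295, roughly twice as much

per_day = 18_000_000      # transaction rate reported in the thread
days_to_limit = signed_max / per_day
print(round(days_to_limit))  # 119 days, i.e. about 4 months
```

So going unsigned only buys another ~4 months at this rate, which is why the discussion keeps returning to wrapping or 64-bit counters.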
Commented by: @livius2 I followed the suggestions and now start a read-only transaction on the client side to do some reports. To make this more understandable, think about this sample of my system: you have a home, and you have an alarm system. Extend this sample to a bigger place, which must have more devices.
Commented by: @hvlad
> To guard your home you must have 10 devices which send data at a 1-minute interval (some send every 1 minute, some every 10).
Sorry, but it is very, very naive (at least) to store every signal using a separate transaction... This approach could kill the performance of any DBMS.
Commented by: @livius2 Performance is not my problem: this works very fast with FB 2.1.x, but it runs on a RAM disk. And you say it is naive to run a transaction on every signal. How can you do this without transactions in a transactional database?
Commented by: @hvlad
> Performance is not my problem: this works very fast with FB 2.1.x, but it runs on a RAM disk.
> And you say it is naive to run a transaction on every signal.
> How can you do this without transactions in a transactional database?
Commented by: @livius2
>> Because you have a performance PROBLEM using HDDs, isn't it?
OK, you say to join signals on the server side; this was analyzed earlier. I simplified this sample to show the situation. When we receive a signal (driver-based), we analyze it, go into the database to check something, and take some specific action on it. We also tried a multi-tier architecture, but "update conflicts" are a problem; rerunning a simple transaction is simple and the time cost is smaller. Think also about web development: do you really need to complicate your system to work around this limitation?
Commented by: @hvlad Karol, believe me or not, not all tasks should be implemented the way they sound at first glance. As for web development... do you know many shops with 1,000,000 tx per minute?
Commented by: @livius2 Vlad, I must ask: how many signals do the systems you have seen handle, and do they test them under growing load? About web development, I did not say 1,000,000 per minute, only per day ;-) and in my opinion they lose money because of a slow start in the real world; we only hit the limit of some counter ;-) And I think this cannot be avoided: you must start with a 64-bit counter.
Commented by: @livius2 I rethought this and simplified it (there is no need for a transaction 0 occurrence). I have a solution for this problem: a "reuse transaction id" feature. When the oldest active transaction id reaches e.g. the value 1,500,000,001 (the threshold should be as big as possible, but still leave a big gap to the maximum integer value), the id reuse could be started. One more modification: the code which currently gets the most recent record versions should check whether "reset flag" = 2.
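The proposal above can be sketched as an epoch-based reset: once the oldest active transaction id passes a threshold well below the 32-bit limit, bump a "reset flag" (an epoch) and restart the counter, so record versions are ordered by (epoch, txn id) instead of by txn id alone. All names here are hypothetical; this scheme was never adopted by Firebird as described:

```python
RESET_THRESHOLD = 1_500_000_001  # value suggested in the comment above

def next_id(epoch, counter, oldest_active):
    """Hand out the next (epoch, txn_id) pair, resetting the counter once
    even the oldest active transaction has passed the threshold."""
    if oldest_active >= RESET_THRESHOLD:
        return epoch + 1, 1          # new epoch, counter restarts at 1
    return epoch, counter + 1        # no reset needed yet

def version_is_newer(v1, v2):
    """Compare record versions tagged as (epoch, txn_id) tuples."""
    return v1 > v2  # tuple comparison: epoch first, then txn id

print(next_id(1, 5, 1_500_000_001))  # (2, 1): threshold crossed, counter reset
```

Waiting for the *oldest active* transaction to pass the threshold guarantees no in-flight transaction still holds a pre-reset id when reuse begins, which is the subtle part of any such scheme.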
Commented by: @hvlad Karol, yes, you can be sure it was read. But if you want a discussion, use fb-devel for it. The tracker is not the appropriate place.
Modified by: @pavel-zotov. QA Status: No test
Modified by: @pavel-zotov. status: Resolved [ 5 ] => Resolved [ 5 ]; QA Status: No test => Cannot be tested
Submitted by: Ertan Altekin (altekin)
Is related to QA30
Is related to CORE1042
Is related to QA229
Commits: 6db905f b482b15