Author Topic: What is the Reason For 0.4.25 Forks?


Offline vikram



Quote from: vikram

We have not done a thorough analysis or inventory, but there were a number of transactions (not many--definitely less than 200 in the entire blockchain) that were duplicates under the new rules yet valid under the old rules. That is to say, they had exactly the same operations and expiration time but different signatures as you said. Almost all of such transactions were quite old; there was at least one extremely recent one but it was very small and did not look like an attack. It is difficult to say whether anything else appeared as an attack without doing a full review of the offending transactions. It is not a priority, but I can take a closer look if you think that is important.

Can you post the duplicate transactions here so anybody can review them?

If so, does the new code allow for such possibilities in the future (using some nonce or something) or does the new client somehow construct the transactions to not ever be identical?

In practice, nothing really changes for the end-user because subsequent identical transactions they construct will likely have different expiration dates. I do not know how those previously mentioned transactions with identical operations and expiration dates were created. There is also an additional reserved field in transactions that is currently not used, but could include a nonce if this ever becomes a problem in the future.

@Vikram, can you point me to one pair of those transactions?
Were they TITAN transactions or simple ones?

I've posted a list of the duplicates here: https://github.com/BitShares/bitshares/issues/1142#issuecomment-68958340
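
To make the above concrete, here is a rough sketch (hypothetical names and types, not the actual BitShares structures) of what "duplicate under the new rules" means: only the operations and the expiration are compared, signatures are ignored, and the currently unused reserved field is where a nonce could go if identical re-sends ever need to be distinguished.

Code:
#include <cstdint>
#include <ctime>
#include <optional>
#include <vector>

// Hypothetical stand-ins for the real types.
struct toy_operation {
    uint8_t              type = 0;   // e.g. transfer, market order, ...
    std::vector<uint8_t> payload;    // serialized operation data
    bool operator==(const toy_operation& o) const {
        return type == o.type && payload == o.payload;
    }
};

struct toy_transaction {
    std::time_t                expiration = 0;  // wallet default: now + 1 hour
    std::optional<uint64_t>    reserved;        // unused today; could hold a nonce
    std::vector<toy_operation> operations;
    // Signatures are carried alongside but are NOT part of the duplicate check.
};

// Duplicate under the new rules: identical content, regardless of signatures.
bool is_duplicate(const toy_transaction& a, const toy_transaction& b) {
    return a.expiration == b.expiration &&
           a.reserved   == b.reserved   &&
           a.operations == b.operations;
}

Since the wallet stamps each transaction with an expiration offset from its construction time (one hour by default, per later posts in this thread), two independently constructed transactions with the same operations will normally differ in expiration, which is why the duplicates described above are surprising.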

Offline ElMato

If so, does the new code allow for such possibilities in the future (using some nonce or something) or does the new client somehow construct the transactions to not ever be identical?

In practice, nothing really changes for the end-user because subsequent identical transactions they construct will likely have different expiration dates. I do not know how those previously mentioned transactions with identical operations and expiration dates were created. There is also an additional reserved field in transactions that is currently not used, but could include a nonce if this ever becomes a problem in the future.

@Vikram, can you point me to one pair of those transactions?
Were they TITAN transactions or simple ones?




merockstar (Guest)
I remember a thread a few months back where a guy created an account to come say he had accidentally made bitshares out of thin air, and the community kind of received it with skepticism.

trying to find the thread now but it's elusive.

I wonder if the one duplicate was that guy.

Offline vikram



Quote from: vikram

We have not done a thorough analysis or inventory, but there were a number of transactions (not many--definitely less than 200 in the entire blockchain) that were duplicates under the new rules yet valid under the old rules. That is to say, they had exactly the same operations and expiration time but different signatures as you said. Almost all of such transactions were quite old; there was at least one extremely recent one but it was very small and did not look like an attack. It is difficult to say whether anything else appeared as an attack without doing a full review of the offending transactions. It is not a priority, but I can take a closer look if you think that is important.

Can you post the duplicate transactions here so anybody can review them?



Sent from my ALCATEL ONE TOUCH 997D

Sure, I can compile a list as part of: https://github.com/BitShares/bitshares/issues/1142

Offline liondani



Quote from: vikram

We have not done a thorough analysis or inventory, but there were a number of transactions (not many--definitely less than 200 in the entire blockchain) that were duplicates under the new rules yet valid under the old rules. That is to say, they had exactly the same operations and expiration time but different signatures as you said. Almost all of such transactions were quite old; there was at least one extremely recent one but it was very small and did not look like an attack. It is difficult to say whether anything else appeared as an attack without doing a full review of the offending transactions. It is not a priority, but I can take a closer look if you think that is important.

Can you post the duplicate transactions here so anybody can review them?



Sent from my ALCATEL ONE TOUCH 997D


Offline arhag

This fix was rushed out because once the existence of the security flaw was publicly disclosed, we felt under extreme time pressure because any technically competent attacker reading our public discussions would have been able to use the unpatched flaw to do serious damage.

We need to do a better job of telling all internal developers and community contributors not to discuss security vulnerabilities over public channels until a fix is deployed.  A little more caution along these lines would quite possibly have resulted in a patch developed over a longer time frame, with more thorough testing, under less pressure.

Yeah, things could have gone better in how this was handled. Some suggestions:
  • Once bytemaster realized there was a vulnerability thanks to ElMato's question submitted to the forum on Wednesday, December 10, he probably shouldn't have responded to that thread with a post containing content identical to the OP in https://github.com/BitShares/bitshares/issues/1129. Just speaking for myself, I didn't realize there was a vulnerability when reading ElMato's question, but when I read bytemaster's post I immediately realized the extent of this vulnerability (and I am sure so would many other technical people who happened to read that thread). Thankfully, the situation was corrected quickly by deleting the thread, but it would have been better if bytemaster never made that post, or revealed it on GitHub either, until the fix was completed, well tested, and ready to be pushed to the delegates.

  • GitHub revealed too much information throughout this whole process. Although the forum thread was short-lived, bytemaster's post (https://github.com/BitShares/bitshares/issues/1129) to GitHub on Thursday 2:44am EST revealed it again to the public and this time stayed up for all to see. Although that post was accompanied by a fix (https://github.com/BitShares/bitshares/commit/d0c6fa174ee88b706e6c08a102cde5bddfe4212d) to the canonical signature issue, there were still two problems with this. First, the canonical signature fix wasn't an actual fix for the bigger issue of transaction malleability, since duplicate transactions with unique IDs could still be created using superfluous signatures. Further discussion with the dev team and spending some time thinking it through would have revealed this, because eventually they came to that conclusion and fixed the problem properly with this fix (https://github.com/BitShares/bitshares/commit/ad4c5d851b90843d1a20d8ca9ab410baf5aafb1b) 9 hours later (well, the upgrade code was still messed up, but at least the validation code for after block 1249400 actually solved the transaction malleability vulnerability properly). Second, even with these commits, the code was still not done and certainly not properly tested. My point with all of this is that the reveal of the vulnerability on GitHub through the issue submission and these commits was premature for a security vulnerability. I don't think these things should be revealed publicly on GitHub until a proper, well-tested fix is ready to go for the delegates to download. Discussion and development for a fix to a critical security issue should be done in private IMO. Does GitHub provide a private mode so that the internal devs can still use these useful tools without leaking information to the public prematurely? They can always reveal the issues/discussion/commits to the public later on after the well-tested upgrade that fixes the issue is ready to go for the delegates.
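
To illustrate the superfluous-signature point above with toy code (this is not the real fc/BitShares serialization, just a sketch with made-up names): if the transaction id covers the signatures, appending one extra valid-but-unnecessary signature yields a fresh id and slips past an id-based duplicate check no matter how strictly each individual signature is canonicalized, whereas an id computed over the unsigned content alone is unaffected.

Code:
#include <cstdint>
#include <iostream>
#include <vector>

using bytes = std::vector<uint8_t>;

// Toy hash (FNV-1a) standing in for the real digest function.
uint64_t toy_hash(const bytes& data) {
    uint64_t h = 1469598103934665603ull;
    for (uint8_t b : data) { h ^= b; h *= 1099511628211ull; }
    return h;
}

struct toy_signed_tx {
    bytes              unsigned_part;   // operations + expiration (+ reserved)
    std::vector<bytes> signatures;
};

// "Old" style id: the signatures are part of what gets hashed.
uint64_t id_including_sigs(const toy_signed_tx& tx) {
    bytes all = tx.unsigned_part;
    for (const auto& s : tx.signatures) all.insert(all.end(), s.begin(), s.end());
    return toy_hash(all);
}

// Duplicate check on content only: signatures are irrelevant.
uint64_t id_excluding_sigs(const toy_signed_tx& tx) {
    return toy_hash(tx.unsigned_part);
}

int main() {
    toy_signed_tx tx  {{1, 2, 3, 4}, {{9, 9, 9}}};
    toy_signed_tx dup = tx;
    dup.signatures.push_back({7, 7, 7});   // superfluous extra signature

    std::cout << std::boolalpha
              << "old-style ids differ:    " << (id_including_sigs(tx) != id_including_sigs(dup)) << "\n"
              << "content ids still match: " << (id_excluding_sigs(tx) == id_excluding_sigs(dup)) << "\n";
    return 0;
}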

I don't intend to point fingers or assign blame with this post. I just would like us to figure out the proper procedures to follow for the future in case another security vulnerability is discovered. We should of course also discuss what the proper standards for testing should be for fixes to important security vulnerabilities (and the differences in standards for different classes of vulnerabilities, depending on severity). A balance must be struck with the testing standards between getting functional-enough code out that fixes the issue (and doesn't have bugs that make the issue worse) and minimizing the time that the vulnerable version of the software is left running in the open.

Offline xeroc

So .. now it's about time to have a public mail address for reporting security issues .. that mail address HAS to be publicly accessible ..
put it
 - in the footer of bitshares.org
 - the wiki
 - the subreddits
 - the client

Also please offer a bounty!

Offline bytemaster

Transaction time is there so that various caches are bounded. 
For the latest updates checkout my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline vikram

I suppose it could reject any transaction that has an expiration time set more than 48 hours into the future relative to the present time (time at which the delegate's client is considering the transaction for inclusion in the next block), although I don't really understand the purpose of that limitation.

But what I mean is couldn't I today create and sign (but not broadcast) a transaction that spends some of my balance and is set to expire in January 2, 2015, and then wait until January 1, 2015 to broadcast it and have it accepted into the blockchain?

Both of these are correct. I believe the intention is simply that a transaction cannot be signed and then float around on the P2P network for an unbounded amount of time without being applied for some reason. bytemaster can chime in on the original motivation if I am missing something.
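
As a minimal sketch of how such a check could look at the point where a delegate considers a transaction for the next block (hypothetical names, using the 48-hour bound quoted above): a far-future expiration simply keeps the transaction out until it falls inside the window, and a past expiration keeps it out for good.

Code:
#include <ctime>

// Hypothetical check, not the actual BitShares code. "now" would be the
// timestamp of the block the transaction is being considered for.
constexpr std::time_t MAX_EXPIRATION_WINDOW = 48 * 60 * 60;   // 48 hours, in seconds

bool expiration_acceptable(std::time_t tx_expiration, std::time_t now) {
    if (tx_expiration <= now)                         return false; // already expired
    if (tx_expiration > now + MAX_EXPIRATION_WINDOW)  return false; // too far in the future
    return true;
}

Under this rule, a transaction pre-signed today with a January 2, 2015 expiration would be rejected until that expiration comes within 48 hours, and could then be broadcast and included until it expires, which is consistent with the "both of these are correct" answer above.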

Offline arhag

There is an upper bound of 48 hours that the blockchain allows.

How is it possible for the blockchain to have a limit on the expiration time when it cannot possibly know when the transaction was actually created? I suppose it could reject any transaction that has an expiration time set more than 48 hours into the future relative to the present time (time at which the delegate's client is considering the transaction for inclusion in the next block), although I don't really understand the purpose of that limitation. But what I mean is couldn't I today create and sign (but not broadcast) a transaction that spends some of my balance and is set to expire in January 2, 2015, and then wait until January 1, 2015 to broadcast it and have it accepted into the blockchain?


Offline theoretical

However, the 3 of us that put together the upgrade all still managed to completely overlook this problem due to the stress of trying to fix the malleability security issue as fast as possible. The moral of the story is--security issue or not--we need a better process for live upgrades:

We are reviewing our options, but the conclusion is clear.   DevShares needs to come out as soon as possible because it is the most likely way we would have caught this bug.

This fix was rushed out because once the existence of the security flaw was publicly disclosed, we felt under extreme time pressure because any technically competent attacker reading our public discussions would have been able to use the unpatched flaw to do serious damage.

We need to do a better job of telling all internal developers and community contributors not to discuss security vulnerabilities over public channels until a fix is deployed.  A little more caution along these lines would quite possibly have resulted in a patch developed over a longer time frame, with more thorough testing, under less pressure.

The next time a security vulnerability is disclosed before a patch is available, we need to consider more carefully the risks of a botched fix when deciding how much time to spend writing / testing a patch.  We don't want the fix to cause more damage than the exploit.

We also need to address technical debt in the testing side of things.  DevShares will help; so will more tests and improvements to our testing infrastructure.  I've been working on the latter.

I think there's little value in assigning blame to particular people. I'm pretty sure that those who screwed up (myself included) have realized it and have figured out what they need to do differently next time. Overall I'm not even sure we can attribute the root cause to human error; rather, it's the lack of a well-defined process for dealing with security vulnerabilities. A process for code review and testing of release candidates would also be useful.
« Last Edit: December 15, 2014, 12:48:48 am by drltc »
BTS- theoretical / PTS- PZxpdC8RqWsdU3pVJeobZY7JFKVPfNpy5z / BTC- 1NfGejohzoVGffAD1CnCRgo9vApjCU2viY / the delegate formerly known as drltc / Nothing said on these forums is intended to be legally binding / All opinions are my own unless otherwise noted / Take action due to my posts at your own risk

Offline xeroc

Looking at the initial theft report at https://bitsharestalk.org/index.php?topic=10877.msg143174#msg143174 it seems that his private keys were compromised. This is not related to the transaction malleability issue that 0.4.25 and 0.4.26 were intended to address.
That's how I read it too .. can't see how the community/developers can help with that case ..
I still feel sorry for his loss .. 1M BTS is a shitload of money :(

Offline vikram

I can understand it is not a priority since there doesn't seem to be anyone in the community claiming they had any funds stolen or unauthorized withdrawals, but it is incredibly peculiar that there were duplicates (assuming the expiration time is always set to a fixed offset of the current time) and it would be interesting to investigate further.
That's not quite true! We have at least one official case with stolen funds... (from @educatewarrior, for 1 million BTS!!!) I would seriously investigate further whether it is somehow related!!! And if yes, our community MUST help the victims get their "stolen"(?) funds back! For every problem, there exists at least one solution... I am very happy we found the security flaw soon enough ...


Sent from my ALCATEL ONE TOUCH 997D

Looking at the initial theft report at https://bitsharestalk.org/index.php?topic=10877.msg143174#msg143174 it seems that his private keys were compromised. This is not related to the transaction malleability issue that 0.4.25 and 0.4.26 were intended to address.

Offline carpet ride


That's not quite true! We have at least one official case with stolen funds... (from @educatewarrior, for 1 million BTS!!!) I would seriously investigate further whether it is somehow related!!! And if yes, our community MUST help the victims get their "stolen"(?) funds back! For every problem, there exists at least one solution... I am very happy we found the security flaw soon enough ...


What proof was there?
All opinions are my own. Anything said on this forum does not constitute an intent to create a legal obligation between myself and anyone else.
Check out my blog: http://CertainAssets.com
Buy the ticket, take the ride.

Offline liondani

I can understand it is not a priority since there doesn't seem to be anyone in the community claiming they had any funds stolen or unauthorized withdrawals, but it is incredibly peculiar that there were duplicates (assuming the expiration time is always set to a fixed offset of the current time) and it would be interesting to investigate further.
That's not quite true! We have at least one official case with stolen funds... (from @educatewarrior, for 1 million BTS!!!) I would seriously investigate further whether it is somehow related!!! And if yes, our community MUST help the victims get their "stolen"(?) funds back! For every problem, there exists at least one solution... I am very happy we found the security flaw soon enough ...


Sent from my ALCATEL ONE TOUCH 997D

Offline vikram

I guess whether I am worried about it or not depends on how the expiration time is calculated. Can the expiration time be left unset in the transactions? Did the client at any point in time in BitShares history leave the expiration time blank? Does the client by default use some fixed offset after the time the transaction is constructed as the expiration time? Is the granularity of the expiration time 10 seconds or less?

If the old client didn't set an expiration time, then it is totally understandable why there could be multiple duplicates (in the sense that they are the same operation) that are each legitimate. But if the client always sets an expiration time and uses a fixed offset from the current time as the expiration time, I find the probability of creating a transaction with the same operations and same expiration time to be incredibly unlikely. I can understand it is not a priority since there doesn't seem to be anyone in the community claiming they had any funds stolen or unauthorized withdrawals, but it is incredibly peculiar that there were duplicates (assuming the expiration time is always set to a fixed offset of the current time) and it would be interesting to investigate further.

I would have to do some investigation to be absolutely certain, but I am fairly sure that we have always had the expiration time and it must always be set to a legitimate value.

The granularity is in seconds, and the current wallet uses 1 hour from the time of construction by default. This default was changed in the past at least twice I believe. There is an upper bound of 48 hours that the blockchain allows. There is also the "wallet_set_transaction_expiration_time" which users can use to change their wallet's default.

As you said, this is not a priority without any reports of theft, but I agree it is quite strange. I've made a note about it: https://github.com/BitShares/bitshares/issues/1142 so that I remember to take a look when I have some time.
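
Tying these parameters to bytemaster's earlier remark that the expiration exists so that "various caches are bounded", a hypothetical sketch (not the actual implementation): a node only needs to remember recently seen transaction ids until they expire, and since no expiration can be more than 48 hours out, that memory stays bounded.

Code:
#include <ctime>
#include <string>
#include <unordered_map>

// Hypothetical duplicate index, not the actual BitShares code.
class recent_transaction_index {
public:
    // Returns false if this id was already seen and has not yet expired.
    bool insert(const std::string& tx_id, std::time_t expiration, std::time_t now) {
        prune(now);
        return seen_.emplace(tx_id, expiration).second;
    }

private:
    // Forget everything whose expiration has passed; with a 48-hour cap on
    // expirations, the map can never grow without bound.
    void prune(std::time_t now) {
        for (auto it = seen_.begin(); it != seen_.end(); ) {
            if (it->second <= now) it = seen_.erase(it);
            else ++it;
        }
    }

    std::unordered_map<std::string, std::time_t> seen_;   // id -> expiration
};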

Offline arhag

We have not done a thorough analysis or inventory, but there were a number of transactions (not many--definitely less than 200 in the entire blockchain) that were duplicates under the new rules yet valid under the old rules. That is to say, they had exactly the same operations and expiration time but different signatures as you said. Almost all of such transactions were quite old; there was at least one extremely recent one but it was very small and did not look like an attack. It is difficult to say whether anything else appeared as an attack without doing a full review of the offending transactions. It is not a priority, but I can take a closer look if you think that is important.

If so, does the new code allow for such possibilities in the future (using some nonce or something) or does the new client somehow construct the transactions to not ever be identical?

In practice, nothing really changes for the end-user because subsequent identical transactions they construct will likely have different expiration dates. I do not know how those previously mentioned transactions with identical operations and expiration dates were created. There is also an additional reserved field in transactions that is currently not used, but could include a nonce if this ever becomes a problem in the future.

Thank you for your responses.

I guess whether I am worried about it or not depends on how the expiration time is calculated. Can the expiration time be left unset in the transactions? Did the client at any point in time in BitShares history leave the expiration time blank? Does the client by default use some fixed offset after the time the transaction is constructed as the expiration time? Is the granularity of the expiration time 10 seconds or less?

If the old client didn't set an expiration time, then it is totally understandable why there could be multiple duplicates (in the sense that they are the same operation) that are each legitimate. But if the client always sets an expiration time and uses a fixed offset from the current time as the expiration time, I find the probability of creating a transaction with the same operations and same expiration time to be incredibly unlikely. I can understand it is not a priority since there doesn't seem to be anyone in the community claiming they had any funds stolen or unauthorized withdrawals, but it is incredibly peculiar that there were duplicates (assuming the expiration time is always set to a fixed offset of the current time) and it would be interesting to investigate further.

However, the 3 of us that put together the upgrade all still managed to completely overlook this problem due to the stress of trying to fix the malleability security issue as fast as possible. The moral of the story is--security issue or not--we need a better process for live upgrades:

Right, that is understandable. It was a good learning experience at least, and fortunately there was no permanent damage or loss of funds. Hopefully this experience will help improve the procedures to follow for testing and quality assurance of upgrades. I'm excited for DevShares.
« Last Edit: December 14, 2014, 10:13:10 pm by arhag »

Offline xeroc

+100% to vikram ... you are THE man .. plus the rest of the team :)

We are all just human and learn by mistakes ..

Excited to see devshares soon :)

Offline vikram

If so, does the new code allow for such possibilities in the future (using some nonce or something) or does the new client somehow construct the transactions to not ever be identical?

In practice, nothing really changes for the end-user because subsequent identical transactions they construct will likely have different expiration dates. I do not know how those previously mentioned transactions with identical operations and expiration dates were created. There is also an additional reserved field in transactions that is currently not used, but could include a nonce if this ever becomes a problem in the future.

Offline vikram

We applied the change to the main network and discovered that there was one transaction that was considered a duplicate under the new system that was processed by the old system.

Can you provide any more details about that? Was that a "legitimate" duplicate in the sense that it had identical operations to a previous transaction sent to the blockchain but was signed with different signatures? If so, does the new code allow for such possibilities in the future (using some nonce or something) or does the new client somehow construct the transactions to not ever be identical? If it wasn't "legitimate", was it a transaction that used a non-canonical signature or alternatively stuffed in a superfluous signature to create a unique transaction ID? If so, does that mean it was an intentional attack transaction?

We have not done a thorough analysis or inventory, but there were a number of transactions (not many--definitely less than 200 in the entire blockchain) that were duplicates under the new rules yet valid under the old rules. That is to say, they had exactly the same operations and expiration time but different signatures as you said. Almost all of such transactions were quite old; there was at least one extremely recent one but it was very small and did not look like an attack. It is difficult to say whether anything else appeared as an attack without doing a full review of the offending transactions. It is not a priority, but I can take a closer look if you think that is important.

Delegates started upgrading.   Duplicate transaction checking was "disabled" until the specified fork block.  Delegates started producing blocks that included duplicate transactions and thus were rejected by the non upgraded clients.   

So v0.4.25 wasn't using the old duplicate transaction checking code until block 1249400 when it would then switch to the new one? It was simply not checking for duplicate transactions AT ALL until block 1249400? And then when people submitted a transaction it appeared once to a delegate client but then lingered around and the next delegate client included the same transaction again in their block because they didn't have the code to check it was a duplicate? Am I understanding that correctly? I think I am missing something.

This description is correct. Obviously extremely stupid to do in hindsight, and does not even match how we normally do upgrades where we always keep the old logic and then synchronously switch all nodes to the new logic. However, the 3 of us that put together the upgrade all still managed to completely overlook this problem due to the stress of trying to fix the malleability security issue as fast as possible. The moral of the story is--security issue or not--we need a better process for live upgrades:

We are reviewing our options, but the conclusion is clear.   DevShares needs to come out as soon as possible because it is the most likely way we would have caught this bug.
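
A condensed sketch of the difference (illustrative only, with made-up function names): the usual hard-fork pattern keeps the old rule active until every node switches at the fork block, whereas 0.4.25 shipped with the check effectively turned off before block 1249400.

Code:
#include <cstdint>

constexpr uint32_t FORK_BLOCK = 1249400;

// What 0.4.25 effectively did, per the explanation above: no duplicate
// checking at all before the fork block, so upgraded delegates could produce
// blocks containing duplicates that non-upgraded clients rejected.
bool passes_dup_check_broken(uint32_t block_num, bool dup_under_old, bool dup_under_new) {
    (void)dup_under_old;
    if (block_num < FORK_BLOCK) return true;            // checking "disabled": the bug
    return !dup_under_new;
}

// The usual live-upgrade pattern: keep the old rule until the fork block,
// then have every node switch to the new rule at the same height.
bool passes_dup_check_correct(uint32_t block_num, bool dup_under_old, bool dup_under_new) {
    if (block_num < FORK_BLOCK) return !dup_under_old;  // old rule until the fork
    return !dup_under_new;                              // new rule afterwards
}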

Offline vikram

The reason is bad software engineering practices. Lack of solid test framework. It's irresponsible and should be fixed yesterday.

Bytemaster and I agree that this issue demonstrates a glaring hole in our process for upgrading a live network. As he said:

We are reviewing our options, but the conclusion is clear.   DevShares needs to come out as soon as possible because it is the most likely way we would have caught this bug.

If you have other suggestions, I am happy to hear them.

Offline arhag

We applied the change to the main network and discovered that there was one transaction that was considered a duplicate under the new system that was processed by the old system.

Can you provide any more details about that? Was that a "legitimate" duplicate in the sense that it had identical operations to a previous transaction sent to the blockchain but was signed with different signatures? If so, does the new code allow for such possibilities in the future (using some nonce or something) or does the new client somehow construct the transactions to not ever be identical? If it wasn't "legitimate", was it a transaction that used a non-canonical signature or alternatively stuffed in a superfluous signature to create a unique transaction ID? If so, does that mean it was an intentional attack transaction?


Delegates started upgrading.   Duplicate transaction checking was "disabled" until the specified fork block.  Delegates started producing blocks that included duplicate transactions and thus were rejected by the non upgraded clients.   

So v0.4.25 wasn't using the old duplicate transaction checking code until block 1249400 when it would then switch to the new one? It was simply not checking for duplicate transactions AT ALL until block 1249400? And then when people submitted a transaction it appeared once to a delegate client but then lingered around and the next delegate client included the same transaction again in their block because they didn't have the code to check it was a duplicate? Am I understanding that correctly? I think I am missing something.

Offline bytemaster

Can I safely assume that 0.4.26 will stay around for some days to weeks now? I will be leaving for the airport for a 15h flight ..

Yes, that better be the case. :)

Offline xeroc

Can I safely assume that 0.4.26 will stay around for some days to weeks now? I will be leaving for the airport for a 15h flight ..

Offline emski

@Bytemaster
I wonder if you could build the testnet into the clients.
Essentially double every action the client performs -> once for the real network and once for the test network.
If testing is paid, a lot of users/delegates would be willing to enable that feature, and BTS would have a testing environment much closer to the real one.
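
Very roughly, the proposal could look like this (entirely hypothetical interfaces, not an existing BitShares API, just to make the idea concrete): every action the client broadcasts to the real network is also mirrored, best effort, to the test network.

Code:
#include <memory>
#include <string>
#include <utility>

// Hypothetical interfaces for illustration only.
struct network_endpoint {
    virtual ~network_endpoint() = default;
    virtual void broadcast(const std::string& serialized_tx) = 0;
};

class mirrored_client {
public:
    mirrored_client(std::unique_ptr<network_endpoint> mainnet,
                    std::unique_ptr<network_endpoint> testnet)
        : mainnet_(std::move(mainnet)), testnet_(std::move(testnet)) {}

    void broadcast(const std::string& serialized_tx) {
        mainnet_->broadcast(serialized_tx);                 // the action that matters
        if (testnet_) {
            try { testnet_->broadcast(serialized_tx); }     // mirror to the test network
            catch (...) { /* a testnet hiccup must never affect the real action */ }
        }
    }

private:
    std::unique_ptr<network_endpoint> mainnet_, testnet_;
};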

Offline emski

We are reviewing our options, but the conclusion is clear.   DevShares needs to come out as soon as possible because it is the most likely way we would have caught this bug.

That is what I wanted to see. Sorry if the questions I asked were a bit unpleasant or harsh. Bugs happen regardless of how much you think over the issue(s).

Testing and release procedures might expose a lot of the bugs.

Offline bitmeat

The reason is bad software engineering practices. Lack of solid test framework. It's irresponsible and should be fixed yesterday.

Offline sschechter

Thanks for the explanation
BTSX: sschechter
PTS: PvBUyPrDRkJLVXZfvWjdudRtQgv1Fcy5Qe

Offline abit

We implemented a change in how we check for duplicate transactions.  Tested it.  It worked.
We applied the change to the main network and discovered that there was one transaction that was considered a duplicate under the new system that was processed by the old system.
Vikram disabled duplicate checking until the scheduled fork block so that we could validate the old chain.
Vikram tested syncing up with the old chain and it worked. 
Vikram tested producing a block with the upgraded version and it worked.
Announce upgrade.

Delegates started upgrading.   Duplicate transaction checking was "disabled" until the specified fork block.  Delegates started producing blocks that included duplicate transactions and thus were rejected by the non upgraded clients.   

So the testing we needed was the "DevShares" test net, because the bug was in the "handoff / upgrade" code and didn't reveal itself until after we had a chain with multiple delegates running the new code.

In retrospect the issue could have been easily avoided with a little thinking, but we were rushing to get a security fix in.

We are reviewing our options, but the conclusion is clear.   DevShares needs to come out as soon as possible because it is the most likely way we would have caught this bug.
Ah, this is the reason. So that's how 0.4.25 went wrong.
I was wondering why my balance got doubled  :P :P
BitShares committee member: abit
BitShares witness: in.abit

Offline bytemaster

We implemented a change in how we check for duplicate transactions.  Tested it.  It worked.
We applied the change to the main network and discovered that there was one transaction that was considered a duplicate under the new system that was processed by the old system.
Vikram disabled duplicate checking until the scheduled fork block so that we could validate the old chain.
Vikram tested syncing up with the old chain and it worked. 
Vikram tested producing a block with the upgraded version and it worked.
Announce upgrade.

Delegates started upgrading.   Duplicate transaction checking was "disabled" until the specified fork block.  Delegates started producing blocks that included duplicate transactions and thus were rejected by the non upgraded clients.   

So the testing we needed was the "DevShares" test net, because the bug was in the "handoff / upgrade" code and didn't reveal itself until after we had a chain with multiple delegates running the new code.

In retrospect the issue could have been easily avoided with a little thinking, but we were rushing to get a security fix in.

We are reviewing our options, but the conclusion is clear.   DevShares needs to come out as soon as possible because it is the most likely way we would have caught this bug.




Offline emski

The questions in OP are not answered.
Consider this a bump.

Offline vikram

« Last Edit: December 12, 2014, 06:44:17 pm by vikram »

Offline arhag

At least the VOTE presentation went well...

yeh, but these kinds of mistakes can reallllly shake confidence in the whole system.

So can stolen funds...

But yeah, upgrade procedures/protocols should improve and I am sure will be more refined over time as everyone learns from these sorts of experiences.

Offline Empirical1.1

At least the VOTE presentation went well...

yeh, but these kinds of mistakes can reallllly shake confidence in the whole system.

 +5% Absolutely, I was being sarcastic.

Offline fuzzy

At least the VOTE presentation went well...

yeh, but these kinds of mistakes can reallllly shake confidence in the whole system. 
WhaleShares==DKP; BitShares is our Community! 
ShareBits and WhaleShares = Love :D

Offline Empirical1.1

At least the VOTE presentation went well...

Offline emski

We have seen 0.4.24 and 0.4.25 fork before the predefined block.
Am I correct that this could have been identified before release with (simple) testing?
What procedures were followed prior to the 0.4.25 release?

Main questions:
Why did 0.4.24 and 0.4.25 fork before block 1249400?
Wasn't this relatively easy to prevent?