Author Topic: Randomness of Witness Scheduling  (Read 11423 times)


Offline VoR0220

Your post is a non sequitur.  This has nothing to do with the number of witnesses.



I hope you were not referring to my post because I don't recall discussing the number of witnesses.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline monsterer

If R->A and R->S are the same length and R->A was first seen by Eric, then Eric will build R->A->E; but if S & E collude, they could cut out Alice every time by building R->S->E, causing R->A to be orphaned.

By deterministically shuffling the witnesses, Sam and Eric are rarely back-to-back, which makes their collusion sporadic. If they wanted to pick only on Alice, then having Alice->Sam->Eric or Alice->Eric->Sam would be relatively rare, which ensures that Alice gets included most of the time, and that if a pattern developed of Alice missing only when followed by Sam and Eric, we would know not to blame Alice.

By making the process random, don't you accidentally make this attack much harder to detect?
My opinions do not represent those of metaexchange unless explicitly stated.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline bytemaster

They could always skip them.  By randomizing, that behavior will be more easily detected, because the bad witness will follow a different witness every time.

That's too low level an explanation - what is the problem you aim to solve by randomising the order? :)

Agreed, can't get my head around the issue. BM, can you explain like I'm 5? (or 15 at least :) )

When a witness produces a block it gets to pick which block it builds on top of.   If Eric always follows Sam, Sam always follows Alice, and Alice always follows Ruth, then Sam can cut Alice out of the consensus process by always building on top of Ruth.    This does require the collusion of Sam and Eric because...

If R->A and R->S are the same length and R->A was first seen by Eric, then Eric will build R->A->E; but if S & E collude, they could cut out Alice every time by building R->S->E, causing R->A to be orphaned.

By deterministically shuffling the witnesses, Sam and Eric are rarely back-to-back, which makes their collusion sporadic. If they wanted to pick only on Alice, then having Alice->Sam->Eric or Alice->Eric->Sam would be relatively rare, which ensures that Alice gets included most of the time, and that if a pattern developed of Alice missing only when followed by Sam and Eric, we would know not to blame Alice.
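A toy simulation (my own sketch, not the actual BitShares scheduling code; witness names and the seeding scheme are illustrative) shows how rarely two specific witnesses end up back-to-back under a deterministic per-round shuffle:

```python
import hashlib
import random

def shuffle_witnesses(witnesses, round_seed):
    """Deterministically shuffle the witness list from a per-round seed."""
    seed = int.from_bytes(hashlib.sha256(round_seed).digest(), "big")
    order = witnesses[:]
    random.Random(seed).shuffle(order)  # Fisher-Yates under the hood
    return order

witnesses = [f"W{i}" for i in range(101)]  # stand-ins for Alice, Sam, Eric, ...
adjacent = 0
rounds = 10_000
for r in range(rounds):
    order = shuffle_witnesses(witnesses, str(r).encode())
    i, j = order.index("W1"), order.index("W2")  # the two colluders
    if abs(i - j) == 1:
        adjacent += 1

# With 101 witnesses, two specific ones are adjacent with probability
# 2/101, roughly 2%, so collusion windows are rare and leave a pattern.
print(f"adjacent in {adjacent} of {rounds} rounds")
```

Because the shuffle is deterministic from the seed, every node computes the same order, yet the colluding pair only lines up about 2% of the time.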

For the latest updates checkout my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline lakerta06

They could always skip them.  By randomizing, that behavior will be more easily detected, because the bad witness will follow a different witness every time.

That's too low level an explanation - what is the problem you aim to solve by randomising the order? :)

Agreed, can't get my head around the issue. BM, can you explain like I'm 5? (or 15 at least :) )

Offline monsterer

They could always skip them.  By randomizing, that behavior will be more easily detected, because the bad witness will follow a different witness every time.

That's too low level an explanation - what is the problem you aim to solve by randomising the order? :)

Offline bytemaster

At the end of the day, the reason for randomizing witnesses is to prevent any one witness from "picking" on the witness that goes before them. 

Remind me what this actually means?

They could always skip them.  By randomizing, that behavior will be more easily detected, because the bad witness will follow a different witness every time.

Offline BTSdac

I have an idea for achieving zero-delay confirmation; I hope I can describe it clearly.

1. A dishonest witness would lose more than he gains.
Witness pay is delayed, and the locked payment backs the trustworthiness of the witness. If there is evidence that a witness is dishonest, he can be fired and his locked payment confiscated.
2. Witnesses can be trusted quickly.
Because a witness has a locked payment, he can be trusted with little delay. The following steps explain how to achieve zero-delay confirmation:
          2.1 The active witness signs a new type of operation containing the tx (as received from the network) + index (its index in the block he will produce) + the block number, using his witness private key: sign(tx + index + No.). By broadcasting sign(tx + index + No.), the active witness declares that he will include this tx in his next block. Meanwhile, other witnesses can also sign and broadcast support information for this tx (a small-amount tx needs support from the next 5 witnesses; a larger-amount tx needs support from all witnesses).
          2.2 Each node records this signature locally. If the value of the tx is not large, the sender and receiver can consider it confirmed even though it is not yet included in any block, because the witness has declared with his private key that he will include it in the next block.
          2.3 If the witness is honest, he must include this tx in his next block at the right index. If he is dishonest and does not include it, any node can broadcast a new fire operation containing sign(tx + index + No.) as evidence of the dishonesty, to fire the witness and confiscate his locked payment. The next witness and the others would then refuse the dishonest block, because every node has recorded this evidence.
          2.4 If the witness happens to miss his block (a dishonest witness can miss a block intentionally), the next witness can include the tx in the block he produces, because he signed the support information.
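A minimal sketch of the inclusion-promise idea above (my own illustration; HMAC stands in for the witness's real signature, and all names and key material are hypothetical):

```python
import hmac
import hashlib

WITNESS_KEY = b"witness-private-key"  # hypothetical signing key

def promise(tx: bytes, index: int, block_no: int) -> bytes:
    """Witness declares: 'I will include tx at this index in block block_no'."""
    msg = tx + index.to_bytes(4, "big") + block_no.to_bytes(8, "big")
    return hmac.new(WITNESS_KEY, msg, hashlib.sha256).digest()

def check_block(block_txs, tx, index, block_no, sig) -> str:
    """Nodes holding the promise check the block the witness produced."""
    expected = promise(tx, index, block_no)
    if not hmac.compare_digest(sig, expected):
        return "invalid promise"
    if index < len(block_txs) and block_txs[index] == tx:
        return "honest"  # tx included at the promised index
    return "fire witness"  # the promise itself is the evidence; confiscate pay

sig = promise(b"tx-abc", 0, 1000)
print(check_block([b"tx-abc"], b"tx-abc", 0, 1000, sig))  # honest
print(check_block([b"tx-xyz"], b"tx-abc", 0, 1000, sig))  # fire witness
```

The point is that the broadcast promise is self-incriminating: any node can later present it against a block that omits the tx.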

         
« Last Edit: August 28, 2015, 10:37:13 am by BTSdac »
github.com :pureland
BTS2.0 API :ws://139.196.37.179:8091

Offline monsterer

At the end of the day, the reason for randomizing witnesses is to prevent any one witness from "picking" on the witness that goes before them. 

Remind me what this actually means?

Offline arhag

Couldn't the witness just change other data in their block until they get an s value that they like?

Yes, you are correct. I don't know what I was thinking.

So we would still need a 20-byte commit in the previous round, however, not of some other 20-byte random data but rather of the r value instead. The r value would still be revealed along with the s value in the signature, but it would do double duty as the entropy source. This would save only 20 bytes per block rather than 40 bytes (so 0.6GB per year rather than 1.2GB per year).

Also, to clarify an unrelated point, anytime a witness was producing a block in a round in which they did not produce a block in the previous round, the blockchain would allow them the option to use an r value that is not consistent with the last commit of an r value that is associated with their witness, in order to avoid producing a signature that could reveal the signing key.

So, I guess it may not be worth it to try to preserve the provably fair RNG functionality of witnesses and leave that role to something else in the blockchain.

However, I did like the idea of witnesses being an entropy source for the blockchain, and I do have another idea for how we could get that "for free" with another functionality that I find very useful. If you take a partial Schnorr signature of the previous block digest by some subset of the active witnesses and combine them together, you get a Schnorr signature of the previous block digest signed by a key that is the sum of public keys associated with the block signing keys of the witnesses in that subset. The point of the nonce value of the combined signature is also random if any of the witnesses in the subset used a random nonce in their partial signature.

So the block producer could collect these partial signatures from all the witnesses (they can be of the previous block or perhaps the block before the previous block to help with latency issues), combine them together, and include it in the block header. The blockchain would only accept that signature in the block header (by the way, this is in addition to the block signer's regular signature of the block they are producing) if all of the active witnesses contributed to the signature.

This signature would act as a much faster checkpoint signature than waiting for 0.51*(number of active witnesses)*(block interval) seconds for the same confirmation security, at the cost of an extra signature per block. Of course, if a block producer is not able to get all the partial signatures in time they would exclude the combined signature from the block header which they are allowed to do. In that case clients simply have to fallback to waiting longer until they get the confirmation.

By using the latest point of the nonce in the combined signature as part of the random number generator, we get a RNG that has the same properties as the current RNG (in that as long as a single witness is honest, the generated number is random).

However, a new random number is generated (and thus new entropy is available to the blockchain) much faster in practice (the random number generated each block is provably fair, rather than having to wait a full round to guarantee that you get new entropy that is provably fair in our current system), but in theory it may take an indefinite amount of time to actually generate a provably fair random number since each block producer could exclude the combined signature from the block header (for example because there is a single active witness who is not complying with the process).

Of course this functionality comes at a cost. Instead of reducing the blockchain bloat by 1.2GB per year by getting rid of the RNG, it would add 0.8GB per year for these combined signatures (we could instead get similar functionality while reducing blockchain bloat by 0.2GB per year instead of 1.2GB per year if we only allow combined signatures on every other block at the cost of slightly longer waiting to get the faster confirmation). So, it all comes down to how much the faster confirmation feature is worth it to people.
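The aggregation property described above can be illustrated with a toy Schnorr scheme over a tiny multiplicative group (my own sketch; the parameters are illustrative only, and a real implementation would use an elliptic curve and a MuSig-style nonce exchange):

```python
import hashlib
import secrets

# Toy Schnorr group: g generates a subgroup of prime order q mod p.
p, q, g = 23, 11, 2  # 2^11 mod 23 == 1

def H(*parts) -> int:
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

# Each witness has a key pair (x_i, X_i = g^x_i mod p).
keys = [(x := secrets.randbelow(q - 1) + 1, pow(g, x, p)) for _ in range(5)]

msg = "digest-of-previous-block"

# Round 1: each witness picks a nonce k_i and shares R_i = g^k_i.
nonces = [secrets.randbelow(q - 1) + 1 for _ in keys]
R = 1
for k in nonces:
    R = R * pow(g, k, p) % p          # combined nonce point

# Round 2: common challenge, partial signatures s_i = k_i + e*x_i mod q.
e = H(R, msg)
s = sum((k + e * x) % q for k, (x, _) in zip(nonces, keys)) % q

# Verify against the product (the "sum") of the public keys:
# g^s == R * X^e, since g^(sum k_i + e*sum x_i) = (prod g^k_i) * (g^sum x_i)^e.
X = 1
for _, Xi in keys:
    X = X * Xi % p
assert pow(g, s, p) == R * pow(X, e, p) % p
# R is random if ANY single witness chose its nonce randomly, which is
# what makes the combined nonce point usable as blockchain entropy.
print("combined signature verifies; R =", R)
```

The verification equation makes the key claim concrete: one combined signature convinces you that every witness in the subset signed, and its nonce point inherits randomness from any single honest participant.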

Offline Agent86

1. removing random number generation from witnesses, which will reduce block header size by 46% and save an "empty" blockchain 1.2GB per year.

I like the random numbers built into the blocks and would love to find a way to keep them, which I think requires that no witness gets to sign two blocks in a row.  I just think having a provably fair RNG built into the blockchain is a good thing.  I would consider it worth a couple of hours of compute time on a witness node per year.

We already have an entropy source from the witness signature. Why not simply require the nonce of the signature to be committed in the prior round, similar to how the random number is committed in the current scheme?

The witness signs the block and also submits the x-coordinate of the elliptic point of a new nonce (the r value) that will be used in the signature for their block in the next round. The signature of the block by the witness in the next round would only have the s value of the signature, not the r value which was already submitted in the previous round. That way no additional blockchain bloat is needed in each block header but the blockchain still has the same provably fair RNG we currently enjoy.
Couldn't the witness just change other data in their block until they get an s value that they like?

Offline arhag

1. removing random number generation from witnesses, which will reduce block header size by 46% and save an "empty" blockchain 1.2GB per year.

I like the random numbers built into the blocks and would love to find a way to keep them, which I think requires that no witness gets to sign two blocks in a row.  I just think having a provably fair RNG built into the blockchain is a good thing.  I would consider it worth a couple of hours of compute time on a witness node per year.

We already have an entropy source from the witness signature. Why not simply require the nonce of the signature to be committed in the prior round, similar to how the random number is committed in the current scheme?

The witness signs the block and also submits the x-coordinate of the elliptic point of a new nonce (the r value) that will be used in the signature for their block in the next round. The signature of the block by the witness in the next round would only have the s value of the signature, not the r value which was already submitted in the previous round. That way no additional blockchain bloat is needed in each block header but the blockchain still has the same provably fair RNG we currently enjoy.
« Last Edit: August 27, 2015, 08:47:52 pm by arhag »

Offline betax


What if for every A blocks the SHA256 of the last A blocks' full block data is the seed to a PRNG that is used to cycle through all witnesses in a random non-repeating order?  The last witness can influence the order of the next A blocks, but only by brute forcing a ton of hashes (by slightly changing the transactions he's including, i.e. a transaction back to himself) to get a couple of colluding witnesses in order.  (Someone will have to run the numbers: if I have 3 colluding witnesses, how many hashes do I have to do to have a 50% chance of finding a way to get them next to each other in the upcoming block order?)

Then you just tweak your maintenance interval to happen on these superblock intervals so that the number of Active witness doesn't change mid superblock. 

Maybe that's too complicated.  That's the best I could come up with.

 +5%

What about also randomly cycling the active witnesses on every superblock?
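For the question in the quoted proposal (how many hashes to get 3 colluding witnesses adjacent with 50% probability), here's a back-of-the-envelope calculation, assuming 101 active witnesses and a uniformly random order per superblock:

```python
import math

n, k = 101, 3  # 101 active witnesses, 3 colluders

# Probability that k specific witnesses land in one consecutive run in a
# uniformly random permutation of n: treat the run as a single item, so
# there are k! * (n - k + 1)! favorable orders out of n! total.
p_adjacent = math.factorial(k) * math.factorial(n - k + 1) / math.factorial(n)

# Independent tries: hashes needed for a 50% chance of at least one hit.
hashes_for_half = math.log(0.5) / math.log(1 - p_adjacent)

print(f"P(adjacent) = {p_adjacent:.6f}")                 # 6/10100, ~0.000594
print(f"hashes for 50% chance ~ {hashes_for_half:.0f}")  # on the order of a thousand
```

So grinding a few hundred to a thousand-odd hashes per superblock would suffice for 3 colluders, which suggests the superblock interval A and the cost per re-hash matter a lot to this scheme's safety.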

sumantso

Does anyone need 1 second blocks? 10 seconds are good enough as long as it happens exactly on 10 seconds, which it does with DPoS.

I would rather they make a small blockchain which I can download and run a full client rather than being forced to rely on light clients.

Offline bytemaster

Fortunately we have already refactored that code and resolved that particular issue.

1. No witness will go within half a round of their last turn.
2. Every time a witness misses their turn it changes the random schedule, which means you cannot do long-range attacks if there are even a few random missed blocks per day.


Offline merivercap

I have to look more deeply to learn more about RNGs, but couldn't you use blockchain transaction events & data for entropy?

You need some kind of commit / reveal scheme.  Otherwise an attacker would be able to control several bits by repurposing some field as a nonce and grinding it.  E.g. if Eve sends 0-255 satoshis to each of her 4 sock puppets, that's 2^32 possible blocks, and she can check which is favorable.  The attacker might need to control a single malicious witness if the block signature bytes are included directly or indirectly in the hash (only the witness would be able to check several signatures for hypothetical blocks).

We've been doing a lot of thinking about this, and the hard problem is that if the last participant to act in the protocol controls N bits of entropy, they can grind those bits for a favorable result.

In BTS 1.x this is mostly (but not entirely) mitigated by the commit/reveal process: the witnesses cryptographically commit to a hash on round K and then reveal the value that produces that hash on round K+1.  The last participant (actually the last N participants' actions, where N=101 is the number of witnesses in a round) is locked into producing a block.  They still control 1 bit of entropy, however, since it is possible for them to drop out.

In any gambling game where the randomness is based entirely on blockchain entropy -- on any blockchain -- a dishonest block producer would be able to cheat fairly undetectably by making a bet, producing a block without broadcasting it, checking if they like the result of the game, then throwing away their block if they don't.  In essence allowing any player who controls a block producer to take the better outcome of 2 games which would likely give the player an edge over the house (very bad for the casino's financial viability).
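The commit/reveal process described above can be sketched roughly as follows (my own simplification of the BTS 1.x idea, not its actual code):

```python
import hashlib
import secrets
from functools import reduce

def commit(secret: bytes) -> bytes:
    """Round K: each witness publishes only the hash of its secret."""
    return hashlib.sha256(secret).digest()

def combined_entropy(reveals) -> bytes:
    """Round K+1: XOR all revealed secrets; random if ANY one was random."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), reveals)

secrets_list = [secrets.token_bytes(32) for _ in range(101)]  # one per witness
commitments = [commit(s) for s in secrets_list]               # locked in round K

# Round K+1: each reveal is checked against the earlier commitment, so a
# witness cannot change its value after seeing the others. Its only power
# is the 1 bit of dropping out (not producing a block) entirely.
assert all(commit(s) == c for s, c in zip(secrets_list, commitments))
print(combined_entropy(secrets_list).hex())
```

This makes the residual weakness visible: every value is pinned a round in advance, so the only grinding left is each witness's binary choice to reveal or drop out.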

Interesting.  Thanks for the in-depth explanation.  It gives people who are interested a good direction on how to think about this problem so hopefully someone can contribute some ideas.    Maybe someone will have an epiphany...  :)
BitCash - http://www.bitcash.org 
Beta: bitCash Wallet / p2p Gateway: (https://m.bitcash.org)
Beta: bitCash Trade (https://trade.bitcash.org)