Author Topic: Randomness of Witness Scheduling  (Read 11448 times)

0 Members and 1 Guest are viewing this topic.

Offline VoR0220

Your post is a non sequitur.  This has nothing to do with the number of witnesses. 


Sent from my iPhone using Tapatalk

I hope you were not referring to my post because I don't recall discussing the number of witnesses.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline monsterer

If R -> A and R -> S are the same length and R->A was first seen by Eric, then Eric will build R->A->E; but if S & E collude, they could cut out Alice every time by building R->S->E, causing R->A to be orphaned.

By deterministically shuffling the witnesses, Sam and Eric are rarely back-to-back, which makes their collusion sporadic. If they wanted to pick only on Alice, then having Alice->Sam->Eric or Alice->Eric->Sam would be relatively rare, which ensures that Alice gets included most of the time. And if a pattern developed of Alice missing only when followed by Sam and Eric, we would know not to blame Alice.

By making the process random, don't you accidentally make this attack much harder to detect?
My opinions do not represent those of metaexchange unless explicitly stated.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline bytemaster

They could always skip them.  By randomizing, that behavior will be more easily detected, because the bad witness will follow a different witness every time.

That's too low-level an explanation. What is the problem you aim to solve by randomising the order? :)

Agreed, I can't get my head around the issue. BM, can you explain like I'm 5? (or 15 at least :) )

When a witness produces a block, it gets to pick which block it builds on top of.   If Eric always follows Sam, Sam always follows Alice, and Alice always follows Ruth, then Sam can cut Alice out of the consensus process by always building on top of Ruth.    This does require the collusion of Sam and Eric because...

If R -> A and R -> S are the same length and R->A was first seen by Eric, then Eric will build R->A->E; but if S & E collude, they could cut out Alice every time by building R->S->E, causing R->A to be orphaned.

By deterministically shuffling the witnesses, Sam and Eric are rarely back-to-back, which makes their collusion sporadic. If they wanted to pick only on Alice, then having Alice->Sam->Eric or Alice->Eric->Sam would be relatively rare, which ensures that Alice gets included most of the time. And if a pattern developed of Alice missing only when followed by Sam and Eric, we would know not to blame Alice.
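A minimal sketch of the shuffling idea (not the actual BitShares scheduler; witness names, the 21-witness count, and the entropy source are all hypothetical): every node seeds a PRNG from shared blockchain entropy, so all nodes compute the identical order, and any fixed pair of colluders ends up adjacent in only about 2/N of rounds.

```python
# Deterministic per-round witness shuffle from shared entropy.
# Hypothetical sketch: real DPOS uses its own RNG and scheduling rules.
import hashlib
import random

def round_schedule(witnesses, entropy):
    # Seed from entropy every node agrees on (e.g. the last round's hash),
    # so every node derives the same pseudo-random order independently.
    seed = hashlib.sha256(entropy).digest()
    rng = random.Random(seed)
    order = list(witnesses)
    rng.shuffle(order)  # Fisher-Yates under the hood
    return order

witnesses = [f"W{i}" for i in range(21)]  # hypothetical 21 active witnesses
adjacent = 0
rounds = 2000
for r in range(rounds):
    order = round_schedule(witnesses, f"round-{r}".encode())
    i, j = order.index("W0"), order.index("W1")
    if abs(i - j) == 1:
        adjacent += 1
# A fixed pair is adjacent in roughly 2/21 of rounds (about 9.5%),
# so two colluders are rarely positioned to orphan their predecessor.
```

Because the order is a function of shared entropy only, a colluding pair cannot steer the schedule; they can only wait for the rare rounds in which they happen to be adjacent.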

For the latest updates checkout my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline lakerta06

They could always skip them.  By randomizing, that behavior will be more easily detected, because the bad witness will follow a different witness every time.

That's too low-level an explanation. What is the problem you aim to solve by randomising the order? :)

Agreed, I can't get my head around the issue. BM, can you explain like I'm 5? (or 15 at least :) )

Offline monsterer

They could always skip them.  By randomizing, that behavior will be more easily detected, because the bad witness will follow a different witness every time.

That's too low-level an explanation. What is the problem you aim to solve by randomising the order? :)
My opinions do not represent those of metaexchange unless explicitly stated.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline bytemaster

At the end of the day, the reason for randomizing witnesses is to prevent any one witness from "picking" on the witness that goes before them. 

Remind me what this actually means?

They could always skip them.  By randomizing, that behavior will be more easily detected, because the bad witness will follow a different witness every time.
For the latest updates checkout my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline BTSdac

  • Hero Member
  • *****
  • Posts: 1219
    • View Profile
  • BitShares: K1
         I have an idea for achieving 0-delay confirmation; I hope I can describe it clearly.

1. A dishonest witness would lose more than he gains.
           Witness income is paid with a delay, and the locked payment is the bond behind that witness's trustworthiness. If there is evidence that a witness is dishonest, we can fire him and confiscate his locked payment.
2. A witness can therefore be trusted at low risk.
           Because the witness has a locked payment, he can be trusted at low risk. The following steps explain how to achieve 0-delay confirmation:
          2.1  An active witness signs a new type of operation containing tx (the transaction he receives from the network) + index (its index in the block he will produce) + the block No., using his witness private key: sign(tx + index + No.). By broadcasting sign(tx + index + No.), the active witness declares that he will include this tx in his next block. Meanwhile, other witnesses can also sign and broadcast support for this tx (a tx with a small amount needs support from the next 5 witnesses; a larger-amount tx needs support from all witnesses).
          2.2  Each node records this signature locally. If the value of the tx is not large, the sender and receiver can consider it confirmed even though it is not yet included in any block, because the witness has declared with his private key that he will include it in his next block.
          2.3  If the witness is honest, he must include this tx in his next block at the right index. If he is dishonest and does not include it, any node can broadcast a new fire operation containing sign(tx + index + No.); this is the evidence of dishonesty, used to fire the witness and confiscate his locked payment. The next witness and all other nodes would then refuse the dishonest block, because every node has recorded this evidence.
          2.4  If this witness happens to miss his block (a dishonest witness can miss a block intentionally), the next witness can include the tx in the block he produces, because he signed the support information.
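A toy sketch of the declaration idea above (all names hypothetical, and an HMAC stands in for a real public-key signature): the witness's signed pledge over (tx, index, block No.) becomes self-contained slashing evidence whenever the produced block omits the transaction at the pledged position.

```python
# Hypothetical sketch of sign(tx + index + No.) as slashing evidence.
# A real chain would use asymmetric signatures verifiable by any node.
import hashlib
import hmac

KEY = b"witness-private-key"  # stand-in for a real signing key

def pledge(tx: bytes, index: int, block_no: int) -> bytes:
    # The witness's declaration that tx will appear at `index` in block `block_no`.
    msg = tx + index.to_bytes(4, "big") + block_no.to_bytes(8, "big")
    return hmac.new(KEY, msg, hashlib.sha256).digest()

def violated(pledge_sig: bytes, tx: bytes, index: int,
             block_no: int, block_txs: list) -> bool:
    # Evidence is valid if the pledge verifies but the block breaks it.
    msg = tx + index.to_bytes(4, "big") + block_no.to_bytes(8, "big")
    ok = hmac.compare_digest(pledge_sig,
                             hmac.new(KEY, msg, hashlib.sha256).digest())
    kept = index < len(block_txs) and block_txs[index] == tx
    return ok and not kept
```

If `violated` returns True, any node holding the pledge can broadcast it as the fire operation described in step 2.3.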

         
« Last Edit: August 28, 2015, 10:37:13 am by BTSdac »
github.com :pureland
BTS2.0 API :ws://139.196.37.179:8091
BTS2.0 API 数据源ws://139.196.37.179:8091

Offline monsterer

At the end of the day, the reason for randomizing witnesses is to prevent any one witness from "picking" on the witness that goes before them. 

Remind me what this actually means?
My opinions do not represent those of metaexchange unless explicitly stated.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline arhag

  • Hero Member
  • *****
  • Posts: 1214
    • View Profile
    • My posts on Steem
  • BitShares: arhag
  • GitHub: arhag
Couldn't the witness just change other data in their block until they get an s value that they like?

Yes, you are correct. I don't know what I was thinking.

So we would still need a 20-byte commit in the previous round, however, not of some other 20-byte random data but rather of the r value instead. The r value would still be revealed along with the s value in the signature, but it would do double duty as the entropy source. This would save only 20 bytes per block rather than 40 bytes (so 0.6GB per year rather than 1.2GB per year).

Also, to clarify an unrelated point, anytime a witness was producing a block in a round in which they did not produce a block in the previous round, the blockchain would allow them the option to use an r value that is not consistent with the last commit of an r value that is associated with their witness, in order to avoid producing a signature that could reveal the signing key.

So, I guess it may not be worth it to try to preserve the provably fair RNG functionality of witnesses and leave that role to something else in the blockchain.

However, I did like the idea of witnesses being an entropy source for the blockchain, and I have another idea for how we could get that "for free" along with another functionality that I find very useful. If you take a partial Schnorr signature of the previous block digest by some subset of the active witnesses and combine them together, you get a Schnorr signature of the previous block digest signed by a key that is the sum of the public keys associated with the block signing keys of the witnesses in that subset. The point of the nonce value of the combined signature is also random if any of the witnesses in the subset used a random nonce in their partial signature.

So the block producer could collect these partial signatures from all the witnesses (they can be of the previous block, or perhaps the block before that to help with latency issues), combine them together, and include the result in the block header. The blockchain would only accept that signature in the block header (this is in addition to the block signer's regular signature of the block they are producing) if all of the active witnesses contributed to it. This signature would act as a much faster checkpoint than waiting 0.51*(number of active witnesses)*(block interval) seconds for the same confirmation security, at the cost of an extra signature per block. Of course, if a block producer is not able to get all the partial signatures in time, they can exclude the combined signature from the block header, which they are allowed to do. In that case clients simply fall back to waiting longer for confirmation.

By using the point of the nonce in the latest combined signature as part of the random number generator, we get an RNG with the same properties as the current one (as long as a single witness is honest, the generated number is random). A new random number is generated (and thus new entropy is available to the blockchain) much faster in practice: the random number generated each block is provably fair, rather than having to wait a full round to guarantee provably fair entropy as in our current system. In theory, though, it may take an indefinite amount of time to actually generate a provably fair random number, since each block producer could exclude the combined signature from the block header (for example, because a single active witness is not complying with the process).

Of course this functionality comes at a cost. Instead of reducing blockchain bloat by 1.2GB per year by getting rid of the RNG, it would add 0.8GB per year for these combined signatures (we could instead get similar functionality while reducing bloat by 0.2GB per year, rather than 1.2GB, if we only allow combined signatures on every other block, at the cost of waiting slightly longer for the faster confirmation). So it all comes down to how much the faster confirmation feature is worth to people.
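The combination step can be sketched with a toy Schnorr group (parameters here are tiny and insecure; real systems use elliptic curves such as secp256k1 and need rogue-key protections like those in MuSig, none of which is shown): summing the partial s values and multiplying the partial nonce points yields a signature that verifies against the product of the public keys, and the combined nonce point R is the entropy.

```python
# Toy Schnorr partial-signature aggregation over a small Schnorr group.
# Illustrative only: insecure parameters, no rogue-key defense.
import hashlib
import random

q = 1019           # prime order of the subgroup
p = 2 * q + 1      # safe prime, 2039
g = 4              # generator of the order-q subgroup (4 = 2^2 mod p)

def challenge(R: int, msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(str(R).encode() + msg).digest(), "big") % q

rng = random.Random(42)
msg = b"previous block digest"

# Each witness holds a key pair and contributes a partial signature.
keys = [(x, pow(g, x, p)) for x in (rng.randrange(1, q) for _ in range(5))]
nonces = [rng.randrange(1, q) for _ in keys]

R = 1
for k in nonces:
    R = (R * pow(g, k, p)) % p          # combined nonce point: the entropy
e = challenge(R, msg)
s = sum(k + e * x for k, (x, _) in zip(nonces, keys)) % q

# The aggregate verifies against the product of the public keys.
X = 1
for _, pub in keys:
    X = (X * pub) % p
assert pow(g, s, p) == (R * pow(X, e, p)) % p
```

R is unpredictable as long as any one witness used a random nonce, which is exactly the provably-fair property discussed above.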

Offline Agent86

  • Sr. Member
  • ****
  • Posts: 471
  • BTSX: agent86
    • View Profile
1. removing random number generation from witnesses, which will reduce block header size by 46% and save an "empty" blockchain 1.2GB per year.

I like the random numbers built into the blocks and would love to find a way to keep them, which I think requires that no witness gets to sign two blocks in a row.  I just think having a provably fair RNG built into the blockchain is a good thing.  I would consider it worth a couple of hours of compute time on a witness node per year.

We already have an entropy source from the witness signature. Why not simply require the nonce of the signature to be committed in the prior round, similar to how the random number is committed in the current scheme?

The witness signs the block and also submits the x-coordinate of the elliptic point of a new nonce (the r value) that will be used in the signature for their block in the next round. The signature of the block by the witness in the next round would only have the s value of the signature, not the r value which was already submitted in the previous round. That way no additional blockchain bloat is needed in each block header but the blockchain still has the same provably fair RNG we currently enjoy.
Couldn't the witness just change other data in their block until they get an s value that they like?

Offline arhag

  • Hero Member
  • *****
  • Posts: 1214
    • View Profile
    • My posts on Steem
  • BitShares: arhag
  • GitHub: arhag
1. removing random number generation from witnesses, which will reduce block header size by 46% and save an "empty" blockchain 1.2GB per year.

I like the random numbers built into the blocks and would love to find a way to keep them, which I think requires that no witness gets to sign two blocks in a row.  I just think having a provably fair RNG built into the blockchain is a good thing.  I would consider it worth a couple of hours of compute time on a witness node per year.

We already have an entropy source from the witness signature. Why not simply require the nonce of the signature to be committed in the prior round, similar to how the random number is committed in the current scheme?

The witness signs the block and also submits the x-coordinate of the elliptic point of a new nonce (the r value) that will be used in the signature for their block in the next round. The signature of the block by the witness in the next round would only have the s value of the signature, not the r value which was already submitted in the previous round. That way no additional blockchain bloat is needed in each block header but the blockchain still has the same provably fair RNG we currently enjoy.
« Last Edit: August 27, 2015, 08:47:52 pm by arhag »

Offline betax

  • Hero Member
  • *****
  • Posts: 808
    • View Profile

What if, for every A blocks, SHA256(the last A blocks' full block data) is the seed to a PRNG that is used to cycle through all witnesses in a random non-repeating order?  The last witness can influence the order of the next A blocks, but only by brute-forcing a ton of hashes (by slightly changing the transactions he's including, i.e. a transaction back to himself) to get a couple of colluding witnesses in order.  (Someone will have to run the numbers on: if I have 3 colluding witnesses, how many hashes do I have to do to have a 50% chance of finding a way to get them next to each other in the upcoming block order?)

Then you just tweak your maintenance interval to happen on these superblock intervals so that the number of active witnesses doesn't change mid-superblock. 

Maybe that's too complicated.  That's the best I could come up with.

 +5%

What about also randomly cycling the active witnesses on every superblock.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

sumantso

  • Guest
Does anyone need 1-second blocks? 10 seconds is good enough as long as it happens exactly on 10 seconds, which it does with DPoS.

I would rather they make a small blockchain which I can download and run a full client rather than being forced to rely on light clients.

Offline bytemaster

Fortunately we have already refactored that code and resolved that particular issue.

1. No witness will go within half a round of their last turn. 
2. Every time a witness misses their turn, it changes the random schedule, which means you cannot do long-range attacks if there are even a few random missed blocks per day.

For the latest updates checkout my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline merivercap

  • Hero Member
  • *****
  • Posts: 661
    • View Profile
    • BitCash
I have to look more deeply to learn more about RNGs, but couldn't you use blockchain transaction events & data for entropy?

You need some kind of commit / reveal scheme.  Otherwise an attacker would be able to control several bits by repurposing some field as a nonce and grinding it.  E.g. if Eve sends 0-255 satoshis to each of her 4 sock puppets, that's 2^32 possible blocks, and she can check which is favorable.  The attacker might need to control a single malicious witness if the block signature bytes are included directly or indirectly in the hash (only the witness would be able to check several signatures for hypothetical blocks).

We've been doing a lot of thinking about this, and the hard problem is that if the last participant to act in the protocol controls N bits of entropy, they can grind those bits for a favorable result.

In BTS 1.x this is mostly (but not entirely) mitigated by the commit/reveal process: the witnesses cryptographically commit to a hash on round K and then reveal the value that produces that hash on round K+1.  The last participant (actually each of the last N participants, where N=101 is the number of witnesses in a round) is locked into producing a block.  They still control 1 bit of entropy, however, since it is possible for them to drop out.

In any gambling game where the randomness is based entirely on blockchain entropy -- on any blockchain -- a dishonest block producer would be able to cheat fairly undetectably by making a bet, producing a block without broadcasting it, checking whether they like the result of the game, and throwing away their block if they don't.  In essence, this allows any player who controls a block producer to take the better outcome of 2 games, which would likely give the player an edge over the house (very bad for the casino's financial viability).

Interesting.  Thanks for the in-depth explanation.  It gives people who are interested a good direction on how to think about this problem so hopefully someone can contribute some ideas.    Maybe someone will have an epiphany...  :)
BitCash - http://www.bitcash.org 
Beta: bitCash Wallet / p2p Gateway: (https://m.bitcash.org)
Beta: bitCash Trade (https://trade.bitcash.org)

Offline cryptosile

  • Full Member
  • ***
  • Posts: 56
    • View Profile

Given a set of active witnesses,  the witness that goes next can be calculated as:

  ACTIVE_WITNESSES[ HASH(BLOCK_TIME) % NUM_ACTIVE_WITNESSES ]

This approach means witnesses cannot influence the schedule at all and that the order of witnesses will be deterministic yet random.   
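The quoted formula can be sketched directly (a hypothetical rendering, not production code): the next producer is fully determined by the slot time, so every node can verify it, but for the same reason anyone can predict repeats far in advance.

```python
# Sketch of ACTIVE_WITNESSES[ HASH(BLOCK_TIME) % NUM_ACTIVE_WITNESSES ].
# Deterministic and verifiable, but also predictable arbitrarily far ahead.
import hashlib

def next_witness(active_witnesses, block_time: int) -> str:
    h = hashlib.sha256(str(block_time).encode()).digest()
    return active_witnesses[int.from_bytes(h, "big") % len(active_witnesses)]
```

Since consecutive slots hash independently, the same witness can be drawn several times in a row, which is the repeat-run concern raised in the reply.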

In which case, if I have 2 or 3 bad witnesses, I can identify times in the future where I'll control 3, 4, or 5 blocks in a row (because witnesses could be repeated) and time my attack for that pre-determined window.  You can't really do sha256(xyz) % NUM_ACTIVE_WITNESSES, as that mod operation is difficult to calculate, correct?


A = NUM_ACTIVE_WITNESSES
What if, for every A blocks, SHA256(the last A blocks' full block data) is the seed to a PRNG that is used to cycle through all witnesses in a random non-repeating order?  The last witness can influence the order of the next A blocks, but only by brute-forcing a ton of hashes (by slightly changing the transactions he's including, i.e. a transaction back to himself) to get a couple of colluding witnesses in order.  (Someone will have to run the numbers on: if I have 3 colluding witnesses, how many hashes do I have to do to have a 50% chance of finding a way to get them next to each other in the upcoming block order?)

You can get maximum coverage of witness nodes, but each time you run through the witnesses it's in a unique/random order that is influenced by 100% of active nodes, so the randomness can't be controlled.  The last node has some additional power here, though.  You can make it so that the last node of each of these superblocks can't include any transactions, but can only include a random # from 1 to 10000, or something like that.  So if the last node of a superblock is a good node, he will use a real random number and screw up any badness from the previous node, assuming the second-to-last node is a bad node.  However, if the opposite is true and the last node is a bad node, he only has 10000 different combinations to pick from to try to manipulate the witness order for the next A blocks.  Additionally, if the last witness can't include any transactions, then the only naturally repeating witness occurrence would be when the last and first are the same witness, but only in one of those blocks would he be able to include transactions (because of the additional restriction on the last block).

Then you just tweak your maintenance interval to happen on these superblock intervals so that the number of active witnesses doesn't change mid-superblock. 

Maybe that's too complicated.  That's the best I could come up with.

Offline sittingduck

  • Sr. Member
  • ****
  • Posts: 246
    • View Profile
Your post is a non sequitur.  This has nothing to do with the number of witnesses. 


Sent from my iPhone using Tapatalk

Offline VoR0220

Yeah, I have to say I don't think this is a good idea. For one, I'm beginning to think there won't be enough volume (most definitely at first) to make it worth the overhead that 1-second blocks will produce. While it is quite the technical achievement, remember, this is far more than Mastercard and Visa can handle combined on any given day... that's a lot of transactions going on. By all means keep the 1-second blocks; they're allowing BTS to develop an incredible niche market for high-speed trading.

However, with that bloat comes security issues... there are far fewer people running full nodes, which I have flagged many times as a security flaw, and there should be a brainstormed way to incentivize non-witness/delegate nodes to run full chains. Even then, I can accept that because of the light wallet web querying. But this is really pushing the envelope. I know you're very big on low-latency environments (which I think is awesome), but I think this really compromises security, especially in a realm where not everyone is on to vote as it is, and the voting time periods have been extended, so there's even the chance that the attacker wouldn't be kicked out.

I truly think this is a flawed approach to the issue of latency, as it places unrealistic expectations on shareholders and on your average participant. I hope you will consider this opinion and choose not to move forward with this, and find another way to reach your low-latency goals. I should also add that, while it may not be your intention, it does make the witnesses look a tad greedy when it comes to their payment schedule, as they're sacrificing what really makes this unique... decentralization... for more pay (I can't really blame them for wanting more pay, but I also have a stake in the product, so I must make my voice heard on the issue). Thanks again.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline cube

  • Hero Member
  • *****
  • Posts: 1404
  • Bit by bit, we will get there!
    • View Profile
  • BitShares: bitcube

You're asking us to improve performance at the cost of security. If the security cost is negligible, meaning the issue has never been exploited in test nets or has a low probability of being successfully exploited in practice, then go with performance, but leave the code in so that it can be switched back out of performance mode through a parameter change, such as a vote.

I am not sure how this would come across to an ordinary user who is going to entrust his/her money to the BTS network.  The idea is that they may lose their money, though the chance could be remote, in order to get the benefit of a faster transaction, from 10 sec down to 1 sec.  Is the few-seconds improvement worth lower security?

It is doubtful they would lose their money but Bytemaster would have to answer that.

I would never propose or support something that I thought would cause users to be at higher risk of loss than they are today.

I am glad to know that. End-users, techies or otherwise, have the assurance of security from the leader of bts himself.
ID: bitcube
bitcube is a dedicated witness and committe member. Please vote for bitcube.

Offline theoretical

I have to look more deeply to learn more about RNGs, but couldn't you use blockchain transaction events & data for entropy?

You need some kind of commit / reveal scheme.  Otherwise an attacker would be able to control several bits by repurposing some field as a nonce and grinding it.  E.g. if Eve sends 0-255 satoshis to each of her 4 sock puppets, that's 2^32 possible blocks, and she can check which is favorable.  The attacker might need to control a single malicious witness if the block signature bytes are included directly or indirectly in the hash (only the witness would be able to check several signatures for hypothetical blocks).
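The grinding attack described above can be illustrated with a toy sketch (field names and sizes are hypothetical): an attacker who controls any malleable field enumerates values until the derived "random" value favors her, at an expected cost of one trial per possible value of the bits she wants to control.

```python
# Toy illustration of grinding blockchain entropy via a malleable field.
# One controlled byte costs ~256 hash trials on average; each additional
# controlled byte multiplies the work by 256 (hence 2^32 for 4 bytes).
import hashlib

def block_entropy(block_body: bytes) -> int:
    return hashlib.sha256(block_body).digest()[0]  # 1 "random" byte

base = b"block-with-eve-payments:"
# Eve tweaks a nonce-like field (e.g. payment amounts) until entropy == 0.
found = next(n for n in range(10**6)
             if block_entropy(base + n.to_bytes(4, "big")) == 0)
```

A commit/reveal scheme defeats this because the committed value is fixed before the attacker can see what the rest of the block hashes to.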

We've been doing a lot of thinking about this, and the hard problem is that if the last participant to act in the protocol controls N bits of entropy, they can grind those bits for a favorable result.

In BTS 1.x this is mostly (but not entirely) mitigated by the commit/reveal process: the witnesses cryptographically commit to a hash on round K and then reveal the value that produces that hash on round K+1.  The last participant (actually each of the last N participants, where N=101 is the number of witnesses in a round) is locked into producing a block.  They still control 1 bit of entropy, however, since it is possible for them to drop out.
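A minimal sketch of that commit/reveal pattern (hypothetical helper names, with XOR as the combiner): each witness commits H(secret) in round K and reveals the secret in round K+1; the round's random value mixes all reveals, so it stays unpredictable as long as any one contributor was honest, and a dropout can only withhold their contribution.

```python
# Toy commit/reveal entropy scheme: commit H(secret) in round K,
# reveal secret in round K+1, combine all reveals into one random value.
import hashlib
import secrets

def commit(secret: bytes) -> bytes:
    return hashlib.sha256(secret).digest()

def combine(reveals) -> bytes:
    # XOR the hashed reveals; one honest random contribution randomizes all.
    acc = bytes(32)
    for r in reveals:
        acc = bytes(a ^ b for a, b in zip(acc, hashlib.sha256(r).digest()))
    return acc

secrets_k = [secrets.token_bytes(32) for _ in range(5)]  # round K secrets
commits_k = [commit(s) for s in secrets_k]               # published round K
# Round K+1: a reveal only counts if it matches its earlier commitment.
assert all(commit(s) == c for s, c in zip(secrets_k, commits_k))
rand = combine(secrets_k)
```

The residual weakness described above survives in this sketch too: the last revealer can see everyone else's reveals and choose to drop out, which is the 1 bit of entropy they still control.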

In any gambling game where the randomness is based entirely on blockchain entropy -- on any blockchain -- a dishonest block producer would be able to cheat fairly undetectably by making a bet, producing a block without broadcasting it, checking whether they like the result of the game, and throwing away their block if they don't.  In essence, this allows any player who controls a block producer to take the better outcome of 2 games, which would likely give the player an edge over the house (very bad for the casino's financial viability).
BTS- theoretical / PTS- PZxpdC8RqWsdU3pVJeobZY7JFKVPfNpy5z / BTC- 1NfGejohzoVGffAD1CnCRgo9vApjCU2viY / the delegate formerly known as drltc / Nothing said on these forums is intended to be legally binding / All opinions are my own unless otherwise noted / Take action due to my posts at your own risk

Offline bytemaster


You're asking us to improve performance at the cost of security. If the security cost is negligible, meaning the issue has never been exploited in test nets or has a low probability of being successfully exploited in practice, then go with performance, but leave the code in so that it can be switched back out of performance mode through a parameter change, such as a vote.

I am not sure how this would come across to an ordinary user who is going to entrust his/her money to the BTS network.  The idea is that they may lose their money, though the chance could be remote, in order to get the benefit of a faster transaction, from 10 sec down to 1 sec.  Is the few-seconds improvement worth lower security?

It is doubtful they would lose their money but Bytemaster would have to answer that.

I would never propose or support something that I thought would cause users to be at higher risk of loss than they are today.
For the latest updates checkout my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline mike623317

  • Hero Member
  • *****
  • Posts: 637
    • View Profile
Security comes first. If the revised witness scheduling does not harm security, it sounds great.

 +5% ...and as long as it's tested, retested, tested again, and everyone agrees it WORKS

Offline luckybit

  • Hero Member
  • *****
  • Posts: 2921
    • View Profile
  • BitShares: Luckybit

You're asking us to improve performance at the cost of security. If the security cost is negligible, meaning the issue has never been exploited in test nets or has a low probability of being successfully exploited in practice, then go with performance, but leave the code in so that it can be switched back out of performance mode through a parameter change, such as a vote.

I am not sure how this would come across to an ordinary user who is going to entrust his/her money to the BTS network.  The idea is that they may lose their money, though the chance could be remote, in order to get the benefit of a faster transaction, from 10 sec down to 1 sec.  Is the few-seconds improvement worth lower security?

It is doubtful they would lose their money but Bytemaster would have to answer that.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline cube

  • Hero Member
  • *****
  • Posts: 1404
  • Bit by bit, we will get there!
    • View Profile
  • BitShares: bitcube

You're asking us to improve performance at the cost of security. If the security cost is negligible, meaning the issue has never been exploited in test nets or has a low probability of being successfully exploited in practice, then go with performance, but leave the code in so that it can be switched back out of performance mode through a parameter change, such as a vote.

I am not sure how this would come across to an ordinary user who is going to entrust his/her money to the BTS network.  The idea is that they may lose their money, though the chance could be remote, in order to get the benefit of a faster transaction, from 10 sec down to 1 sec.  Is the few-seconds improvement worth lower security?
« Last Edit: August 26, 2015, 03:27:13 am by cube »
ID: bitcube
bitcube is a dedicated witness and committe member. Please vote for bitcube.

Offline luckybit

  • Hero Member
  • *****
  • Posts: 2921
    • View Profile
  • BitShares: Luckybit
I have to look more deeply to learn more about RNGs, but couldn't you use blockchain transaction events & data for entropy?

It's a nice algorithm but it needs analysis to prove it is random.
I would like to revise my statements:

Ben and I have had further discussions and have managed to achieve the following:

1. removing random number generation from witnesses, which will reduce block header size by 46% and save an "empty" blockchain 1.2GB per year.
2. keep rounds and prevent the same witness from going more than once
3. dramatically improve performance.

A separate issue, whether or not to have a witness produce N blocks in a row before rotating to the next witness, is something to be considered separately.   In theory, allowing each witness to produce 2 blocks in a row would allow sub-second confirmation times without impacting security, so long as we enforce the simple rule that a witness cannot miss their own block.   The benefit of sub-second confirmation times is that there would be fewer transactions "in-flight" at any given point in time.

You're asking us to improve performance at the cost of security. If the security cost is negligible, meaning the issue has never been exploited in test nets or has a low probability of being successfully exploited in practice, then go with performance, but leave the code in so that it can be switched back out of performance mode through a parameter change, such as a vote.

It may also be possible that some people have plenty of performance to sacrifice and would prefer security, so if there is a measurable difference in security based on practical rather than theoretical attack data, then I would say allow the individual to choose for themselves through a parameter switch of some sort to activate or deactivate.

If it isn't practical, and if witnesses are indeed not at a high probability of being picked on, then perhaps this is mostly theoretical, and it doesn't make sense to sacrifice performance for something theoretical.


Regarding the random number generation: is it possible to still have witnesses generate a random number, even if it isn't used to determine who produces the next block?  There are a lot of useful applications that can be built around blockchain-generated random numbers.

This is what I mean by parameterization. If some witnesses want to generate random numbers at the cost of performance, it should be allowed, and the numbers could be useful. But the numbers have to be suitably random, i.e. not produced by a deterministic or merely pseudo-random process, or they will not have much utility.

« Last Edit: August 26, 2015, 02:11:20 am by luckybit »
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline merivercap

I have to look more deeply into RNGs, but couldn't you use blockchain transaction events and data for entropy?
BitCash - http://www.bitcash.org 
Beta: bitCash Wallet / p2p Gateway: (https://m.bitcash.org)
Beta: bitCash Trade (https://trade.bitcash.org)

Offline Ander


Viewed from the perspective of 100,000 transactions per second, or 10,000 transactions every 0.1 seconds, having witnesses produce blocks every 0.1 seconds would dramatically improve the confirmation time, bringing it down to the round-trip time from the user to the witness and back, which could easily be 0.5 seconds.   It would reduce the "active working set" and reduce a lot of uncertainty for high-frequency traders.

When you get into sub-second performance it is clearly impossible to avoid consecutive block production, due to the speed of light.  All we would be doing is applying the same principle to achieve 1 second with higher reliability in a young network.

Makes sense.

Regarding the random number generation: is it possible to still have witnesses generate a random number, even if it isn't used to determine who produces the next block?  There are a lot of useful applications that can be built around blockchain-generated random numbers.

Offline monsterer

There are a few factors here:

1. Every block produced is irreversible, so you *KNOW* your transaction has been included and is unlikely to be reversed, except potentially the last block produced by the witness if the next witness didn't see it in time.
2. You still have to wait 10*NUM_WITNESSES blocks to know with absolute certainty.
3. For markets, the witness commits to the order of transactions sooner rather than later.

So there is a significant difference in the normal situation of honest witnesses.

Blocks are not irreversible unless you can guarantee no forking, and how can you guarantee that in a system where the block producers can change at any time?

Where is the white paper on this?
My opinions do not represent those of metaexchange unless explicitly stated.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline bytemaster


What happens if the next witness plays bad and ignores the 10 blocks by this witness?

The next witness would be ignored unless the next 2 collude to ignore those 10 blocks. 

Viewed from the perspective of 100,000 transactions per second, or 10,000 transactions every 0.1 seconds, having witnesses produce blocks every 0.1 seconds would dramatically improve the confirmation time, bringing it down to the round-trip time from the user to the witness and back, which could easily be 0.5 seconds.   It would reduce the "active working set" and reduce a lot of uncertainty for high-frequency traders.

When you get into sub-second performance it is clearly impossible to avoid consecutive block production, due to the speed of light.  All we would be doing is applying the same principle to achieve 1 second with higher reliability in a young network.
For the latest updates checkout my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline Ander


There are a few factors here:

1. Every block produced is irreversible, so you *KNOW* your transaction has been included and is unlikely to be reversed, except potentially the last block produced by the witness if the next witness didn't see it in time.
2. You still have to wait 10*NUM_WITNESSES blocks to know with absolute certainty.
3. For markets, the witness commits to the order of transactions sooner rather than later.

So there is a significant difference in the normal situation of honest witnesses.

Good point about the market.  The witness committing to the order of trades during the 10 seconds means that you can see your trade has happened within 1 second instead of 10.


What happens if the next witness plays bad and ignores the 10 blocks by this witness?

Offline bytemaster

I agree.  I think that 1 second blocks with the same witness doing 10 in a row is actually just the same to me as 10 second block time.

There are a few factors here:

1. Every block produced is irreversible, so you *KNOW* your transaction has been included and is unlikely to be reversed, except potentially the last block produced by the witness if the next witness didn't see it in time.
2. You still have to wait 10*NUM_WITNESSES blocks to know with absolute certainty.
3. For markets, the witness commits to the order of transactions sooner rather than later.

So there is a significant difference in the normal situation of honest witnesses.


Offline Ander

Imagine for a second that our goal is "1 second blocks" so we have "instant" confirmation.  Imagine if we simply allowed every witness to produce 10 blocks in a row and then rotate?   This would give us instant confirmations while not reducing the security from what BitShares is today and has the benefit of having 0 latency between blocks produced in "groups" of 10.

I don't like this idea from a security POV. Right now, if I wait for 10 blocks, I have 10% of the network confirming my transaction, in this new scheme, I have 1%.

I agree.  I think that 1 second blocks with the same witness doing 10 in a row is actually just the same to me as 10 second block time.

Offline monsterer

Imagine for a second that our goal is "1 second blocks" so we have "instant" confirmation.  Imagine if we simply allowed every witness to produce 10 blocks in a row and then rotate?   This would give us instant confirmations while not reducing the security from what BitShares is today and has the benefit of having 0 latency between blocks produced in "groups" of 10.

I don't like this idea from a security POV. Right now, if I wait for 10 blocks, I have 10% of the network confirming my transaction; in this new scheme, I have 1%.

Offline clayop

Security comes first. If the revised witness schedule does not harm security, it sounds great.
Bitshares Korea - http://www.bitshares.kr
Vote for me and see Korean Bitshares community grows
delegate-clayop

Offline Ander

There are a ton of useful applications for having provably fair random number generation built into the blockchain.  I wouldn't want to see that go unless the savings were HUGE. 

Also, having a witness make 10 blocks in a row is scary.  Essentially this means you have to wait 10 blocks instead of 1 before you get a confirmation, because that 1 witness could be screwing you. 

Offline Ben Mason

I would like to revise my statements:

Ben and I have had further discussions and have managed to achieve the following:

1. Remove random number generation from witnesses, which reduces block header size by 46% and saves an "empty" blockchain 1.2 GB per year.
2. Keep rounds and prevent the same witness from going more than once per round.
3. Dramatically improve performance.

Whether or not to have a witness produce N blocks in a row before rotating to the next witness is something to be considered separately.   In theory, allowing each witness to produce 2 blocks in a row would allow sub-second confirmation times without impacting security, so long as we enforce the simple rule that a witness cannot miss their own block.   The benefit of sub-second confirmation times is that there would be fewer transactions "in-flight" at any given point in time.

If security is not impacted, the performance benefits seem compelling enough to be worth having to provide an explanation (of how the risk is mitigated) or two.

Offline merivercap

As to the question about the benefits for the "average user": it comes down to our bottleneck being single-core CPU time. If we can eliminate 0.0002 seconds from the mandatory per-block CPU time, we gain the potential for an extra 20 transactions per second.  In other words, this bottleneck was consuming enough CPU time to allow us to improve potential throughput by 2x bitcoin's transaction rate ;)

 :P

Yeah, the updated adjustments sound good.

Offline bytemaster

I am curious what the better schemes are.
I think you're right, and we can add RNG back in as a worker proposal later.  I would guess it will consume at least the same overhead when it is introduced, but perhaps we can charge fees for it.

You will love my ideas on how to do randomness for gambling ;)   But for now I care about getting BTS 2.0 right and simple.
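For what it's worth, one family of "better schemes" often discussed for on-chain randomness is commit-reveal; the thread never says which scheme bytemaster has in mind, so this sketch is purely illustrative:

```python
import hashlib
import secrets

# Commit phase: each participant publishes HASH(secret) before seeing anyone else's.
participant_secrets = [secrets.token_bytes(32) for _ in range(5)]
commitments = [hashlib.sha256(s).hexdigest() for s in participant_secrets]

# Reveal phase: secrets are published and verified against the commitments.
assert all(hashlib.sha256(s).hexdigest() == c
           for s, c in zip(participant_secrets, commitments))

# The shared random value mixes every reveal; a participant who committed
# before seeing others' commitments cannot steer the result afterwards.
shared_random = hashlib.sha256(b"".join(participant_secrets)).digest()
print(shared_random.hex())
```

The standard caveat is that the last revealer can withhold their secret after seeing everyone else's, so practical designs add deposits or fallback rules, which is one reason such schemes deserve their own proposal rather than living inside block production.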

Offline puppies

In the spirit of separation of powers, witnesses should not participate in random number generation.  There are many better schemes for random number generation than having the witnesses do it.  If the random numbers produced by witnesses were ever used for something that depended on "TRUE RANDOMNESS", such as gambling, then witnesses would no longer be neutral.  The more neutral we can make witnesses, the better off things will be from a legal point of view.  Plus the randomness comes at a high cost in terms of download/sync time: 1.2 GB per year.
true  +5%
I am curious what the better schemes are.
I think you're right, and we can add RNG back in as a worker proposal later.  I would guess it will consume at least the same overhead when it is introduced, but perhaps we can charge fees for it.

Offline bytemaster

I think he means skipping the previous witness's blocks, making it look like they are not doing their job and thus getting them fired for missing blocks.

I like the random numbers built into the blocks and would love to find a way to keep them, which I think requires that no witness gets to sign two blocks in a row.  I just think having a provably fair RNG built into the blockchain is a good thing.  I would consider it worth a couple of hours of compute time on a witness node per year.

In the spirit of separation of powers, witnesses should not participate in random number generation.  There are many better schemes for random number generation than having the witnesses do it.  If the random numbers produced by witnesses were ever used for something that depended on "TRUE RANDOMNESS", such as gambling, then witnesses would no longer be neutral.  The more neutral we can make witnesses, the better off things will be from a legal point of view.  Plus the randomness comes at a high cost in terms of download/sync time: 1.2 GB per year.

Offline bytemaster

As to the question about the benefits for the "average user": it comes down to our bottleneck being single-core CPU time. If we can eliminate 0.0002 seconds from the mandatory per-block CPU time, we gain the potential for an extra 20 transactions per second.  In other words, this bottleneck was consuming enough CPU time to allow us to improve potential throughput by 2x bitcoin's transaction rate ;)
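The arithmetic behind that 20 tx/s figure, for what it's worth; the ~10 microsecond per-transaction CPU cost is an assumption, back-solved from the 100,000 tx/s single-core figure mentioned elsewhere in the thread:

```python
# Back-of-envelope: freeing per-block CPU time converts into extra tx capacity.
saved_per_block = 0.0002        # seconds of mandatory CPU time removed per block
per_tx_cpu_cost = 1 / 100_000   # assumed seconds/tx at the 100k tx/s target
extra_tx_per_second = saved_per_block / per_tx_cpu_cost  # with 1-second blocks
print(extra_tx_per_second)      # ~20 extra transactions per second
```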

Offline puppies

I think he means skipping the previous witness's blocks, making it look like they are not doing their job and thus getting them fired for missing blocks.

I like the random numbers built into the blocks and would love to find a way to keep them, which I think requires that no witness gets to sign two blocks in a row.  I just think having a provably fair RNG built into the blockchain is a good thing.  I would consider it worth a couple of hours of compute time on a witness node per year.


Offline bytemaster

I would like to revise my statements:

Ben and I have had further discussions and have managed to achieve the following:

1. Remove random number generation from witnesses, which reduces block header size by 46% and saves an "empty" blockchain 1.2 GB per year.
2. Keep rounds and prevent the same witness from going more than once per round.
3. Dramatically improve performance.

Whether or not to have a witness produce N blocks in a row before rotating to the next witness is something to be considered separately.   In theory, allowing each witness to produce 2 blocks in a row would allow sub-second confirmation times without impacting security, so long as we enforce the simple rule that a witness cannot miss their own block.   The benefit of sub-second confirmation times is that there would be fewer transactions "in-flight" at any given point in time.


Offline merivercap

I think it's better to be conservative for the blockchain.

Not sure I like the idea of having multiple blocks produced by the same witness in a row.  What is meant by 'picking on' the witness that goes before them?  Do you mean picking a witness to collude with?  Why not just round-robin with an occasional random reshuffle of the order after X blocks, with the same restriction that no witness may produce a block less than Y blocks from their last block?

How much improvement do you get from the old vs new design for block times?
What is the primary benefit of vastly increased reindexing performance for the average user?

thx.

Offline Thom

I would like to see the results of a Monte Carlo style simulation or similar analysis of this algo to see how block production is distributed over the pool of witnesses. It would be great to see the results as a series of histogram plots with witnesses on the X axis and number of occurrences on the Y axis, varying the number of active witnesses and the number of iterations of the simulation.

Statisticians can probably make quick work of such an analysis but I prefer to visualize it like this.

I want to be convinced that the distribution is sufficiently random over a specific period of time (# of blocks) so no witness has an unfair advantage.
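The kind of check described above can be roughed out in a few lines: tally how often each witness index is scheduled under the proposed HASH(BLOCK_TIME) rule and look at the spread. SHA-256 standing in for the unspecified HASH, and the witness/slot counts, are assumptions:

```python
import hashlib
from collections import Counter

NUM_WITNESSES = 101      # assumed active-witness count
SLOTS = 100_000          # one-second block slots to simulate

def scheduled_index(block_time):
    # Same rule as the proposal: HASH(BLOCK_TIME) % NUM_ACTIVE_WITNESSES,
    # with SHA-256 standing in for the unspecified HASH.
    digest = hashlib.sha256(str(block_time).encode()).digest()
    return int.from_bytes(digest, "big") % NUM_WITNESSES

counts = Counter(scheduled_index(t) for t in range(SLOTS))
expected = SLOTS / NUM_WITNESSES
worst = max(abs(c - expected) / expected for c in counts.values())
print(f"expected per witness: {expected:.0f}, worst deviation: {worst:.1%}")
```

A histogram of `counts` over witness index is exactly the plot described; with a uniform hash the bars should differ only by sampling noise, a few percent at this sample size.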

In general your approach sounds very reasonable. You might see concerns raised by those who view per-witness random number generation as a valuable feature they don't want to disappear, so consider being very explicit about the impact of its removal and the quantitative benefit to performance.
Injustice anywhere is a threat to justice everywhere - MLK |  Verbaltech2 Witness Reports: https://bitsharestalk.org/index.php/topic,23902.0.html

Offline bytemaster

We have gone to great lengths to ensure the following properties in the scheduling algorithm (assuming 101 witnesses):

1. No witness may produce a block less than 50 blocks from their last block
2. All witnesses have an opportunity to produce a block before any other witness can go again
3. Witness scheduling is based upon randomness introduced by the witnesses themselves

The result of these constraints is a witness scheduling algorithm that limits reindexing performance to 5000 empty blocks per second and would require 1.75 hours of computation per year just to replay the scheduling alone.   This is almost as bad as BitShares today!

At the end of the day, the reason for randomizing witnesses is to prevent any one witness from "picking" on the witness that goes before them. 

I think we can relax the requirements quite a bit.

Given a set of active witnesses,  the witness that goes next can be calculated as:

  ACTIVE_WITNESSES[ HASH(BLOCK_TIME) % NUM_ACTIVE_WITNESSES ]

This approach means witnesses cannot influence the schedule at all, and the order of witnesses will be deterministic yet effectively random.

Under this approach some witnesses may get to go 2, 3, or even 4 times in a row, but they will never have the ability to take over the random number sequence like they could when the order is set by witness-provided randomness.

Imagine for a second that our goal is "1 second blocks" so we have "instant" confirmation.  What if we simply allowed every witness to produce 10 blocks in a row and then rotate?   This would give us instant confirmations without reducing security from what BitShares has today, and has the added benefit of 0 latency between blocks produced within a "group" of 10.    We could easily implement grouping for performance as

  ACTIVE_WITNESSES[ HASH(BLOCK_TIME/10) % NUM_ACTIVE_WITNESSES ]

Under intentional grouping it means that a witness could execute a double-spend *IF* they were the attacker *AND* they isolate the victim **AND** the victim doesn't wait for 10 blocks **AND** the witness doesn't care about getting caught because they would certainly get caught.

If we are ok with INTENTIONAL grouping, then unintentional random grouping isn't really a concern either.   
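The two formulas above can be sketched together; the helper names and the choice of SHA-256 for HASH are assumptions, since the post leaves the hash function unspecified:

```python
import hashlib

def slot_witness(active_witnesses, block_time, group_size=1):
    """Pick the producer for a given block time.

    group_size=1  -> ACTIVE_WITNESSES[HASH(BLOCK_TIME) % N]
    group_size=10 -> ACTIVE_WITNESSES[HASH(BLOCK_TIME / 10) % N], i.e. the
    same witness produces 10 consecutive blocks before rotating.
    """
    bucket = block_time // group_size
    digest = hashlib.sha256(str(bucket).encode()).digest()
    return active_witnesses[int.from_bytes(digest, "big") % len(active_witnesses)]

witnesses = [f"witness-{i}" for i in range(101)]
# Ungrouped: neighbouring seconds typically map to unrelated witnesses.
print([slot_witness(witnesses, t) for t in range(5)])
# Grouped by 10: block times 0..9 share one producer, 10..19 the next.
print({t: slot_witness(witnesses, t, group_size=10) for t in (0, 9, 10)})
```

Because the schedule depends only on the block time, every node computes it locally with no per-block randomness stored in headers, which is where the header-size and reindexing savings come from.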

A side effect of this is that we can remove the need for witnesses to provide random number generation, which will reduce the size of empty blocks and further accelerate the application of empty blocks.

Under the new algorithm we can have fast block times, and improve reindexing performance by a factor of 1000 or more.         


Thoughts?