I have to look more deeply to learn more about RNGs, but couldn't you use blockchain transaction events & data for entropy?
It's a nice algorithm, but it needs analysis to prove it is random.
I would like to revise my statements:
Ben and I have had further discussions and have managed to achieve the following:
1. Removing random number generation from witnesses, which will reduce block header size by 46% and save an "empty" blockchain 1.2 GB per year.
2. Keeping rounds and preventing the same witness from producing more than once per round.
3. Dramatically improving performance.
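A quick back-of-envelope check of the 1.2 GB/year figure (the 1-second block interval here is an assumption for illustration, not something stated above):

```python
# Sanity-check the stated ~1.2 GB/year saving from dropping the
# per-block random number data, assuming (hypothetically) 1-second blocks.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000 blocks/year at 1s blocks
SAVING_PER_YEAR = 1.2e9                 # bytes, the figure quoted above

bytes_saved_per_block = SAVING_PER_YEAR / SECONDS_PER_YEAR
print(round(bytes_saved_per_block))     # ~38 bytes/block
```

Roughly 38 bytes per block is consistent with removing about one 32-byte hash plus overhead from each header, under that assumed block interval.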
A separate issue, whether or not to have a witness produce N blocks in a row before rotating to the next witness, is something to be considered separately. In theory, allowing each witness to produce 2 blocks in a row would allow sub-second confirmation times without impacting security, so long as we enforce the simple rule that a witness cannot miss their own block. The benefit of sub-second confirmation times is that there would be fewer transactions "in-flight" at any given point in time.
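The scheduling rule described above can be sketched as a simple slot-to-witness mapping (names and the `blocks_per_turn` default are illustrative, not from the actual implementation):

```python
# Sketch of consecutive-slot scheduling: each witness produces
# blocks_per_turn blocks in a row before rotating to the next witness.
def witness_for_slot(slot: int, witnesses: list, blocks_per_turn: int = 2):
    """Map an absolute slot number to the witness scheduled to produce it."""
    turn = slot // blocks_per_turn        # which turn of the rotation we are in
    return witnesses[turn % len(witnesses)]

w = ["alice", "bob", "carol"]
print([witness_for_slot(s, w) for s in range(6)])
# ['alice', 'alice', 'bob', 'bob', 'carol', 'carol']
```

With `blocks_per_turn = 2` and 1-second slots, each witness gets a contiguous 2-second window, which is what allows the sub-second gap between its own consecutive blocks.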
You're asking us to improve performance at the cost of security. If the security benefit is negligible, meaning it has never been exploited in test nets or has a low probability of being successfully exploited in practice, then go with performance, but leave the code in so that it can be switched back out of performance mode through a parameter change such as a vote.
It may also be that some people have plenty of performance to spare and would prefer security. If there is a measurable difference in security based on practical rather than theoretical attack data, then I would say allow individuals to choose for themselves, through a parameter switch of some sort to activate or deactivate it.
If it isn't practical, and if witnesses are indeed not at a high risk of being targeted, then perhaps this is mostly theoretical, and it doesn't make sense to sacrifice performance for something theoretical.
Imagine for a second that our goal is "1 second blocks" so we have "instant" confirmation. What if we simply allowed every witness to produce 10 blocks in a row and then rotate? This would give us instant confirmations without reducing security from what BitShares has today, and it has the benefit of zero latency between blocks produced in "groups" of 10.
I don't like this idea from a security POV. Right now, if I wait for 10 blocks, I have 10% of the network confirming my transaction; in this new scheme, I have 1%.
I agree. I think that 1 second blocks with the same witness doing 10 in a row is actually just the same to me as 10 second block time.
There are three factors here:
1. Every block produced is irreversible, so you *KNOW* your transaction has been included and is unlikely to be reversed, except potentially the last block produced by the witness if the next witness didn't see it in time.
2. You still have to wait for 10*NUM_WITNESSES blocks to know with absolute certainty.
3. For markets, the witness commits to the order of transactions sooner rather than delaying.
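The difference between the two schemes in factor 2, and in the earlier 10%-vs-1% objection, can be made concrete (the witness count of 100 is an assumed example value):

```python
# How many DISTINCT witnesses have confirmed a transaction after
# waiting `blocks` blocks, when each witness produces blocks_per_turn
# blocks in a row. NUM_WITNESSES = 100 is an illustrative assumption.
NUM_WITNESSES = 100

def distinct_confirmers(blocks: int, blocks_per_turn: int) -> int:
    # ceiling division: each group of blocks_per_turn blocks adds one witness
    return min(-(-blocks // blocks_per_turn), NUM_WITNESSES)

print(distinct_confirmers(10, 1))   # one block per witness: 10 distinct witnesses
print(distinct_confirmers(10, 10))  # ten blocks per witness: only 1 witness
print(10 * NUM_WITNESSES)           # blocks until every witness has confirmed: 1000
```

So after 10 blocks the consecutive-production scheme gives far fewer independent confirmations, but each of those blocks still commits to a transaction ordering much earlier.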
So there is a significant difference in the normal situation of honest witnesses.
Good point about the market. The witness committing to the order of trades during the 10 seconds means that you can see your trade has happened within 1 second instead of 10.
What happens if the next witness misbehaves and ignores the 10 blocks produced by this witness?
The next witness would be ignored unless the next 2 collude to ignore those 10 blocks.
Viewed from the perspective of 100,000 transactions per second, or 10,000 transactions every 0.1 seconds, having witnesses produce blocks every 0.1 seconds would dramatically improve the confirmation time, down to the round-trip time from the user to the witness and back, which could easily be 0.5 seconds. It would reduce the "active working set" and remove a lot of uncertainty for high-frequency traders.
When you get into sub-second performance, the speed of light makes it clearly impossible to avoid consecutive block production. All we would be doing is applying the same principle to achieve 1-second blocks with higher reliability in a young network.
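The "active working set" point above is simple arithmetic: at a steady throughput, the number of in-flight transactions scales with confirmation latency (100,000 tx/s is the figure used in this discussion; the latency values are illustrative):

```python
# In-flight transactions = throughput * time-to-first-confirmation.
TPS = 100_000  # transactions per second, the figure quoted above

for confirm_time in (10.0, 1.0, 0.5):         # seconds until first confirmation
    print(confirm_time, TPS * confirm_time)   # transactions in flight
# 10.0 1000000.0
# 1.0 100000.0
# 0.5 50000.0
```

Cutting confirmation from 10 s to 0.5 s shrinks the in-flight set from a million transactions to fifty thousand.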
Makes sense.
Regarding the random number generation: is it possible to still have witnesses generate a random number, even if it isn't used for determining who generates the next block? There are a lot of useful applications that can be built around blockchain-generated random numbers.
This is what I mean by parameterization. If some witnesses want to generate the random numbers at the cost of performance, then it should be allowed, and the numbers could be useful. But the numbers have to be suitably random, so not produced by a deterministic process and not pseudo-random, or they will not have much utility.
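One common construction for witness-contributed randomness is a commit-reveal scheme. This is a generic sketch, not necessarily the exact scheme that was removed: each witness first publishes a hash of a secret, later reveals the secret, and the revealed secrets are combined.

```python
# Commit-reveal randomness sketch (generic; function names are illustrative).
import hashlib
from functools import reduce

def commit(secret: bytes) -> bytes:
    """Phase 1: a witness publishes only the hash of its secret."""
    return hashlib.sha256(secret).digest()

def combine(secrets: list) -> bytes:
    """Phase 2: XOR all revealed secrets; any single honest witness with
    an unpredictable secret makes the combined result unpredictable."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), secrets)

secrets = [b"a" * 32, b"b" * 32, b"c" * 32]       # toy secrets
commitments = [commit(s) for s in secrets]        # published first
assert all(commit(s) == c for s, c in zip(secrets, commitments))
print(combine(secrets).hex()[:8])                 # 60606060
```

Note that a naive XOR combine lets the last revealer bias the result by choosing whether to reveal at all, which is one reason such schemes cost performance: they need extra rounds or penalties for withheld reveals.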