Topic: Randomness of Witness Scheduling


bytemaster

We have gone to great lengths to ensure the following properties in the scheduling algorithm (assuming 101 witnesses):

1. No witness may produce a block less than 50 blocks from their last block
2. All witnesses have an opportunity to produce a block before any other witness can go again
3. Witness scheduling is based upon randomness introduced by the witnesses themselves

The result of these constraints is a witness scheduling algorithm that limits reindexing performance to 5000 empty blocks per second and would add roughly 1.75 hours of computation for every year of block history, just from the scheduling time alone (at 1-second blocks, a year is about 31.5 million blocks, and 31.5 million / 5000 per second is about 6300 seconds, or 1.75 hours).   This is almost as bad as BitShares today!

At the end of the day, the reason for randomizing witnesses is to prevent any one witness from "picking" on the witness that goes before them. 

I think we can relax the requirements quite a bit.

Given a set of active witnesses,  the witness that goes next can be calculated as:

  ACTIVE_WITNESSES[ HASH(BLOCK_TIME) % NUM_ACTIVE_WITNESSES ]

This approach means that witnesses cannot influence the schedule at all and that the order of witnesses will be deterministic yet random.
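
Here is a minimal C++ sketch of that selection rule (the names witness_id, hash64, and active_witnesses are placeholders for illustration, not the actual Graphene types or API):

  #include <cstdint>
  #include <functional>
  #include <vector>

  using witness_id = uint32_t;  // placeholder for the real witness id type

  // Stand-in for HASH(BLOCK_TIME); any deterministic hash of the slot time
  // works for illustration.
  uint64_t hash64( uint64_t block_time )
  {
     return std::hash<uint64_t>{}( block_time );
  }

  // ACTIVE_WITNESSES[ HASH(BLOCK_TIME) % NUM_ACTIVE_WITNESSES ]
  witness_id next_witness( const std::vector<witness_id>& active_witnesses,
                           uint64_t block_time )
  {
     return active_witnesses[ hash64( block_time ) % active_witnesses.size() ];
  }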

Under this approach some witnesses may get to go 2, 3, or even 4 times in a row, but they will never have the ability to take over the random number sequence as they could when the order is set by witness-provided randomness.

Imagine for a second that our goal is "1 second blocks" so that we have "instant" confirmation.  What if we simply allowed every witness to produce 10 blocks in a row and then rotated?   This would give us instant confirmations without reducing security below what BitShares offers today, and it has the benefit of zero latency between blocks produced in "groups" of 10.    We could easily implement grouping for performance as

  ACTIVE_WITNESSES[ HASH(BLOCK_TIME/10) % NUM_ACTIVE_WITNESSES ]
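
Continuing the sketch above, grouping only changes the hash input: every group_size consecutive slots map to the same witness (again, the names and the group_size parameter are illustrative):

  // ACTIVE_WITNESSES[ HASH(BLOCK_TIME/10) % NUM_ACTIVE_WITNESSES ]
  witness_id next_witness_grouped( const std::vector<witness_id>& active_witnesses,
                                   uint64_t block_time,
                                   uint64_t group_size = 10 )
  {
     // Integer division collapses group_size consecutive slots to one hash
     // input, so the same witness is scheduled for the whole group.
     return active_witnesses[ hash64( block_time / group_size )
                              % active_witnesses.size() ];
  }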

Under intentional grouping, a witness could execute a double-spend *IF* they are the attacker *AND* they isolate the victim **AND** the victim doesn't wait for 10 blocks **AND** the witness doesn't care about getting caught, because they would certainly get caught.

If we are ok with INTENTIONAL grouping, then unintentional random grouping isn't really a concern either.   

A side effect of this is that we can remove the need for witnesses to provide random number generation, which will reduce the size of empty blocks and further accelerate the application of empty blocks.

Under the new algorithm we can have fast block times and improve reindexing performance by a factor of 1000 or more.


Thoughts?
For the latest updates check out my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.