There are two aspects to scalability:
1. How fast can a node actually process transactions / blocks
2. How well can a P2P protocol keep up with the flooding
These are two separate and independent issues. If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part). We can have a single node be ALL witnesses and then flood the network with as many transactions as possible. It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing.
We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.
Thanks for the update, BM.
I think there's no rush to prove scalability while you devs are working hard on an even better network protocol for graphene.
Even more so considering the flexibility of being able to release 2.0 in the meantime with fallback or alternative network configurations, as Dan explained here. If flooding 101 witnesses on a single AWS node helps find CPU or other non-network limitations, let's do it; something along the lines of the sketch below is what I have in mind. But for a test meant to serve as a public proof of scalability (and FUD killer), I think we should isolate only truly external factors, and that may fit better at the stage when the new communication protocol is almost ready.
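For what it's worth, the flood itself could be nothing fancy: point a script at the single node producing for all 101 witnesses and hammer its wallet RPC with tiny transfers, then read off what the chain actually sustains. A minimal sketch, assuming the 2.0 cli_wallet exposes an HTTP JSON-RPC endpoint and a transfer(from, to, amount, symbol, memo, broadcast) call; the port, account names, and exact signature here are my placeholders, not the real config:

# Rough flood-test sketch -- RPC_URL, the account names, and the transfer()
# signature are assumptions for illustration, not the actual 2.0 setup.
import json, time
import requests

RPC_URL = "http://127.0.0.1:8093/rpc"   # assumed cli_wallet HTTP-RPC endpoint

def rpc(method, params):
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1}
    return requests.post(RPC_URL, data=json.dumps(payload)).json()

def flood(sender, receiver, count):
    # Broadcast `count` tiny transfers as fast as the wallet will take them.
    start = time.time()
    for i in range(count):
        rpc("transfer", [sender, receiver, "0.00001", "CORE", "flood %d" % i, True])
    elapsed = time.time() - start
    print("%d tx in %.1fs -> ~%.0f tps pushed at the RPC layer" % (count, elapsed, count / elapsed))

flood("test-a", "test-b", 10000)

The interesting number wouldn't be what the script pushes but what the single all-witness node actually includes per block; that is exactly the CPU ceiling BM describes, with the P2P layer taken out of the picture.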
As someone who remembers both Novembers (just lurking and learning here until I felt I had something to contribute beyond mining and funding)... as an early adopter I see the aptitude and integrity this project stands on, clear as water.
From that perspective I wouldn't mind seeing BitShares 2.0 relying on a temporary non-distributed mode on the way to an improved network.
But as a long-time stakeholder who has watched many other projects take advantage of the lessons learned from BitShares' long experience and transparency just to try to put it down, I wouldn't want to hand out an opening for more "centralization" FUD if it's worth the effort to avoid that from now on.
Back to the point: I wonder how far the old P2P protocol is from being able to handle, say, ~1000 tps on 5-second blocks. I also wonder whether getting there would mean diverting too many resources from development of the new protocol.
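Just to put that ~1000 tps figure in perspective, a back-of-envelope estimate; the ~100 bytes per transaction and 8 peer connections are my own guesses, not measured graphene numbers:

# Back-of-envelope only; tx size and peer count are assumptions.
TPS = 1000
BLOCK_INTERVAL = 5          # seconds per block
TX_BYTES = 100              # assumed average size of a signed transfer
PEERS = 8                   # assumed connections per node

tx_per_block = TPS * BLOCK_INTERVAL                 # 5,000 tx per block
block_kb = tx_per_block * TX_BYTES / 1024.0         # ~488 KB per block
raw_mbps = TPS * TX_BYTES * 8 / 1e6                 # ~0.8 Mbit/s of raw tx data
naive_flood_mbps = raw_mbps * PEERS                 # ~6.4 Mbit/s if re-sent to every peer

print(tx_per_block, round(block_kb), round(raw_mbps, 1), round(naive_flood_mbps, 1))

So the raw data rate itself looks modest; the real question is how much overhead the old flooding protocol adds on top (duplicate broadcasts, inventory chatter, latency under load), and that is something only a real multi-node test can answer.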
I think it would be easier for most people to understand (and harder to criticize) if the first release stayed fully distributed, at the expense of keeping TPS low enough to remain stable over the old network protocol, as long as that still meets the initial requirements. It could even set a new TPS record in a fully distributed way (and keep away Ripple comparisons, to name just one possible attack).
Then the upcoming revamped communication protocol and the jump in TPS would be an extra incentive, another BIG announcement made while already standing on a stable platform; an announcement everyone could verify and even help design or code, if you like.
I don't know if we could achieve similar uptake going the opposite way, that is, starting with just one central node, many nodes on the same server/location, or a central relay node.
If this is feasible, we could even keep going with the public load test: not the VLAN proof of scalability yet, but a proof of real launch-time performance for 2.0, with clear advice that it will be just a fraction of what is coming.
Also, if we take this path, publicly running the test BM suggested above about taking the P2P protocol out of the scalability measurement could have much greater impact, because having real network performance numbers alongside it would help show it as actual scalability and not just theory.
Just my two BTS