Author Topic: Scheduling Proof of Scalability


Offline cube

After looking into load-testing tools that can work on a low budget, I decided to go with JMeter.  (See https://university.utest.com/introduction-to-load-testing-with-apache-jmeter)

The learning curve for JMeter is manageable, but to customise it for graphene testing I had to pick up Java programming from scratch.  It took me a while to become familiar with Java, as well as with the graphene protocol that the test scripts have to interact with.  I managed to put together a simple test plan for graphene with the following (a rough sketch of the underlying wallet calls follows the list):

1) Creating User Account with Brain Key
2) Transferring funds
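
For reference, here is a rough Python sketch of what those two operations boil down to at the wallet level. This is not the JMeter plan itself, just the underlying RPC calls the samplers issue; it assumes a cli_wallet running unlocked with its HTTP-RPC on port 8092, and the registrar account "nathan", the account name and the asset symbol are placeholders for whatever the test chain uses.

Code:
# Rough sketch of the two test-plan operations against a local cli_wallet
# HTTP-RPC endpoint (assumed unlocked, listening on port 8092).
# "nathan", the account name and the asset symbol are placeholders.
import json
import requests

RPC = "http://127.0.0.1:8092/rpc"

def call(method, params):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    return requests.post(RPC, data=json.dumps(payload)).json()["result"]

# 1) Create a user account from a freshly suggested brain key
# (field name below follows cli_wallet's suggest_brain_key output)
brain = call("suggest_brain_key", [])
call("create_account_with_brain_key",
     [brain["brain_priv_key"], "loadtest-user-1", "nathan", "nathan", True])

# 2) Transfer funds to the new account
call("transfer", ["nathan", "loadtest-user-1", "10", "CORE", "seed funds", True])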

The testing infrastructure is now in place, and new tests with different operations can be added.  Below are some screenshots.  I will start a new thread with a simple guide on using it.

Edit: See new thread at https://bitsharestalk.org/index.php/topic,18768.msg241679.html#msg241679






Offline bytemaster

via: https://bitshares.org/technology/
"the BitShares network can confirm transactions in an average of just 1 second, limited only by the speed of light"
 
100K tps should be the minimum. If those AWS instances were top-of-the-line servers all communicating via fiber (10 Gbps to 1 Tbps+), then we can see what the protocol etc. is truly capable of (in an ideal environment, yes, but it proves what we are touting and then some).

In our recent flooding tests, what I observed via profiling is that the networking code was using about 10x more CPU than the blockchain code.  We also have the slight problem of having to apply every transaction THREE TIMES at the moment: once upon receipt, once when building the block, and once when applying the finished block.

Lastly, we assume an infrastructure for parallel signature verification that does not exist right now (a rough sketch of the idea follows the list below).  So our biggest challenges in hitting 100K in real-world tests are:

1. generating that many transactions
2. validating that many signatures
3. network communication bottlenecks
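
To illustrate the signature-validation point, here is a rough sketch of how checks could be batched across CPU cores. verify_one() is only a stand-in for the real secp256k1 verification, and the worker and batch sizes are arbitrary; the batch-per-worker pattern is the point, not the crypto.

Code:
# Illustration of batched, multi-core signature checking. verify_one() is a
# stand-in for the real secp256k1 verification; the batch-per-worker pattern
# is what matters here.
import hashlib
from concurrent.futures import ProcessPoolExecutor

def verify_one(tx_bytes):
    # Stand-in for CPU-bound signature verification.
    return len(hashlib.sha256(tx_bytes).digest()) == 32

def verify_batch(batch):
    return all(verify_one(tx) for tx in batch)

def parallel_verify(transactions, workers=8, batch_size=1000):
    batches = [transactions[i:i + batch_size]
               for i in range(0, len(transactions), batch_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(verify_batch, batches))

if __name__ == "__main__":
    txs = [str(i).encode() for i in range(100_000)]
    print(parallel_verify(txs))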

I suppose we could say that graphene is like Intel advertising that their CPUs make the internet faster while you still have a dial-up modem.

Offline kenCode

via: https://bitshares.org/technology/
"the BitShares network can confirm transactions in an average of just 1 second, limited only by the speed of light"
 
100K tps should be the minimum. If those AWS instances were top-of-the-line servers all communicating via fiber (10 Gbps to 1 Tbps+), then we can see what the protocol etc. is truly capable of (in an ideal environment, yes, but it proves what we are touting and then some).

Offline Akado

There are two aspects to scalability:

1. How fast can a node actually process transactions / blocks
2. How well can a P2P protocol keep up with the flooding

These are two separate and independent issues.    If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part).   We can have a single node be ALL witnesses and then flood the network with as many transactions as possible.     It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. 

We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.

OK, so we should focus on point 1, how fast a node can actually process transactions/blocks. That's a start. Does everyone agree with this, or are there other suggestions? We need cube or someone who understands this stuff to share some info.

I also assume we need one or more scripts that do the following:

Step one: create a dummy account
Step two: send a transaction

or

Already have multiple accounts created beforehand
Get each of those accounts to perform a transaction

The second method seems better than the first, simply because with the first method an account would need to be created, receive a deposit and then perform a transaction, and then do it all again. That's three operations. Whereas if we did the first two operations beforehand, we would only need to perform one during the test (a rough sketch is below).


I thought of a loop first, but that doesn't make much sense unless one machine could run several instances of that script at the same time, or unless the script had access to hundreds of accounts and performed multiple transactions at the same time. I can't think of a way right now; once again, I'm just a beginner coder. Looping would imply getting access to multiple accounts at the same time, and I don't see how to do that. Sometimes even having different wallets on the same computer can mess things up a little. No idea how to do this; I'll leave it to the experts.
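
A rough sketch of that second method, assuming a cli_wallet with its HTTP-RPC open on port 8092; the account names, asset symbol and pool size are placeholders, and the thread pool stands in for running several script instances at once:

Code:
# Rough sketch of the pre-created-accounts method: accounts already exist and
# are funded, so the test itself only issues transfers. A thread pool stands
# in for running several script instances at once. Account names, the asset
# symbol and the cli_wallet port are placeholders.
import json
import requests
from concurrent.futures import ThreadPoolExecutor

RPC = "http://127.0.0.1:8092/rpc"
ACCOUNTS = ["loadtest-user-%d" % i for i in range(100)]  # pre-created, pre-funded

def call(method, params):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    return requests.post(RPC, data=json.dumps(payload)).json()

def one_transfer(account):
    # Each pre-funded account sends a tiny transfer to a common sink account.
    return call("transfer", [account, "loadtest-sink", "0.001", "CORE", "", True])

with ThreadPoolExecutor(max_workers=20) as pool:
    results = list(pool.map(one_transfer, ACCOUNTS))
print(len(results), "transfers submitted")

A single cli_wallet would likely become the signing bottleneck here, so a real run would spread the accounts across several wallet instances or machines.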

Offline bytemaster

For this single-host-witness + multiple-transaction-nodes testnet, can we use the proposal/vote functionality to alter some parameters of the existing protocol to support a better test environment?  If memory serves correctly, this may not be feasible due to a two-week delay before an approved proposal goes live under the then-current set of witnesses.  If true, perhaps we need a new genesis for the proposed testnet.

    Parameters I feel should be altered:
    • Operation Fees: 0.0 BTS (the goal is to spam the network, so let's not burn the fees; keep CORE flowing)
    • Maintenance Period: 24 hours (witnesses need not change; this operation is resource-intensive and out of scope for this test)

Parameter updating does not take two weeks on the current test network.  It may be more like two hours; someone can probably figure it out by looking at the chain properties or the genesis file.

A new test net with 0 fees would be the easiest way to test this.
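
For a fresh test chain, the parameter changes could simply be baked into the genesis file. Here is a rough sketch; the key names follow the graphene genesis layout but should be checked against the genesis your witness_node actually generates.

Code:
# Rough sketch: patch a freshly generated genesis file for a zero-fee,
# 24-hour-maintenance test chain. The key names ("initial_parameters",
# "current_fees", "maintenance_interval") follow the graphene genesis
# layout but should be verified against your actual genesis.json.
import json

with open("genesis.json") as f:
    genesis = json.load(f)

params = genesis["initial_parameters"]

# Zero out every operation fee so flooding does not burn CORE.
for op_type, fee_fields in params["current_fees"]["parameters"]:
    for field in fee_fields:
        fee_fields[field] = 0

# Run maintenance only once a day; witness set changes are out of scope.
params["maintenance_interval"] = 24 * 60 * 60

with open("genesis-loadtest.json", "w") as f:
    json.dump(genesis, f, indent=2)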



Offline Fox

For this single-host-witness + multiple-transaction-nodes testnet, can we use the proposal/vote functionality to alter some parameters of the existing protocol to support a better test environment?  If memory serves correctly, this may not be feasible due to a two-week delay before an approved proposal goes live under the then-current set of witnesses.  If true, perhaps we need a new genesis for the proposed testnet.

    Parameters I feel should be altered:
    • Operation Fees: 0.0 BTS (the goal is to spam the network, so let's not burn the fees; keep CORE flowing)
    • Maintenance Period: 24 hours (witnesses need not change; this operation is resource-intensive and out of scope for this test)





Offline vegolino

You can count me in for 5000 BTS.  :)

Offline cass

There are two aspects to scalability:

1. How fast can a node actually process transactions / blocks
2. How well can a P2P protocol keep up with the flooding

These are two separate and independent issues. If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part).   We can have a single node be ALL witnesses and then flood the network with as many transactions as possible.     It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. 

We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.

Thanks for the update, BM.
I think there's no rush to prove scalability while you devs are working hard on an even better network protocol for graphene.
Even more so considering the flexibility achieved for the 2.0 release in the meantime, with fallback or alternative network configurations, as explained here by Dan.
If flooding 101 witnesses on a single AWS node helps to find CPU or other non-network-related limitations, let's do it. But for a test to serve as a public proof of scalability (and FUD killer), I think we should isolate only the external factors, and that may better fit the stage when the new communication protocol is almost ready.

As someone who remembers both Novembers (just lurking and learning here until I believed I had something to contribute beyond mining and funding)... as an early adopter I see the aptitude and integrity behind this project, clear as water.
From that perspective I wouldn't mind seeing BitShares 2.0 relying on a temporary non-distributed mode on the way to an improved network.

But as a long-time stakeholder who has seen many other projects take advantage of the lessons learned from BitShares' long experience and transparency just to try to put it down, I wouldn't like to give any opening for more "centralization" FUD if it is worth the effort to avoid it from now on.

Back to the point: I wonder how far the old p2p protocol is from being able to manage, say, ~1000 tps on 5-second blocks. I also wonder whether getting there would mean diverting too many resources from new protocol development.

I think it would be easier for most people to understand (and harder to criticize) if the first release remains fully distributed, at the expense of keeping TPS low enough to stay stable over the old network protocol, as long as it meets the initial requirements. It could still set a new TPS record in a distributed way (and keep away the Ripple comparisons, to name just one possible line of attack).

Then the revamped communication protocol and the resulting increase in TPS would be an extra incentive, another BIG announcement made while already standing on a stable platform. An announcement that everyone could check and even help to design or code, if you like.

I don't know if we could accomplish a similar uptake going the opposite way, that is, starting with just one central node, many nodes on the same server/location, or a central relay node.

If this is feasible, we could even keep going with the public load test: not the vLAN proof of scalability yet, but a proof of real launch-time performance for 2.0, with clear notice that it will be just a fraction of what is coming.

Also, if we take this path, publicly running the test BM suggested above, taking the p2p protocol out of the scalability measurement, could have a much greater impact, because having actual network performance numbers will help show it as real scalability and not just theory.

Just my two BTS.

 +5%

Offline kenCode

@nethyb @xeroc @kenCode @cube @sudo @phillyguy @emailtooaj @onceuponatime @godzirra @rnglab @bobmaloney @ccedk @liondani @tonyk @Akado @mike623317 @CLains @DataSecurityNode @puppies @clayop @betax @abit @chryspano @Slappy @Xeldal @merockstar @tbone @Thom @Fox @aloha
 
Removing the WAN limitations could prove our scalability beyond 100K tps. I'm looking forward to this AWS test.
It would be great if we could all record it on video with our phones: split-screen CLI and xeroc's tps gauge. Other ideas welcome.
 
That makes 30 people so far willing to put in some BTS for this 100K tps LAN test.
Anybody else?

Offline cube

+5% +5% +5% +5%

To puppies & cube! You have cleared my fog on what you're trying to do (well, maybe still some mist in the air concerning the poll).

I'll help any way I can.
...
Great to have you!  We are gathering momentum as more help is coming in.   :)

..
If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part).   We can have a single node be ALL witnesses and then flood the network with as many transactions as possible.     It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. 

We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.

Thanks for the input and providing the right focus on the load test.   We should indeed do away with the p2p WAN testing since the p2p protocol will be undergoing a major upgrade after 2.0. 

It looks like we could proceed with a LAN test consisting of a single node where all the witnesses are located and where the processing of transactions is done.  The 'blasting' part will be offloaded to one or more other computers/instances in the same LAN with gigabit bandwidth.  The transactions will be sent from these computers/instances to the root node via the new relay mechanism.  If we can do this, we can tell the world that BTS 2.0 can indeed process 100K tps (or better), setting aside the natural speed/latency limitations of the internet.
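
As a rough sketch of the root-node side, something like this could generate the config fragment that puts every witness on the single node. witness-id and private-key are the standard graphene witness-plugin options, and the ID range and key pair below are placeholders for the testnet genesis values.

Code:
# Rough sketch: emit a config.ini fragment that runs all 101 witnesses on the
# single root node. witness-id / private-key are the graphene witness-plugin
# options; the ID range and the key pair are placeholders for the testnet
# genesis values.
NUM_WITNESSES = 101
INIT_PUBLIC_KEY = "TEST...placeholder"
INIT_WIF_KEY = "5J...placeholder"

lines = ['witness-id = "1.6.%d"' % i for i in range(1, NUM_WITNESSES + 1)]
lines.append('private-key = ["%s", "%s"]' % (INIT_PUBLIC_KEY, INIT_WIF_KEY))

with open("config-allwitnesses.ini", "w") as f:
    f.write("\n".join(lines) + "\n")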

Offline xeroc


PS off topic:
Now I realize the fees... 4 BTS?! (instead of the current 0.1 or 0.5)
What if the market cap increases 10-fold, for example, or even more? Isn't that too much? (I assume the "delegates" can change it.) What about dynamic fees, a percentage like 0.2% for example?
Would that not be better, so we are not in the position of having to change the fees every now and then?

I believe that bytemaster mentioned '20 cent fees', so I figured they would vary based on the market price.  So if the market cap went way up, it would cost less BTS. 
Is this accurate?
Fees are a parameter that can be defined by shareholders via the committee.

Offline rnglab

There are two aspects to scalability:

1. How fast can a node actually process transactions / blocks
2. How well can a P2P protocol keep up with the flooding

These are two separate and independent issues. If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part).   We can have a single node be ALL witnesses and then flood the network with as many transactions as possible.     It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. 

We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.

Thanks for the update, BM.
I think there's no rush to prove scalability while you devs are working hard on an even better network protocol for graphene.
Even more so considering the flexibility achieved for the 2.0 release in the meantime, with fallback or alternative network configurations, as explained here by Dan.
If flooding 101 witnesses on a single AWS node helps to find CPU or other non-network-related limitations, let's do it. But for a test to serve as a public proof of scalability (and FUD killer), I think we should isolate only the external factors, and that may better fit the stage when the new communication protocol is almost ready.

As someone who remembers both Novembers (just lurking and learning here until I believed I had something to contribute beyond mining and funding)... as an early adopter I see the aptitude and integrity behind this project, clear as water.
From that perspective I wouldn't mind seeing BitShares 2.0 relying on a temporary non-distributed mode on the way to an improved network.

But as a long-time stakeholder who has seen many other projects take advantage of the lessons learned from BitShares' long experience and transparency just to try to put it down, I wouldn't like to give any opening for more "centralization" FUD if it is worth the effort to avoid it from now on.

Back to the point: I wonder how far the old p2p protocol is from being able to manage, say, ~1000 tps on 5-second blocks. I also wonder whether getting there would mean diverting too many resources from new protocol development.

I think it would be easier for most people to understand (and harder to criticize) if the first release remains fully distributed, at the expense of keeping TPS low enough to stay stable over the old network protocol, as long as it meets the initial requirements. It could still set a new TPS record in a distributed way (and keep away the Ripple comparisons, to name just one possible line of attack).

Then the revamped communication protocol and the resulting increase in TPS would be an extra incentive, another BIG announcement made while already standing on a stable platform. An announcement that everyone could check and even help to design or code, if you like.

I don't know if we could accomplish a similar uptake going the opposite way, that is, starting with just one central node, many nodes on the same server/location, or a central relay node.

If this is feasible, we could even keep going with the public load test: not the vLAN proof of scalability yet, but a proof of real launch-time performance for 2.0, with clear notice that it will be just a fraction of what is coming.

Also, if we take this path, publicly running the test BM suggested above, taking the p2p protocol out of the scalability measurement, could have a much greater impact, because having actual network performance numbers will help show it as real scalability and not just theory.

Just my two BTS.

Offline Ander


PS off topic:
Now I realize the fees... 4 BTS?! (instead of the current 0.1 or 0.5)
What if the market cap increases 10-fold, for example, or even more? Isn't that too much? (I assume the "delegates" can change it.) What about dynamic fees, a percentage like 0.2% for example?
Would that not be better, so we are not in the position of having to change the fees every now and then?

I believe that bytemaster mentioned '20 cent fees', so I figured they would vary based on the market price.  So if the market cap went way up, it would cost less BTS. 
Is this accurate?

Offline bytemaster

There are two aspects to scalability:

1. How fast can a node actually process transactions / blocks
2. How well can a P2P protocol keep up with the flooding

These are two separate and independent issues.    If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part).   We can have a single node be ALL witnesses and then flood the network with as many transactions as possible.     It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. 

We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out. 