Author Topic: Scheduling Proof of Scalability  (Read 20346 times)


Offline cube

  • Hero Member
  • *****
  • Posts: 1404
  • Bit by bit, we will get there!
    • View Profile
  • BitShares: bitcube
I'd like to volunteer my time to drive this, but we need more technical volunteers and donations.  My guesstimate is that the cost is much more than what has been donated so far.

Didn't we have BitSharesBreakout for this? If I'm not mistaken about the name, I remember it had around 1M BTS for donation purposes. A delegate got elected for that, so perhaps some of those funds could be used for this?

Throughout the week we will probably have more people helping out and donating but if there's a lack I could double mine.

And ffs hope no one forgets to record this, would be an epic fail  :P

You're right about the scripts, but could they be written, and could everyone work with them, in those 3 weeks? With all the work being done I don't know if this is achievable. People might have other priorities now... although this is important too imo

Could someone verify BitSharesBreakout could help in this?

Yes, we need to look for a load-test input and capture tool.  I guess such a (free) tool should exist, and we can join in to write scripts.  Let's do some research.
ID: bitcube
bitcube is a dedicated witness and committee member. Please vote for bitcube.

Offline Akado

  • Hero Member
  • *****
  • Posts: 2752
    • View Profile
  • BitShares: akado
I'd like to volunteer my time to drive this, but we need more technical volunteers and donations.  My guesstimate is that the cost is much more than what has been donated so far.

Didn't we have BitSharesBreakout for this? If I'm not mistaken about the name, I remember it had around 1M BTS for donation purposes. A delegate got elected for that, so perhaps some of those funds could be used for this?

Throughout the week we will probably have more people helping out and donating but if there's a lack I could double mine.

And ffs hope no one forgets to record this, would be an epic fail  :P

You're right about the scripts, but could they be written, and could everyone work with them, in those 3 weeks? With all the work being done I don't know if this is achievable. People might have other priorities now... although this is important too imo
« Last Edit: September 07, 2015, 11:30:56 pm by Akado »
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline cube

  • Hero Member
  • *****
  • Posts: 1404
  • Bit by bit, we will get there!
    • View Profile
  • BitShares: bitcube

@nethyb - We need a bid, please.
@kenCode @cube @sudo @phillyguy @emailtooaj @onceuponatime @godzirra @rnglab @bobmaloney @ccedk @liondani @tonyk @Akado
 
Removing the WAN limitations could prove our scalability beyond 100K tps. I'm looking forward to this AWS test.
It would be great if we could all record it on video with our phones. Split screen CLI and xeroc's tps gauge. Other ideas welcome.
 
This is 14 people so far willing to put in some BTS for this 100K tps LAN test.
Anybody else?

I am truly impressed by the community's response to the call for donors and volunteers.  I will put up more detailed ideas here.

1) We will need to perform load tests in both a Local Area Network and a Wide Area Network (i.e. individual nodes distributed over the internet with different 'hops', latency and speed).

2) The Local Area Network would need to be a gigabit environment so that we can test the possibility of 100K tps.  nethyb mentioned that AWS has such a network. That is cool. Depending on the load-test script created, we may end up in either an MS Windows or a Linux environment.  The cost may vary slightly.

We need to determine how much of the 100K tps load each machine should generate in order to find the optimal number of computers/AWS instances to pump these transactions.  We probably need a preliminary test to find out the maximum load a single computer/instance can take.
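For planning purposes, that sizing step can be sketched as a back-of-the-envelope calculation. The per-instance throughput below is a placeholder assumption until the preliminary test actually measures it:

```python
import math

TARGET_TPS = 100_000       # overall goal for the LAN test
PER_INSTANCE_TPS = 8_000   # hypothetical: to be replaced by measured per-instance max
HEADROOM = 1.25            # 25% safety margin so no load generator runs flat out

# Number of load-generator instances needed to pump the target rate
instances = math.ceil(TARGET_TPS * HEADROOM / PER_INSTANCE_TPS)
print(instances)  # 16 with these placeholder numbers
```

Once phase 1 gives a real per-instance figure, the same arithmetic fixes the AWS budget.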

I am not sure if there are performance test scripts already present in the graphene test suite (I have not taken a look yet).  If not, we will need to develop one - possibly by modifying xeroc's python rpc suite.  We will still need to write scripts that generate dummy accounts and dummy transactions.  A big part of the work is right here.

We can perhaps call this phase 1 of the load testing.

3) Distributed WAN testing.  Once we have the results from phase 1, we can estimate the number of nodes needed. We will need volunteer nodes for this test.
Even though we know the WAN cannot achieve 100K tps, we should gather information on what and where the bottlenecks are.  We need this to convince the public that, with these bottlenecks removed/improved, we can move towards 100K tps for the WAN too.

We can do this test as phase 2.
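As a rough illustration of one WAN bottleneck, the tps ceiling imposed by raw bandwidth alone can be estimated. Both the average transaction size and the link speed below are assumed figures, not measurements:

```python
# Rough ceiling on tps imposed by a node's network link alone,
# ignoring latency, protocol overhead and signature verification.
AVG_TX_BYTES = 100        # assumed average transaction size on the wire
LINK_BPS = 100 * 10**6    # an assumed 100 Mbit/s WAN uplink

max_tps = LINK_BPS / 8 / AVG_TX_BYTES
print(max_tps)  # 125000.0
```

With these numbers bandwidth alone would not be the limit, which suggests latency and CPU (signature verification) are the more likely WAN bottlenecks to instrument.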

4) A measuring tool for the graphene network.  Fortunately, xeroc has already developed such a tool.  We can use that.

5) The test scripts should be able to generate transaction logs (and hopefully some statistics) in order to show authenticity to the public.

6) The dev team's input and support are crucial to success.

I'd like to volunteer my time to drive this, but we need more technical volunteers and donations.  My guesstimate is that the cost is much more than what has been donated so far.

Edit: I just checked the graphene repository.  I am afraid there is no such transaction test script/program available yet.  We need to look for a load-test input and capture tool and use it to write our input scripts.
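Since no ready-made generator exists yet, here is a minimal, self-contained sketch of what the dummy-account / dummy-transaction part of such a script could look like. The account naming scheme and the transfer dict layout are illustrative assumptions only; the real script would push these payloads through the cli_wallet / RPC interface (e.g. via xeroc's python rpc suite):

```python
import json
import random

def make_dummy_accounts(n):
    """Generate throwaway account names for the load test."""
    return [f"loadtest-{i:06d}" for i in range(n)]

def make_dummy_transfers(accounts, count, seed=42):
    """Build simple transfer payloads between random dummy accounts.
    A fixed seed keeps runs reproducible, so a test can be replayed
    and its logs compared across machines."""
    rng = random.Random(seed)
    txs = []
    for _ in range(count):
        sender, receiver = rng.sample(accounts, 2)  # sender != receiver
        txs.append({"from": sender, "to": receiver,
                    "amount": rng.randint(1, 1000), "asset": "TEST"})
    return txs

accounts = make_dummy_accounts(100)
batch = make_dummy_transfers(accounts, 5)
print(json.dumps(batch[0]))
```

Each load-generator instance would then loop over such batches and submit them via RPC as fast as the node accepts them.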
« Last Edit: September 07, 2015, 11:43:01 pm by cube »
ID: bitcube
bitcube is a dedicated witness and committee member. Please vote for bitcube.

Offline Akado

  • Hero Member
  • *****
  • Posts: 2752
    • View Profile
  • BitShares: akado
@Tony no problem, my wording wasn't the best.

Great initiative.
Following some thoughts from the previous thread (quote below), wouldn't it be better to prioritize development status (and dev schedules) before coordinating a date with volunteers?

Nice to see the pledge growing  ( : 

So you guys are talking about setting up a private network of aws machines in an attempt to push 100k tps?  I don't really see the utility, since it's not something we could currently do in the wild.  I think we would be better served if we got a large amount of people to join the actual test net and see how high we could get the tps there.

I see it more as an open demonstration of scalability. 
Sometime after the last code tweaks, the devs' local stress tests may culminate in an auspicious, community-driven final test - one that also rings the opening bell for BitShares 2.0.

I guess we need some dev input on that? Would really like to know their opinion on this. If/when is the best time?
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline phillyguy


I'm in.

1. I'm contributing with the following amount of BTS: 2500

2. I'm contributing with the following resources: N/A

3. I'm aiming for the following amount of transactions per second: N/A


Offline rnglab

  • Full Member
  • ***
  • Posts: 171
    • View Profile
  • BitShares: rnglab
Great initiative.
Following some thoughts from the previous thread (quote below), wouldn't it be better to prioritize development status (and dev schedules) before coordinating a date with volunteers?

Nice to see the pledge growing  ( : 

So you guys are talking about setting up a private network of aws machines in an attempt to push 100k tps?  I don't really see the utility, since it's not something we could currently do in the wild.  I think we would be better served if we got a large amount of people to join the actual test net and see how high we could get the tps there.

I see it more as an open demonstration of scalability. 
Sometime after the last code tweaks, the devs' local stress tests may culminate in an auspicious, community-driven final test - one that also rings the opening bell for BitShares 2.0.




Offline tonyk

  • Hero Member
  • *****
  • Posts: 3308
    • View Profile
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: 300K

ok tony, that seems to be what you expect the total network to achieve, so I edited the template. What I meant was how much tps a single tester can do during the test. And could we even handle 300K?

I read it as "What we aim for will impact which instances need to be bought, and I believe we should aim for more than 100K tps"?
Probably my fault.
Lack of arbitrage is the problem, isn't it. And this 'should' solves it.

Offline Akado

  • Hero Member
  • *****
  • Posts: 2752
    • View Profile
  • BitShares: akado
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: 300K

ok tony, that seems to be what you expect the total network to achieve, so I edited the template. What I meant was how much tps a single tester can do during the test. And could we even handle 300K?

And we need more people to test. Of the people who voted, who will actually test? At least post in the thread please. If you won't test but are voting, you're messing up the results.
« Last Edit: September 07, 2015, 11:04:48 pm by Akado »
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline CLains

  • Hero Member
  • *****
  • Posts: 2606
    • View Profile
  • BitShares: clains
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: N/A

Ganbarimasu! (I'll do my best!)

Offline tonyk

  • Hero Member
  • *****
  • Posts: 3308
    • View Profile
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: N/A
« Last Edit: September 07, 2015, 11:13:21 pm by tonyk »
Lack of arbitrage is the problem, isn't it. And this 'should' solves it.

Offline Akado

  • Hero Member
  • *****
  • Posts: 2752
    • View Profile
  • BitShares: akado
(old thread: bitsharestalk/index.php/topic,18299.0/all.html)
 
I, for one, would be happy to contribute some resources (donate some BTS) or spin up a number of AWS instances to help with load testing, and I'm sure many others in the community would also help if someone could develop the instructions or scripts to perform a private testnet load/tps test.
Using AWS, you could effectively have a 10Gb private LAN with the ability to spin up a significant number of servers for a short period of time at reasonable cost using spot pricing.
Here is an article on how someone configured AWS instances to achieve 1 Million TPS for just $1.68/hour - http://highscalability.com/blog/2014/8/18/1-aerospike-server-x-1-amazon-ec2-instance-1-million-tps-for.html
If we chime in and say that we (as the community) would be willing to fund/support/provide resources for this test - it may encourage one of us to take it on...

 
@nethyb - We need a bid, please.
@kenCode @cube @sudo @phillyguy @emailtooaj @onceuponatime @godzirra @rnglab @bobmaloney @ccedk @liondani @tonyk @Akado
 
Removing the WAN limitations could prove our scalability beyond 100K tps. I'm looking forward to this AWS test.
It would be great if we could all record it on video with our phones. Split screen CLI and xeroc's tps gauge. Other ideas welcome.
 
This is 14 people so far willing to put in some BTS for this 100K tps LAN test.
Anybody else?
Ken, could you fill in the template if possible, in case you have an idea? It would be interesting to know, since we haven't had any testers post it yet.
https://metaexchange.info | Bitcoin<->Altcoin exchange | Instant | Safe | Low spreads

Offline kenCode

  • Hero Member
  • *****
  • Posts: 2283
    • View Profile
    • Agorise
(old thread: bitsharestalk/index.php/topic,18299.0/all.html)
 
I, for one, would be happy to contribute some resources (donate some BTS) or spin up a number of AWS instances to help with load testing, and I'm sure many others in the community would also help if someone could develop the instructions or scripts to perform a private testnet load/tps test.
Using AWS, you could effectively have a 10Gb private LAN with the ability to spin up a significant number of servers for a short period of time at reasonable cost using spot pricing.
Here is an article on how someone configured AWS instances to achieve 1 Million TPS for just $1.68/hour - http://highscalability.com/blog/2014/8/18/1-aerospike-server-x-1-amazon-ec2-instance-1-million-tps-for.html
If we chime in and say that we (as the community) would be willing to fund/support/provide resources for this test - it may encourage one of us to take it on...
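To make the spot-pricing point concrete, a hedged cost sketch for a short burst test. The spot price, instance count and duration below are all assumed figures, not quotes from AWS:

```python
# Hypothetical cost sketch for a short LAN burst test on spot instances.
SPOT_PRICE_PER_HOUR = 0.35  # assumed spot price per instance, USD
INSTANCES = 20              # assumed fleet: load generators + witness nodes
HOURS = 3                   # setup, test runs, teardown

total = SPOT_PRICE_PER_HOUR * INSTANCES * HOURS
print(f"${total:.2f}")  # $21.00 under these assumptions
```

Even if the real spot prices differ, the point stands: a few hours of a sizeable fleet is well within what the pledges in this thread could cover.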

 
@nethyb - We need a bid, please.
@kenCode @cube @sudo @phillyguy @emailtooaj @onceuponatime @godzirra @rnglab @bobmaloney @ccedk @liondani @tonyk @Akado
 
Removing the WAN limitations could prove our scalability beyond 100K tps. I'm looking forward to this AWS test.
It would be great if we could all record it on video with our phones. Split screen CLI and xeroc's tps gauge. Other ideas welcome.
 
This is 14 people so far willing to put in some BTS for this 100K tps LAN test.
Anybody else?
kenCode - Decentraliser @ Agorise
Matrix/Keybase/Hive/Commun/Github: @Agorise
www.PalmPay.chat

Offline liondani

  • Hero Member
  • *****
  • Posts: 3737
  • Inch by inch, play by play
    • View Profile
    • My detailed info
  • BitShares: liondani
  • GitHub: liondani
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: N/A

Offline onceuponatime

1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: N/A

Offline bobmaloney

1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second:

"The crows seemed to be calling his name, thought Caw."
- Jack Handey (SNL)