BitShares Forum

Main => Technical Support => Topic started by: Akado on September 07, 2015, 08:51:24 pm

Title: Scheduling Proof of Scalability
Post by: Akado on September 07, 2015, 08:51:24 pm
OK, so you can vote 23 times. Don't forget to vote for both the day and the hours! This gives people multiple votes so we can see which days they're available. After this poll closes, another one will be run to pin down the exact hours, so that we get the most users doing the most transactions at the same time. Vote only if you're testing transactions; this is to coordinate testers only. Pick a day you think you can be online most of the time, so it's easier to get the timing right.

Try to plan ahead so we have time to pick the hours and do this correctly. Edit: added possible hours to the poll. Capped at 6 P.M. EST because that is around 11 P.M. GMT, which might already be too late for some people.

Poll will close in 7 days.

Although I won't be able to participate, as at the moment I don't have the knowledge or the time to acquire it, I will contribute with BTS. Other members will do this as well.

Please mention the amount of BTS you're willing to donate.
For the testers, please state how you will use the BTS, in case some don't know. We want everyone to be informed.
I won't hold/receive any BTS for this; it's still to be determined who they will be donated to, or you can just send them to whoever you want. I'm trying to get things rolling, but since I won't be testing, don't send any to me.

Just to simplify things, fill in the template below (number 2 is for whoever is going to use funds donated by other users; number 3 is if you have any idea at all, as I don't even know if it's possible to estimate):

Code: [Select]
1. I'm contributing with the following amount of BTS:
2. I'm contributing with the following resources (for testers only):
3. I'm aiming to contribute following amount of transactions per second (for testers only):

Total Amount of BTS pledged:
- akado: 2,500
- bobmaloney: 2,500
- onceuponatime: 2,500
- liondani: 2,500
- tonyk: 2,500
- CLains: 2,500
- phillyguy: 2,500
- godzirra: 2,500
- DataSecurityNode: 2,500
- emailtooaj: 5,000
- kenCode: 2,500
- nethyb: 10,000
- puppies: 2,500
- clayop: 2,500
- betax: 2,500
- abit: 2,500
- chryspano: 2,500
_______________
Total: 52,500

Total amount of transactions per second aimed for:

The amount of transactions aimed for might not be the true value we end up getting, because people might be online at somewhat different times, but it gives us an estimate. I will edit the thread to update the values.
Title: Re: Scheduling Proof of Scalability
Post by: Akado on September 07, 2015, 09:07:18 pm
1. I'm contributing with the following amount of BTS: 2,500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: N/A
Title: Re: Scheduling Proof of Scalability
Post by: bobmaloney on September 07, 2015, 09:21:31 pm
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second:

(http://www.netbooknews.com/wp-content/2011/09/dr-evil-1-million-dollars.jpg)
Title: Re: Scheduling Proof of Scalability
Post by: onceuponatime on September 07, 2015, 09:35:03 pm
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: N/A
Title: Re: Scheduling Proof of Scalability
Post by: liondani on September 07, 2015, 10:05:38 pm
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: N/A
Title: Re: Scheduling Proof of Scalability
Post by: kenCode on September 07, 2015, 10:24:21 pm
(old thread: bitsharestalk/index.php/topic,18299.0/all.html)
 
I, for one, would be happy to contribute some resources (donate some BTS) or spin up a number of AWS instances to help with load testing, and I'm sure many others in the community would also help if someone could develop the instructions or scripts to perform a private testnet load/tps test.
Using AWS, you could effectively have a 10Gb private LAN with the ability to spin up a significant number of servers for a short period of time at reasonable cost using spot pricing.
Here is an article on how someone has configured AWS instances to achieve 1 Million TPS For Just $1.68/Hour - http://highscalability.com/blog/2014/8/18/1-aerospike-server-x-1-amazon-ec2-instance-1-million-tps-for.html (http://highscalability.com/blog/2014/8/18/1-aerospike-server-x-1-amazon-ec2-instance-1-million-tps-for.html)
If we chime in and say we (as the community) would be willing to fund/support/provide resources for this test, it may encourage one of us to take it on...

 
@nethyb - We need a bid, please.
@kenCode @cube @sudo @phillyguy @emailtooaj @onceuponatime @godzirra @rnglab @bobmaloney @ccedk @liondani @tonyk @Akado
 
Removing the WAN limitations could prove our scalability beyond 100K tps.. I'm looking forward to this AWS test.
It would be great if we could all record it on video with our phones. Split screen CLI and xeroc's tps gauge. Other ideas welcome.
 
This is 14 people so far willing to put in some BTS for this 100K tps LAN test.
Anybody else?
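
For anyone who wants to experiment with the spot-instance idea quoted above, a minimal sketch using boto3 might look like the following. The region, AMI, key pair, security group and bid price are all placeholders, not recommendations:

Code: [Select]
# Minimal sketch of requesting a batch of EC2 spot instances with boto3.
# Everything marked "placeholder" below is illustrative only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # placeholder region

response = ec2.request_spot_instances(
    SpotPrice="0.06",              # illustrative bid per hour
    InstanceCount=10,              # scale up for the real test
    LaunchSpecification={
        "ImageId": "ami-xxxxxxxx",            # placeholder AMI with witness_node pre-installed
        "InstanceType": "m3.xlarge",
        "KeyName": "testnet-key",             # placeholder key pair
        "SecurityGroupIds": ["sg-xxxxxxxx"],  # placeholder security group
    },
)

for req in response["SpotInstanceRequests"]:
    print(req["SpotInstanceRequestId"], req["State"])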
Title: Re: Scheduling Proof of Scalability
Post by: Akado on September 07, 2015, 10:49:43 pm
(old thread: bitsharestalk/index.php/topic,18299.0/all.html)
 
I for one, would be happy to contribute some resources (donate some BTS) or spin up a number of AWS instances to help with load testing, and I'm sure many others in the community would also help if someone could develop the instructions or scripts to perform a private testnet load/tps test.
Using AWS, you could effectively have a 10Gb private LAN with the ability to spin up a significant number of servers for a short period of time at reasonable cost using spot pricing.
Here is an atricle on how someone has configured AWS instances to achieve 1 Million TPS For Just $1.68/Hour - http://highscalability.com/blog/2014/8/18/1-aerospike-server-x-1-amazon-ec2-instance-1-million-tps-for.html (http://highscalability.com/blog/2014/8/18/1-aerospike-server-x-1-amazon-ec2-instance-1-million-tps-for.html)
If we chime and say we (as the community) would be willing to fund/support/provide resources for this test - it may encourage one us to take it on...

 
@nethyb - We need a bid, please.
@kenCode @cube @sudo @phillyguy @emailtooaj @onceuponatime @godzirra @rnglab @bobmaloney @ccedk @liondani @tonyk @Akado
 
Removing the WAN limitations could prove our scalability beyond 100K tps.. I'm looking forward to this AWS test.
It would be great if we could all record it on video with our phones. Split screen CLI and xeroc's tps gauge. Other ideas welcome.
 
This is 14 people so far willing to put in some BTS for this 100K tps LAN test.
Anybody else?
Ken, could you fill in the template if possible, in case you have an idea? It would be interesting to know, since we haven't had any testers post it yet.
Title: Re: Scheduling Proof of Scalability
Post by: tonyk on September 07, 2015, 10:52:08 pm
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: N/A
Title: Re: Scheduling Proof of Scalability
Post by: CLains on September 07, 2015, 10:59:07 pm
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: N/A

Ganbarimasu!
Title: Re: Scheduling Proof of Scalability
Post by: Akado on September 07, 2015, 11:01:24 pm
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: 300K

OK Tony, that seems to be what you expect the total network to achieve, so I edited the template. What I meant was how much tps a single tester can push during the test. And could we even handle 300k?

And we need more people to test. Of the people who voted, who will actually test? At least post in the thread, please. If you won't test but are voting, you're messing up the results.
Title: Re: Scheduling Proof of Scalability
Post by: tonyk on September 07, 2015, 11:04:41 pm
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: 300K

ok tony that seems to be what you expect the total network to achieve so I edited the template. What I mean was how much tps can a tester do during the test. And could we even handle 300k?

I read it as "What we aim for will impact what instances need to be bought. And I believe we should aim for more than 100K tps" ?
Probably my fault.
Title: Re: Scheduling Proof of Scalability
Post by: rnglab on September 07, 2015, 11:07:16 pm
Great initiative.
Following some thoughts from the previous thread (quoted below), wouldn't it be better to prioritize development status (and dev schedules) before coordinating a date with volunteers?

Nice to see the pledge growing  ( : 

So you guys are talking about setting up a private network of aws machines in an attempt to push 100k tps?  I don't really see the utility, since it's not something we could currently do in the wild.  I think we would be better served if we got a large amount of people to join the actual test net and see how high we could get the tps there.

I see it more as an open demonstration of scalability.
Sometime after the last code tweaks and the devs' local stress tests, it could turn into an auspicious, community-driven final test, and also serve as the opening bell for BitShares 2.0.



Title: Re: Scheduling Proof of Scalability
Post by: phillyguy on September 07, 2015, 11:11:54 pm

I'm in.

1. I'm contributing with the following amount of BTS: 2500

2. I'm contributing with the following resources: N/A

3. I'm aiming for the following amount of transactions per second: N/A

Title: Re: Scheduling Proof of Scalability
Post by: Akado on September 07, 2015, 11:12:13 pm
@Tony no problem, my wording wasn't the best.

Great initiative.
Following some thoughts from previous thread  (quote below), wouldn't be better to prioritize development status (and dev schedules) before coordinating a date with volunteers?

Nice to see the pledge growing  ( : 

So you guys are talking about setting up a private network of aws machines in an attempt to push 100k tps?  I don't really see the utility, since it's not something we could currently do in the wild.  I think we would be better served if we got a large amount of people to join the actual test net and see how high we could get the tps there.

I see it more as an open demonstration of scalability. 
Sometime after the last code tweaks and dev's local stress tests may result in an auspicious, community driven final test, and also as the opening bells for Bitshares 2.0

I guess we need some dev input on that? I would really like to know their opinion on this. If/when is the best time?
Title: Re: Scheduling Proof of Scalability
Post by: cube on September 07, 2015, 11:14:51 pm

@nethyb - We need a bid, please.
@kenCode @cube @sudo @phillyguy @emailtooaj @onceuponatime @godzirra @rnglab @bobmaloney @ccedk @liondani @tonyk @Akado
 
Removing the WAN limitations could prove our scalability beyond 100K tps.. I'm looking forward to this AWS test.
It would be great if we could all record it on video with our phones. Split screen CLI and xeroc's tps gauge. Other ideas welcome.
 
This is 14 people so far willing to put in some BTS for this 100K tps LAN test.
Anybody else?

I am truly impressed by the community's response to the call for donors and volunteers.  I will put up more detailed ideas here.

1) We will need to perform load tests in both a Local Area Network and a Wide Area Network (i.e. individual nodes distributed over the internet, with different 'hops', latency and speed).

2) The Local Area Network test would need to be in a gigabit environment so that we can test the possibility of 100K tps.  nethyb mentioned that AWS has such a network. That is cool. Depending on the load test script created, we may end up in either an MS Windows or a Linux environment.  The cost may vary slightly.

We need to determine how much of the 100K tps load to offload to each machine, in order to find the optimal number of computers/AWS instances to pump these transactions.  We probably need to do a test to find out the max load a single computer/instance can take.

I am not sure if there are performance test scripts already present in the graphene test suite (I have not taken a look yet).  If not, we will need to develop one, possibly by modifying xeroc's Python RPC suite.  We will still need to write scripts that generate dummy accounts and dummy transactions.  A big part of the work is right here (a rough sketch follows at the end of this post).

We can perhaps call this phase 1 of the load testing.

3) Distributed WAN testing.  Once we have the results from phase 1, we can estimate the number of nodes needed. We will need volunteer nodes for this test.
Even though we know the WAN cannot achieve 100K tps, we should get information on what and where the bottlenecks are.  We need this to convince the public that, with these bottlenecks removed/improved, we can move towards 100K tps over the WAN too.

We can do this test as phase 2.

4) A measuring tool for the graphene network.  The good thing is that xeroc has already developed such a tool.  We can use that.

5) The test scripts should be able to generate some transaction logs (and hopefully some statistics) in order to show authenticity to the public.

6) The dev team's input and support are crucial to success.

I would like to volunteer my time to drive this, but we need more technical volunteers and donations.  My guesstimate is that the cost is much more than what has been donated so far.

Edit: I just checked the graphene repository.  I am afraid there is no such transaction test script/program available yet.  We need to look for a load-test input-and-capture tool and use it to write our input scripts.
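
As a starting point for the dummy-account and dummy-transaction scripts mentioned above, something along these lines could be built on top of the cli_wallet HTTP-RPC. The /rpc endpoint path, the method signatures and the registrar/dummy account names are assumptions on my part; verify everything against the wallet's `help` before relying on it:

Code: [Select]
# Rough sketch of a dummy-account / dummy-transfer generator driving a locally
# running cli_wallet over HTTP-RPC (e.g. started with `cli_wallet -H 127.0.0.1:8092`).
# Endpoint path, method signatures and account names are assumptions -- check
# them with `help` in the wallet before using.
import json
import requests

RPC_URL = "http://127.0.0.1:8092/rpc"   # assumed HTTP-RPC endpoint

def rpc(method, *params):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": list(params)}
    r = requests.post(RPC_URL, data=json.dumps(payload))
    r.raise_for_status()
    return r.json().get("result")

# Register a handful of dummy accounts under an existing, funded registrar
# account (placeholder name; it must already be imported into this wallet).
REGISTRAR = "test-registrar"
for i in range(10):
    name = "dummy-account-%d" % i
    key = rpc("suggest_brain_key")
    rpc("register_account", name, key["pub_key"], key["pub_key"],
        REGISTRAR, REGISTRAR, 0, True)
    rpc("import_key", name, key["wif_priv_key"])             # so we can sign for it later
    rpc("transfer", REGISTRAR, name, "1000", "CORE", "seed funds", True)

# Then spam small transfers between the dummy accounts.
for i in range(1000):
    src = "dummy-account-%d" % (i % 10)
    dst = "dummy-account-%d" % ((i + 1) % 10)
    rpc("transfer", src, dst, "1", "CORE", "stress test", True)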
Title: Re: Scheduling Proof of Scalability
Post by: Akado on September 07, 2015, 11:28:27 pm
I like to volunteer my time to drive this but we need more technical volunteers and donations.  My guestimate is that the cost is much more than what is donated so far.

Didn't we have BitSharesBreakout for this, if I'm not mistaken about the name? I remember it had around 1M BTS for donation purposes. A delegate got elected for that, so some of those funds could be used for this?

Throughout the week we will probably have more people helping out and donating, but if there's a shortfall I could double mine.

And ffs I hope no one forgets to record this, that would be an epic fail  :P

You're right about the scripts, but could they be written, and could everyone work with them, in those 3 weeks? With all the work being done I don't know if this is achievable. People might have other priorities now... although this is important too imo
Title: Re: Scheduling Proof of Scalability
Post by: cube on September 07, 2015, 11:45:38 pm
I like to volunteer my time to drive this but we need more technical volunteers and donations.  My guestimate is that the cost is much more than what is donated so far.

Didn't we have BitSharesBreakout for this? If I didn't mistake myself on the name. I remember it had around 1M BTS for donation purposes. Delegate got elected for that so some funds could be used for this?

Throughout the week we will probably have more people helping out and donating but if there's a lack I could double mine.

And ffs hope no one forgets to record this, would be an epic fail  :P

You're right with the scripts, but could they be writen and could everyone work with them in those 3 weeks? With all the work being done I don't know if this is achievable. People might have other priorities now... although this is important too imo

Could someone verify BitSharesBreakout could help in this?

Yes, we need to look out for a load-test input and capture tool.  I guess such a (free) tool should exist, and we can join in to write scripts.  Let's do some research.
Title: Re: Scheduling Proof of Scalability
Post by: clayop on September 07, 2015, 11:46:04 pm
2) Local Area Network would need to be in a gigabit environment so that we can test the possibility of 100K tps.  nethyb mentioned about aws having such a network. That is cool. Depending on the load test script created, we may end up either a MS Windows or a Linux environment.  The cost may vary slightly. 

We need to determine how much load (of the 100K tps) to offload in order to find out the optimal number of computers/aws instances to pump these transactions.  We probably need to do a test to find out the max load a computer/instance can take.

I am not sure if there are performance test scripts already present in the graphene test suite (I have not taken a look yet).  If not, we will need to develop one - possibly modifying xeroc's python rpc suite.  We will still need to write scripts that generate dummy accounts and dummy transactions.  A big part of the work is right here.

Edit: I just checked the graphene repository.  I am afraid there is no such transaction test script/program available yet.  We need to look for load test input and capture tool and use it to write our input scripts.

Fully agreed with this argument.
Title: Re: Scheduling Proof of Scalability
Post by: clout on September 08, 2015, 12:05:52 am
Why do people have to donate bts? We can demonstrate scalability in a test net.
Title: Re: Scheduling Proof of Scalability
Post by: clayop on September 08, 2015, 12:20:12 am
Why do people have to donate bts? We can demonstrate scalability in a test net.

Perhaps it needs high performance VPSs, which cost at least $1 per hour.
Title: Re: Scheduling Proof of Scalability
Post by: rnglab on September 08, 2015, 12:41:45 am
Why do people have to donate bts? We can demonstrate scalability in a test net.

A testnet made of nodes with typical Internet connections and processing power will tell us the starting-point performance. Bypassing bandwidth and latency bottlenecks on a LAN testnet will show the scalability results: Internet speed constantly grows (it likely follows Moore's law, I guess), and that scalability should make it profitable for nodes to pay for better infrastructure and processing power when more TPS becomes necessary (as that means success for the DAC).

Then a testnet running on a cloud computing virtual LAN allows everyone to monitor that scalability, making results publicly verifiable.
Title: Re: Scheduling Proof of Scalability
Post by: godzirra on September 08, 2015, 01:37:25 am
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources (for testers only): n/a
3. I'm aiming to contribute following amount of transactions per second (for testers only): n/a
Title: Re: Scheduling Proof of Scalability
Post by: BunkerChainLabs-DataSecurityNode on September 08, 2015, 04:35:27 am
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources (for testers only): N/A
3. I'm aiming to contribute following amount of transactions per second (for testers only): N/A

Sorry I couldn't be more involved with the testing end of things at this time.

I am already maxed out between dposhub beta release and other bitshares related projects right now.

Really looking forward to the results.

 +5% to Akado for wrangling this together.
Title: Re: Scheduling Proof of Scalability
Post by: emailtooaj on September 08, 2015, 04:50:03 am
1. I'm contributing with the following amount of BTS: 5000
2. I'm contributing with the following resources (for testers only): n/a
3. I'm aiming to contribute following amount of transactions per second (for testers only): n/a

If I had the technical know-how I would be all in doing the test phase.
So please, anyone willing to step up and become a tester, spread some nodes around the globe and make this push happen!!   ;D
Title: Re: Scheduling Proof of Scalability
Post by: kenCode on September 08, 2015, 05:45:51 am
I for one, would be happy to contribute some resources (donate some BTS) or spin up a number of AWS instances to help with load testing, and I'm sure many others in the community would also help if someone could develop the instructions or scripts to perform a private testnet load/tps test.
Using AWS, you could effectively have a 10Gb private LAN with the ability to spin up a significant number of servers for a short period of time at reasonable cost using spot pricing.
Here is an atricle on how someone has configured AWS instances to achieve 1 Million TPS For Just $1.68/Hour - http://highscalability.com/blog/2014/8/18/1-aerospike-server-x-1-amazon-ec2-instance-1-million-tps-for.html (http://highscalability.com/blog/2014/8/18/1-aerospike-server-x-1-amazon-ec2-instance-1-million-tps-for.html)
If we chime and say we (as the community) would be willing to fund/support/provide resources for this test - it may encourage one us to take it on...

 
@nethyb @kenCode @cube @sudo @phillyguy @emailtooaj @onceuponatime @godzirra @rnglab @bobmaloney @ccedk @liondani @tonyk @Akado @mike623317 @CLains @DataSecurityNode @puppies @clayop @betax @abit @chryspano @Slappy @Xeldal @merockstar @tbone @Thom @Fox
 
Removing the WAN limitations could prove our scalability beyond 100K tps.. I'm looking forward to this AWS test.
It would be great if we could all record it on video with our phones. Split screen CLI and xeroc's tps gauge. Other ideas welcome.
 
This is 28 people so far willing to put in some BTS for this 100K tps LAN test.
Anybody else?
Title: Re: Scheduling Proof of Scalability
Post by: kenCode on September 08, 2015, 05:47:34 am
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources (for testers only): n/a
3. I'm aiming to contribute following amount of transactions per second (for testers only): n/a
Title: Re: Scheduling Proof of Scalability
Post by: nethyb on September 08, 2015, 06:19:13 am
 +5% Great work community...

1. I'm contributing with the following amount of BTS: 10,000
2. I'm contributing with the following resources (for testers only): I'm also prepared to contribute $300 USD in AWS server resources
3. I'm aiming to contribute following amount of transactions per second (for testers only): n/a

I don't have a lot of time to contribute, but I could spin up all the AWS servers / create an image etc. if someone was to do the initial dev/scripting/image work.


i.e. the $300 in AWS could be used for 1000 x 5 hrs of m3.xlarge ($0.06/hr) spot instances, or 150 x 5 hrs of m4.10xlarge; happy to support whatever combination will get the best result.

m3.xlarge: 4 vCPUs, 15 GB RAM, 2 x 40 GB SSD

m4.10xlarge: 40 vCPUs, 160 GB RAM, EBS storage, 10 Gigabit networking
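
As a quick sanity check of those numbers (using only the spot prices quoted above, which will of course vary by region and time):

Code: [Select]
# Back-of-envelope check of the $300 AWS budget above.
budget = 300.0                  # USD pledged
m3_xlarge_spot = 0.06           # USD/hour, as quoted above
hours = 5
print(budget / (m3_xlarge_spot * hours))   # ~1000 m3.xlarge instances for 5 h
print(budget / (150 * hours))              # implied ~0.40 USD/h spot price for 150 m4.10xlarge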
Title: Re: Scheduling Proof of Scalability
Post by: xeroc on September 08, 2015, 07:10:06 am
I could assist with scripting (when I have time) ... but I need a clear list of things you need implemented
Title: Re: Scheduling Proof of Scalability
Post by: puppies on September 08, 2015, 07:29:50 am
Using cube's description above, in regard to phase 1 testing (the LAN test), I am not 100% sure what has been proposed, so here is my pending pledge.  More information would be appreciated.
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources (for testers only): I haven't played with AWS much.  If I can spin up an instance and get a gigabit connection to other instances, I am more than willing to spin up an instance or two.  It will depend upon the price and the time chosen.  I don't have a lot of time off during the next few weeks.
3. I'm aiming to contribute following amount of transactions per second: TPS transmission seems to be limited by the processor speed of the machine running the cli_wallet.  My i5 desktop at home seems capable of sustaining around 10-20 tps, or bursting up to about 50 tps with the flood_network command (a small scripted example follows at the end of this post).  I would guess that each instance will be able to do about the same unless someone can find a way to optimize it.

As far as phase 2 goes.  (wan test)
1. I'm contributing with the following amount of BTS: n/a
2. I'm contributing with the following resources (for testers only): I could commit to running at least 5 servers.
3. I'm aiming to contribute following amount of transactions per second (for testers only): Should be able to sustain around 50-100 tps if I have enough CORE.

I would suggest that we complete phase 2 testing first.  If we can get 100 nodes flooding, we should be able to break 1000 tps.  With the Windows binary available this may be possible.  We could put together a set of simple directions to install and sync the client (block production is not needed), set up a wallet and account, and run xeroc's script.  We could use Mumble or IRC to coordinate and hand out CORE.

This should help give us the information we will need to successfully complete a proof-of-concept 100k tps test on AWS instances.  That test seems like it is going to need to be run on much more expensive hardware.  I think we have a better chance of it being successful if we have done some tests on cheaper hardware first.
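
For reference, the flooding described above is driven by the cli_wallet's flood_network command; scripted over RPC it might look roughly like this. The parameter list (an account name or prefix plus a transaction count) is an assumption from memory, so confirm it with `help flood_network` in the wallet first:

Code: [Select]
# Hypothetical wrapper around the cli_wallet `flood_network` command mentioned above.
# The parameter order/meaning is an assumption -- check `help flood_network` first.
import json
import requests

RPC_URL = "http://127.0.0.1:8092/rpc"   # assumed cli_wallet HTTP-RPC endpoint

def rpc(method, *params):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": list(params)}
    return requests.post(RPC_URL, data=json.dumps(payload)).json()

# Fire bursts of 500 transactions from a funded test account (placeholder name).
for _ in range(10):
    print(rpc("flood_network", "my-test-account", 500))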
Title: Re: Scheduling Proof of Scalability
Post by: clayop on September 08, 2015, 07:40:50 am
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources (for testers only): I am not that much of a tech person, so I can just make spam transactions via a VPS
3. I'm aiming to contribute following amount of transactions per second (for testers only): n/a
Title: Re: Scheduling Proof of Scalability
Post by: betax on September 08, 2015, 09:37:51 am

1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources (for testers only): I will be testing using Azure instances, up to ~£80. If we are going to test transaction volume, we could get AWS and Azure together. This way we could test both WAN and LAN.
3. I'm aiming to contribute following amount of transactions per second (for testers only): depends on the instance... up until now I have been running the test on a Standard A2:
https://azure.microsoft.com/en-gb/documentation/articles/virtual-machines-size-specs/.
Title: Re: Scheduling Proof of Scalability
Post by: abit on September 08, 2015, 10:22:04 am
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources (for testers only):  N/A
3. I'm aiming to contribute following amount of transactions per second (for testers only): N/A
Title: Re: Scheduling Proof of Scalability
Post by: chryspano on September 08, 2015, 10:33:47 am
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources (for testers only): N/A
3. I'm aiming to contribute following amount of transactions per second (for testers only): N/A
Title: Re: Scheduling Proof of Scalability
Post by: Akado on September 08, 2015, 12:18:13 pm
I could assist with scripting (when I have time) ... but I need a clear list of things you need implemented

Cube, could you elaborate on this so xeroc knows exactly what you and the other testers need? A script for account creation and another for transaction spam?

And could someone explain phases 1 and 2 to me, as that part left me confused. So we're doing 2 tests?

Already contacted BitSharesBreakout, waiting for a reply: https://bitsharestalk.org/index.php/topic,13500.msg234844.html#msg234844
Title: Re: Scheduling Proof of Scalability
Post by: Slappy on September 08, 2015, 01:42:39 pm
I'm not very technical, so all I can offer is the BTS. Mark me down for 2,500. I look forward to seeing the results.
Title: Re: Scheduling Proof of Scalability
Post by: Xeldal on September 08, 2015, 02:12:36 pm
I thought Dan said they already did a LAN test and achieved something like 186k tps.  Is there a reason we're trying to do this again, other than capturing it on video?  I would think they would already have some scripts for this as well.

The WAN test might be more interesting.

It sounds like there are still a great deal of optimizations that can be implemented.  Is now the best time to spend money on powerful/well-connected servers, only to do it again later with better optimizations?

Why can we not just use the existing test network in an organized fashion to capture this data?  It may not produce the ultimate upper bound, but it should still reveal bottlenecks and give us something meaningful for video or whatever media angle you're interested in.  I'm not fully convinced it's best to spend a bunch of money at this time.

I'm willing to donate some personal funds to aid in this:
1. I'm contributing with the following amount of BTS: 2500
2.  N/A
3.  N/A     

But for any significant amount from BitSharesBreakout, at this time I'd like to hold off for a moment to get a better feel for the value derived and the necessity of doing this now, and in the way described.  Get some feedback from the devs etc. as to whether this is valuable for their work.
Title: Re: Scheduling Proof of Scalability
Post by: xeroc on September 08, 2015, 02:26:27 pm
I thought Dan said they already did a LAN test, and achieved something like 186k tps.  Is there a reason we're trying to do this again? other than capturing it on video.  I would think they would have some scripts for this already also.
They did not have ECDSA signature verification enabled... just a plain blockchain replay.

Quote
It sounds like there are still a great deal of optimizations that can be implemented.  Is now the best time to spend money on powerful/connected servers, only to do it again later with better optimizations?
It's supposed to be a showcase of the AS-IS state.
Title: Re: Scheduling Proof of Scalability
Post by: betax on September 08, 2015, 02:30:09 pm
I thought Dan said they already did a LAN test, and achieved something like 186k tps.  Is there a reason we're trying to do this again? other than capturing it on video.  I would think they would have some scripts for this already also.

the WAN test might be more interesting. 

It sounds like there are still a great deal of optimizations that can be implemented.  Is now the best time to spend money on powerful/connected servers, only to do it again later with better optimizations?

Why can we not just use the existing test network in a organized fashion to capture this data. It may not produce the ultimate upper bounds but should still reveal bottlenecks and have something meaningful for video or whatever media angle you're interested in.    Im not fully convinced is best to spend a bunch of money at this time. 

I'm willing to donate some personal funds to aid in this:
1. I'm contributing with the following amount of BTS: 2500
2.  N/A
3.  N/A     

but for any significant amount from BitsharesBreakout, at this time I'd like to hold off for a moment to get a better feel for the value derived and necessity for doing this now, and in the way described.  Get some feed back from devs etc as to whether this is valuable for their work.

I have to agree with this; we are testing for testing's sake. Why focus on a scenario that we can only build towards, and not on the current world? Our testnet is a good way to prove what we can do now, with the VPSes/servers we currently use.

Probably the best thing is to organise the testnet in a better way. If we are using Azure/AWS we can ensure we are all in the same region and using better connectivity. If you are using a home server you are still good to go.

We need to ensure that we have enough funds to test :). Most of us have used puppies'/clayop's AutoHotkey script or xeroc's (I used xeroc's as I was using PuTTY) and quickly ran out. If xeroc's script is modified so we distribute funds across all the named participants, we might not run out of CORE. Also ensure that we can quickly restart our witnesses if they go down.

Maybe the scripts should run a counter to verify that all the transactions are sent/received.

Once all this is organised, then we can experiment with scaling vertically and horizontally: more transactions, more clients/users to count votes, bigger VMs.
Title: Re: Scheduling Proof of Scalability
Post by: betax on September 08, 2015, 02:34:12 pm
To summarise:

It needs a test plan (what are we testing, how are we testing it, what do we want to achieve and why are we testing), the environment(s) setup, scripts to execute the tests, and of course who is going to participate.

Once this is done, we can think about what money we need to achieve this.

Title: Re: Scheduling Proof of Scalability
Post by: unreadPostsSinceLastVisit on September 08, 2015, 02:37:33 pm
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources (for testers only): N/A
3. I'm aiming to contribute following amount of transactions per second (for testers only): N/A
Title: Re: Scheduling Proof of Scalability
Post by: Akado on September 08, 2015, 02:53:09 pm
I thought Dan said they already did a LAN test, and achieved something like 186k tps.  Is there a reason we're trying to do this again? other than capturing it on video.  I would think they would have some scripts for this already also.

the WAN test might be more interesting. 

It sounds like there are still a great deal of optimizations that can be implemented.  Is now the best time to spend money on powerful/connected servers, only to do it again later with better optimizations?

Why can we not just use the existing test network in a organized fashion to capture this data. It may not produce the ultimate upper bounds but should still reveal bottlenecks and have something meaningful for video or whatever media angle you're interested in.    Im not fully convinced is best to spend a bunch of money at this time. 

I'm willing to donate some personal funds to aid in this:
1. I'm contributing with the following amount of BTS: 2500
2.  N/A
3.  N/A     

but for any significant amount from BitsharesBreakout, at this time I'd like to hold off for a moment to get a better feel for the value derived and necessity for doing this now, and in the way described.  Get some feed back from devs etc as to whether this is valuable for their work.

Yes, WAN would be better.
I'm waiting on the devs' input on this as well; this was merely a suggestion.
It doesn't necessarily need to be done now, I'm just trying to organize things. The mentioned dates are up to 3 weeks from now, so we should have some time; the official date shouldn't be much later than that (maybe 2 weeks).
The objective would be to replicate a real-world scenario as closely as possible. From what I've read, via LAN and with no signature verification it might have had better results, but in a less realistic setup; it might take longer, plus it would be a statement towards all the other cryptos and give us plenty of attention, not to mention shut the mouths of the many flamers out there saying this is fraud.

We only benefit from this, but obviously it needs to be well coordinated and the details need to be worked out; since I'm not a technical guy, that is beyond my skill. This thread was created with the exact purpose of arranging that and coming up with something we all agree on.

It will have to be done eventually, so why not discuss it? It's not like I'm saying it needs to be done now, but since it will eventually be done, let's at least try to come to a consensus and start to work things out so we're ready when the time comes. We only benefit from this.

To summarise:

It needs a test plan (what are we testing, how are we testing it, what is what we want to achieved or why are we testing), environment/s setup, scripts to execute tests, and of course who is going to participate.

Once this is done, we can think what money we need to achieve this.

 +5%
Title: Re: Scheduling Proof of Scalability
Post by: tbone on September 08, 2015, 03:01:49 pm
I have no problem contributing to something useful funded directly by the community.  But I have to agree with Xeldal.  What are we trying to accomplish here exactly?  Do we have input from Dan and the devs?  Have we considered that this may backfire on us?  Is it really a good use of funds and a good risk vs. reward?  Personally, I think it makes more sense to start figuring out what we'd like to fund via worker proposals since 2.0 will be a reality soon.  For example, I think 2FA is absolutely critical to implement as soon as possible.

Edit: After reading Akado's latest post I would be more inclined to contribute 5000 BTS to this if done in coordination with the dev team.
Title: Re: Scheduling Proof of Scalability
Post by: puppies on September 08, 2015, 03:18:02 pm
I thought Dan said they already did a LAN test, and achieved something like 186k tps.  Is there a reason we're trying to do this again? other than capturing it on video.  I would think they would have some scripts for this already also.

the WAN test might be more interesting. 

It sounds like there are still a great deal of optimizations that can be implemented.  Is now the best time to spend money on powerful/connected servers, only to do it again later with better optimizations?

Why can we not just use the existing test network in a organized fashion to capture this data. It may not produce the ultimate upper bounds but should still reveal bottlenecks and have something meaningful for video or whatever media angle you're interested in.    Im not fully convinced is best to spend a bunch of money at this time. 

I'm willing to donate some personal funds to aid in this:
1. I'm contributing with the following amount of BTS: 2500
2.  N/A
3.  N/A     

but for any significant amount from BitsharesBreakout, at this time I'd like to hold off for a moment to get a better feel for the value derived and necessity for doing this now, and in the way described.  Get some feed back from devs etc as to whether this is valuable for their work.

I have to agree with this, we are testing for testing. Why focus in an scenario that we can build towards and not focus on the current world. Our testnet is a good way to prove what we can do now, with the current vps / servers we currently need.

Probably best is to organise the testnet in a better way. If we are using Azure / AWS we can ensure we are all in the same region and using better connectivity. If you are using a home server you are still good to go.

We need to ensure that we have enough funds to test :). Most of us have used puppies /clayop autohotkey script  or xerocs (I used xerocs as I was using putty) and quickly ran out. If xerocs script is modified so we distribute funds across all named participants, we might not ran out of CORE. Also ensure that we can quickly restart our witness if down.

Maybe scripts should run a counter to verify all the transactions are sent / received.

Once all this is organised, then we can experiment on scaling vertically and horizontally. More transactions, more clients / users to count votes, bigger vms.

We can now withdraw the extra fees collected from lifetime members, so we can reduce the effective fee down to 4 BTS per transaction: https://bitsharestalk.org/index.php/topic,17962.msg234909.html#msg234909 (https://bitsharestalk.org/index.php/topic,17962.msg234909.html#msg234909)  Adding a balance check, a vesting-balance check, and a withdraw-funds step to the script will allow us to extend testing / test with fewer CORE (a rough sketch of the balance check follows below).

If we had simple instructions on how to set up a node and begin flooding with a Python script, how many of the "non-technical" users would be willing to participate?
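
A rough sketch of that balance housekeeping, again over the cli_wallet RPC. list_account_balances is a standard wallet call; the 1.3.0 asset id and 5-decimal precision for CORE are assumptions, and the vesting-withdrawal call itself is left out because the exact command should be checked against `help` in the build being tested:

Code: [Select]
# Sketch of the balance check mentioned above, driven over the cli_wallet RPC.
# Asset id 1.3.0 and 5-decimal precision for CORE are assumptions; the vesting
# withdrawal call itself is left out -- check `help` for the exact command.
import json
import requests

RPC_URL = "http://127.0.0.1:8092/rpc"

def rpc(method, *params):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": list(params)}
    return requests.post(RPC_URL, data=json.dumps(payload)).json().get("result")

ACCOUNT = "my-test-account"        # placeholder tester account
MIN_BALANCE = 1000 * 10**5         # stop flooding below ~1000 CORE

balances = rpc("list_account_balances", ACCOUNT)
core = next((b for b in balances if b["asset_id"] == "1.3.0"), {"amount": 0})
if int(core["amount"]) < MIN_BALANCE:
    print("Low on CORE -- withdraw vesting cashback or top up before flooding again")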
Title: Re: Scheduling Proof of Scalability
Post by: Thom on September 08, 2015, 03:52:49 pm
To summarise:

It needs a test plan (what are we testing, how are we testing it, what is what we want to achieved or why are we testing), environment/s setup, scripts to execute tests, and of course who is going to participate.

Once this is done, we can think what money we need to achieve this.

Sorry guys, trying to catch up after the weekend and personal events of last week. Been reading this thread but don't have a clear picture of what is really going on here.

An answer to the above post would help clear my fog. I'll be staying tuned.

1. I'm contributing with the following amount of BTS: 5000
2. I'm contributing with the following resources (for testers only):  Several VPS instances around the world, if useful for testing
3. I'm aiming to contribute following amount of transactions per second (for testers only): ??? Not sure how to answer this
Title: Re: Scheduling Proof of Scalability
Post by: Fox on September 08, 2015, 04:56:43 pm
1. I'm contributing with the following amount of BTS: 5000
2. I'm contributing with the following resources (for testers only):  up to 30 x 2 Core VMs 
3. I'm aiming to contribute following amount of transactions per second (for testers only): 500TPS (in short coordinated bursts)
4. I'm willing to contribute scripts for setting up nodes in Azure.
Title: Re: Scheduling Proof of Scalability
Post by: puppies on September 08, 2015, 05:30:22 pm
To summarise:

It needs a test plan (what are we testing, how are we testing it, what is what we want to achieved or why are we testing), environment/s setup, scripts to execute tests, and of course who is going to participate.

Once this is done, we can think what money we need to achieve this.

Okay, here goes.  Please help me flesh this plan out.  Or alternatively, if you think it's stupid, you could just tell me that, although I would appreciate it if you were a little bit polite when you make fun of my plan.

1) What are we testing?
     For the first phase I propose that we attempt a stress test of the test network over the WAN.  This is a real-world test of what we will be able to do when 2.0 launches.
2) What is our goal?
     I propose a goal of 1000tps sustained for 5 minutes. 
3) Why test? 
     First of all it will be good publicity to show what we can actually do, not just the theoretical limit.  Secondly, it will provide lots of information about how the binaries behave on different machines, and how the network deals with that amount of traffic.
4) What do we need?
   a) compiled binaries for windows, osx, and linux
   b) instructions on how to download, install, and start the witness_node, cli_wallet, and a Python script (let's focus on getting everyone to the startup screen of the cli_wallet, and take care of account creation and transaction spamming with a script)
   c) a script (a rough skeleton is sketched at the end of this post) that will:
     1c) create and unlock a wallet
     2c) Import a known test private key to get a balance
     3c) create an account and upgrade it to lifetime status
     4c) spam transactions
     5c) keep an eye on account balance
     6c) withdraw vesting funds so testing can continue longer
     7c) log anything that might be interesting.  Both to screen and to file.
Alternatively, if we can set up a faucet, the script can register the new accounts under a lifetime account, and our excess vesting funds will be in a centralized location.
It would be great if this script could be the interaction point for testers, asking for any required information at the beginning and displaying any information needed. That would prevent testers from having to learn anything about the witness_node or cli_wallet.
   d) Lots of people to test.  If installing 3 programs, and typing in a few commands sounds like something you can do, then you could help us make history. 

What does everyone think?  Is this a good enough starting point? 

If you can help us make this test a reality please let me know.  We already have binaries available, but will probably need them updated by the time we are ready to test.  I can help with instructions.  I could write the script, but it would take me 10 times longer than xeroc, and the final product would be 10 times worse than xeroc's would be.  I will also of course help run nodes for the test, both at home and on some VPSes.

Xeroc, what do you think in regard to the script idea I posted?  Would you be able to donate that script to the cause, or would you want to set a price up front?

Either way, if we end up doing this, please consider tipping xeroc, cube, and maqifrnswa.  Please let me know if I have missed any other tip-worthy contributors.
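
Here is the rough skeleton referred to in point c) above, written against the cli_wallet HTTP-RPC. The wallet password, the shared test key and the account names are placeholders; step 3c assumes the account already exists for that key (creation via a faucet or register_account is left out), and 6c is omitted because the exact vesting call should be checked with `help` first:

Code: [Select]
# Skeleton of the tester script from the checklist above (steps 1c-7c), assuming
# a fresh cli_wallet listening on its HTTP-RPC port.  Passwords, keys and account
# names are placeholders; verify every method name with `help` before running.
import json
import logging
import requests

RPC_URL = "http://127.0.0.1:8092/rpc"
logging.basicConfig(filename="stress_test.log", level=logging.INFO)

def rpc(method, *params):
    payload = {"jsonrpc": "2.0", "id": 1, "method": method, "params": list(params)}
    r = requests.post(RPC_URL, data=json.dumps(payload))
    r.raise_for_status()
    return r.json().get("result")

WALLET_PASS = "changeme"            # placeholder
SHARED_TEST_WIF = "5K..."           # placeholder key handed out to testers
MY_ACCOUNT = "stress-tester-1"      # placeholder; assumed to already exist for this key
PEER_ACCOUNT = "stress-tester-2"    # placeholder counterparty

rpc("set_password", WALLET_PASS)                  # 1c: create the wallet...
rpc("unlock", WALLET_PASS)                        #     ...and unlock it
rpc("import_key", MY_ACCOUNT, SHARED_TEST_WIF)    # 2c: import the funded test key
rpc("upgrade_account", MY_ACCOUNT, True)          # 3c: lifetime membership for the fee rebate

for i in range(3000):                             # 4c: ~10 tps for 5 minutes
    result = rpc("transfer", MY_ACCOUNT, PEER_ACCOUNT, "1", "CORE", "flood %d" % i, True)
    logging.info(result)                          # 7c: log everything to file
    if i % 100 == 0:
        logging.info(rpc("list_account_balances", MY_ACCOUNT))   # 5c: watch the balance
# 6c (withdrawing vesting cashback) is omitted here; check `help` for the exact call.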
Title: Re: Scheduling Proof of Scalability
Post by: liondani on September 08, 2015, 05:59:45 pm
     3c) create an account and upgrade it to lifetime status

to see if it works?
Title: Re: Scheduling Proof of Scalability
Post by: puppies on September 08, 2015, 06:04:12 pm
     3c) create an account and upgrade it to lifetime status

to see if it works?

No.  To save on fees.  The hardest part about testing so far has been getting your hands on enough CORE.  20 BTS per transaction adds up fast.  If the account is upgraded, that is lowered to 4 BTS per transaction after the rebate.  Since my goal is 1000 tps for 5 minutes, if each node can handle 10 tps for 5 minutes, that works out to a savings of 38k BTS per node after the 10k BTS fee to upgrade.  I hope that made sense.
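
A quick check of that arithmetic, using the numbers quoted in this post:

Code: [Select]
# Per-node savings over the 5-minute run, using the figures above.
tx = 10 * 60 * 5                             # 10 tps for 5 minutes = 3000 transactions
basic_fee, ltm_fee, upgrade = 20, 4, 10000   # BTS
print(tx * basic_fee - tx * ltm_fee - upgrade)   # 38000 BTS saved per node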
Title: Re: Scheduling Proof of Scalability
Post by: abit on September 08, 2015, 07:14:25 pm
1. The current version of Graphene has sync issues, especially when the network is under pressure (when there is transaction flooding). I think it's better to do this wider test after the sync issues are fixed.
2. The current test network looks stable because it is highly centralized -- 80% of the witnesses are running on one or two VPSes controlled by BM/CryptoNomex, and any forks made by other witnesses are simply discarded. In a decentralized network it's harder to decide which fork is effective (or, say, longest). We'd better prove that it's stable on a decentralized network first.
Title: Re: Scheduling Proof of Scalability
Post by: betax on September 08, 2015, 08:57:34 pm
To summarise:

It needs a test plan (what are we testing, how are we testing it, what is what we want to achieved or why are we testing), environment/s setup, scripts to execute tests, and of course who is going to participate.

Once this is done, we can think what money we need to achieve this.

Okay, here goes.  Please help me flesh this plan out.  Or alternately if you think its stupid, you could just tell me that.  Although I would appreciate it if you were a little bit polite when you made fun of my plan.

1) What are we testing?
     For the first phase I propose that we attempt a stress test of the test network WAN.  This is a real world test of what we will be able to do when 2.0 launches. 
2) What is our goal?
     I propose a goal of 1000tps sustained for 5 minutes. 
3) Why test? 
     First of all it will be good publicity to show what we can actually do, not just the theoretical limit.  Secondly, it will provide lots of information about how the binaries behave on different machines, and how the network deals with that amount of traffic.
4) What do we need?
   a) compiled binaries for windows, osx, and linux
   b) instructions on how to download, intall, and start the witness_node, cli_wallet, and a python script (lets focus on getting everyone to the startup screen on cli_wallet, and take care of account creation, and transaction spamming with a script.
   c) a script that will:
     1c) create and unlock a wallet
     2c) Import a known test private key to get a balance
     3c) create an account and upgrade it to lifetime status
     4c) spam transactions
     5c) keep an eye on account balance
     6c) withdraw vesting funds so testing can continue longer
     7c) log anything that might be interesting.  Both to screen and to file.
Alternately if we can set up a faucet the script can register the new accounts under a lifetime account, and our excess funds vesting will be in a centralized location.
It would be great if this script could be the interaction point for testers.  Asking any required information at the beginning, and displaying any information needed.  That would prevent testers from having to learn anything about the witness_node or cli_wallet
   d) Lots of people to test.  If installing 3 programs, and typing in a few commands sounds like something you can do, then you could help us make history. 

What does everyone think?  Is this a good enough starting point? 

If you can help us make this test a reality please let me know.  We already have binaries available, but will probably need them updated by the time we are ready to test.  I can help with instructions.  I could write the script, but it would take me 10 times longer than xeroc, and the final product would be 10 times worse than xerocs product would be.  I will also of course help run nodes for the test.  Both at home, and on some VPS's. 

Xeroc.  What do you think in regards to the script Idea I posted?  Would you be able to donate that script to the cause, or would you want to set a price up front?

Either way if we end up doing this, please consider tipping xeroc, cube, and maqifrnswa.  Please let me know if I have missed any other tip worthy contributors.

Very good !!

Just a few small extra points about the load-testing script: if we identify the account names beforehand, nobody should run out of CORE other than through transaction fees, as we can all send to each other.

I don't know if 1c is possible, as you need to interact with an unlocked account.

Handle errors; for example, if we don't have enough balance the script will fail (5c).

If 1c is possible we could build scripts that pick up usernames/ids/balances from a server, and build an image (Docker containers) that on start will pick up a user to set up.

1. Current version of Graphene has sync issues, especially when the network is under pressure (when there is transaction flooding). I think it's better to do this wider test after the sync issues fixed.
2. Current test network looks like stable, because it is highly centralized -- 80% of witnesses are running on one or two VPS(es) which controlled by BM/CryptoNomex, every forks made by other witnesses are simply discarded. In a decentralized network it's harder to decide which fork is effective (or say longest?). We'd better prove that it's stable on a decentralized network first.

^^^^^ That will help too :) ^^^^^
Title: Re: Scheduling Proof of Scalability
Post by: liondani on September 08, 2015, 09:57:31 pm
     3c) create an account and upgrade it to lifetime status

to see if it works?

No.  To save on fees.  The hardest part about testing so far has been getting your hands on enough core.  20bts per transaction adds up fast.  If the account is upgraded that is lowered to 4bts per transaction after rebate.  Since my goal is 1000tps for 5 minutes, if each node can handle 10tps for 5 minutes that would be a savings of 38k bts after the 10k bts fee to upgrade.  I hope that made sense.

Makes sense (I didn't do the math).

PS, off topic:
Now I realize the fees.... 4 BTS?(!) (instead of the current 0.1 or 0.5.)
What if the market cap increases 10-fold, for example, or even more? Is it not too much? (I assume "delegates" can change that.) What about dynamic fees? A percentage like 0.2%, for example?
Would it not be better, so that we are not in the position of having to change the fees every now and then?

Title: Re: Scheduling Proof of Scalability
Post by: clayop on September 08, 2015, 11:18:21 pm
I'm curious how many transactions the init witnesses can process. I tried to flood the network at over 10 tps with two VPSes, but the tps did not pass 10.
Title: Re: Scheduling Proof of Scalability
Post by: puppies on September 08, 2015, 11:31:16 pm


Very good !!

Just small extra points about the load testing script, If we identify the account names beforehand, nobody should ran out of core if is not for transaction fees, as we can all send each other.

I don't know if 1c is possible as you need to interact with an unlocked account.

Handle errors, for example if we don't have enough balance the script will fail (5c)

If 1c is possible we could build scripts that pick up usernames ids / balances from a server, and build an image (docker containers) that on start will pickup a user to setup.


If it's possible for everyone to send transactions from the exact same account, then we could skip all of that and just have the script import the private keys of a throwaway account.  I am just trying to think of ways to make joining the test very easy, so that people who are not super technical can help out.

Of course, BM's new network protocol may complicate this.  It may not be possible to get the combination of witness_node, cli_wallet, and relay installed and running on Windows in an easy-to-follow, hard-to-mess-up way.
Title: Re: Scheduling Proof of Scalability
Post by: cube on September 09, 2015, 12:19:02 am
Okay, here goes.  Please help me flesh this plan out.  Or alternately if you think its stupid, you could just tell me that.  Although I would appreciate it if you were a little bit polite when you made fun of my plan.

Thanks for an excellent plan.  I have a few comments/points I would like to add:

Quote
1) What are we testing?
     For the first phase I propose that we attempt a stress test of the test network WAN.  This is a real world test of what we will be able to do when 2.0 launches. 

Do you mean a LAN (Local Area Network) test to begin with?  If the LAN test fails to achieve 100K tps, then we know the WAN test will fail too.  Besides, the WAN test depends on each individual witness node's speed, so there are more variables in the testing.

Quote
2) What is our goal?
     I propose a goal of 1000tps sustained for 5 minutes. 

Let's start with 5 minutes, and then see how long the network can hold before it breaks.

Quote
3) Why test? 
     First of all it will be good publicity to show what we can actually do, not just the theoretical limit.  Secondly, it will provide lots of information about how the binaries behave on different machines, and how the network deals with that amount of traffic.

Yes, an excellent way to 'show off' graphene's 100K tps real power.

Quote
4) What do we need?
   a) compiled binaries for windows, osx, and linux

I think for a start, and to make things easier, choose one: Windows and/or Linux, since they are available now.

Quote
   b) instructions on how to download, intall, and start the witness_node, cli_wallet, and a python script (lets focus on getting everyone to the startup screen on cli_wallet, and take care of account creation, and transaction spamming with a script.
This would be especially useful when we start running tests for WAN.

Quote
   c) a script that will:
     1c) create and unlock a wallet
     2c) Import a known test private key to get a balance
     3c) create an account and upgrade it to lifetime status
     4c) spam transactions
     5c) keep an eye on account balance
     6c) withdraw vesting funds so testing can continue longer

Having scripts that simulate 'real life' transactions as closely as possible: this list is excellent for that. But since writing scripts takes time (a whole lot of time), I suggest we focus on scripting the tasks most frequently used by users and those txs that take up the most resources (cpu/mem/network). We want to see how much stress the network can bear. Importing private keys is sensitive, private stuff, so we may not want to script it.  Perhaps simplifying to:

     1c) create and unlock a wallet
     3c) create an account and upgrade it to lifetime status
     4c) transfer transactions

Quote
     7c) log anything that might be interesting.  Both to screen and to file.

Yes, we would like to log the good results and show them to the world.

Quote
Alternately if we can set up a faucet the script can register the new accounts under a lifetime account, and our excess funds vesting will be in a centralized location.
It would be great if this script could be the interaction point for testers.  Asking any required information at the beginning, and displaying any information needed.  That would prevent testers from having to learn anything about the witness_node or cli_wallet

   d) Lots of people to test.  If installing 3 programs, and typing in a few commands sounds like something you can do, then you could help us make history. 

I am not sure if now is the right time to 'teach' users who are not technical, since preparing documentation/tutorials and simplifying installation will take up a lot of time and effort. But if we can accomplish that, it would be great.

Quote
If you can help us make this test a reality please let me know.  We already have binaries available, but will probably need them updated by the time we are ready to test.  I can help with instructions.  I could write the script, but it would take me 10 times longer than xeroc, and the final product would be 10 times worse than xerocs product would be.  I will also of course help run nodes for the test.  Both at home, and on some VPS's. 

Generating dummy user accounts and transactions will probably involve interacting with a database.  How easy would it be to modify xeroc's scripts for that?

I am considering jmeter.  How about adding a custom plug-in to jmeter to perform the necessary RPC calls (these create the commands to the witness_node daemon to perform the transactions)?
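To give an idea of what such a plug-in would have to send, here is a rough sketch of one RPC call in Python (a Java sampler would build the same JSON); the websocket port and the database API id 0 are assumptions based on a typical witness_node setup.

Code: [Select]
# Sketch of the kind of RPC call a custom jmeter sampler would have to issue.
# The witness_node speaks JSON-RPC over a websocket (port 8090 is assumed
# here); "call" with API id 0 addresses the database API. Requires the
# websocket-client package (pip install websocket-client).
import json
from websocket import create_connection

ws = create_connection("ws://127.0.0.1:8090")

request = {
    "id": 1,
    "method": "call",
    "params": [0, "get_dynamic_global_properties", []],
}
ws.send(json.dumps(request))
print(ws.recv())  # head block number, time, etc. - useful for measuring tps
ws.close()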

Quote
Xeroc.  What do you think in regards to the script Idea I posted?  Would you be able to donate that script to the cause, or would you want to set a price up front?

I would like to hear from xeroc too.

No, to save on fees.  The hardest part about testing so far has been getting your hands on enough CORE.  20 BTS per transaction adds up fast.  If the account is upgraded, that is lowered to 4 BTS per transaction after the rebate.  Since my goal is 1000 tps for 5 minutes, if each node can handle 10 tps for 5 minutes (3,000 transactions), the 16 BTS saved per transaction comes to 48,000 BTS, or a saving of 38k BTS after the 10k BTS fee to upgrade.  I hope that made sense.

If tx fees are a concern, how about starting a new testnet for the load testing?  We can modify the source code to generate tons and tons of CORE for the tests.

1. The current version of Graphene has sync issues, especially when the network is under pressure (when there is transaction flooding). I think it's better to do this wider test after the sync issues are fixed.
2. The current test network looks stable because it is highly centralized -- 80% of witnesses are running on one or two VPS(es) controlled by BM/CryptoNomex, and forks made by other witnesses are simply discarded. In a decentralized network it's harder to decide which fork is effective (or say longest?). We'd better prove that it's stable on a decentralized network first.

BM has fixed the sync issues.  For witnesses, we could create a number of them and spread them out to different computers/instances.
Title: Re: Scheduling Proof of Scalability
Post by: puppies on September 09, 2015, 12:44:52 am
Thanks for the response, cube. 

I was thinking that it would make more sense to start with a wide area network test.  I know this will not reach the actual potential throughput of graphene, and will be limited by the network speed and the hardware it is running on.  It would utilize volunteers rather than raising funds.  That is what would necessitate the ease of use and multiple OSes.  This is also the reason why I would have it create an account, so that users wouldn't have to. 

I am leaning towards thinking that we should stress test the existing test network rather than creating a new network to benchmark graphene.  But if you think we should do the benchmark first and then work on improving the test network's performance later, that's not a big deal to me at all.
Title: Re: Scheduling Proof of Scalability
Post by: xeroc on September 09, 2015, 05:57:16 am
Sorry for my late reply.

2) What is our goal?
     I propose a goal of 1000tps sustained for 5 minutes. 
I don't know if that might break the stats page. It was really put together quickly
and needs some more testing. Maybe even add BM's node.js relay to harden it.

Quote
4) What do we need?
   a) compiled binaries for windows, osx, and linux
   b) instructions on how to download, install, and start the witness_node,
      cli_wallet, and a python script (let's focus on getting everyone to the
      startup screen on cli_wallet, and take care of account creation, and
      transaction spamming with a script).
   c) a script that will:
     1c) create and unlock a wallet
     2c) Import a known test private key to get a balance
     3c) create an account and upgrade it to lifetime status
     4c) spam transactions
     5c) keep an eye on account balance
     6c) withdraw vesting funds so testing can continue longer
     7c) log anything that might be interesting.  Both to screen and to file.
Funds can only be imported as a whole from a private key. Hence you either need
someone that can send you some funds or you need to distribute privkeys that
hold funds.

Account creation is only possible if someone already registered pays the
registration fee. You cannot simply register on your own as far as I know.

Not sure how to work around this. Maybe have a global account registered and its
priv key distributed in the script. That way testers can register themselves
with that account.
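A rough sketch of that idea, assuming a cli_wallet HTTP-RPC endpoint and that the call names are as I recall them; the registrar name, WIF key and generated account names are placeholders.

Code: [Select]
# Rough sketch of the shared "global" registrar idea: the registrar's key is
# distributed with the script, so each tester can register their own test
# account. Assumes a cli_wallet with HTTP-RPC on 127.0.0.1:8092; the registrar
# name, WIF key and account names below are placeholders.
import uuid
import requests

WALLET_URL = "http://127.0.0.1:8092/rpc"

def wallet_call(method, params):
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1}
    return requests.post(WALLET_URL, json=payload).json()

wallet_call("unlock", ["supersecret"])

# import the shared registrar key so this wallet can pay registration fees
wallet_call("import_key", ["shared-registrar", "5Kxxxx...placeholder...xxxx"])

# register a fresh tester account under the shared registrar
brain = wallet_call("suggest_brain_key", [])["result"]["brain_priv_key"]
name = "loadtest-" + uuid.uuid4().hex[:8]  # lowercase, so it should be a valid name
wallet_call("create_account_with_brain_key",
            [brain, name, "shared-registrar", "shared-registrar", True])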


Quote
What does everyone think?  Is this a good enough starting point? 
Absolutely. My problem is that I can only start working on this next week.
Title: Re: Scheduling Proof of Scalability
Post by: cube on September 09, 2015, 08:58:38 am
Thanks for the response, cube. 

I was thinking that it would make more sense to start with a wide area network test.  I know this will not reach the actual potential throughput of graphene, and will be limited by the network speed and the hardware it is running on.  It would utilize volunteers rather than raising funds.  That is what would necessitate the ease of use and multiple OSes.  This is also the reason why I would have it create an account, so that users wouldn't have to. 

I am leaning towards thinking that we should stress test the existing test network rather than creating a new network to benchmark graphene.  But if you think we should do the benchmark first and then work on improving the test network's performance later, that's not a big deal to me at all.

Thanks for clarifying.

I understand your objective now - to improve the test network's performance first and utilise a greater number of volunteers. 

I think there are a few factors affecting the test network's performance in a WAN - the graphene software (with its algorithm), the network limitations of individual nodes, and the cpu+ram resources of the individual nodes.  In a scenario of slow performance, it would be difficult to pin down the source of the slowdown because of these various factors.  It could be because some witness nodes are 'far away' (i.e. many internet network hops), some have slow internet bandwidth/high latency connections, some are running on old and slow computers, or there are flaws in the graphene software code.

However, if we can isolate the factors contributed by individual nodes, it becomes easier to say, e.g., 'Ah, the software starts to slow down when it is reaching X tps. I am pretty sure it is not due to the machines or the physical network (because they are top machines running on a fast local network)'. 

BTW, I am not suggesting forgoing the WAN test. In fact, we could proceed to do the WAN test immediately after the LAN test if resources and time allow.

We would still need a lot of help from volunteers, e.g. vetting dummy account names and transactions in a database (to be used to blast the load).
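Something as simple as the following sketch could pre-generate that dummy data; the table layout and naming scheme here are made up purely for illustration.

Code: [Select]
# Sketch of pre-generating dummy account names and transfer pairs into a small
# SQLite database, so volunteers can help vet them before the blast. Purely
# illustrative; the table layout and naming scheme are made up.
import random
import sqlite3

conn = sqlite3.connect("loadtest.db")
conn.execute("CREATE TABLE IF NOT EXISTS accounts (name TEXT PRIMARY KEY)")
conn.execute("CREATE TABLE IF NOT EXISTS transfers "
             "(sender TEXT, receiver TEXT, amount TEXT)")

names = ["loadtest-%04d" % i for i in range(1000)]
conn.executemany("INSERT OR IGNORE INTO accounts VALUES (?)",
                 [(n,) for n in names])

# pair accounts up at random with small amounts, keeping sender and receiver distinct
rows = []
while len(rows) < 10000:
    sender, receiver = random.sample(names, 2)
    rows.append((sender, receiver, "1"))
conn.executemany("INSERT INTO transfers VALUES (?,?,?)", rows)
conn.commit()
conn.close()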


Not sure how to work around this. Maybe have a global account registered and its
priv key distributed in the script. That way testers can register themselves
with that account.
Quote
What does everyone think?  Is this a good enough starting point? 
Absolutely. My problem is that I can only start working on this next week.

Great to have you with us.  Load testing is neither an easy nor a simple undertaking.  We should do it well, and this takes time.  With your participation, we would be closer to the goal. :)
Title: Re: Scheduling Proof of Scalability
Post by: BunkerChainLabs-DataSecurityNode on September 09, 2015, 04:44:12 pm
I don't know if there is anything to learn from this.. but I thought I would share it in case it gives you all some ideas on ways to stress test the network:

https://bitcoinmagazine.com/21842/coinwallet-begins-pre-test-bitcoin-network-schedules-largest-stress-test-begin-september-10/

Anybody with bitcoins might want to brace themselves for tomorrow's bombardment.

Hope this helps in some way.
Title: Re: Scheduling Proof of Scalability
Post by: puppies on September 09, 2015, 06:33:10 pm
That makes perfect sense, Cube.  I think you have a better testing methodology that will result in a better final product.  If there are any simple tasks I can do please let me know.

I was thinking I would spin up a couple of instances and see what type of volume I could broadcast from a single machine using current spam techniques.  I get the feeling that we will need to develop better means of flooding.  Something like the built-in flood_network command, but with more sustained flooding.
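A sustained flooder could be as simple as the sketch below: instead of a one-shot burst, it paces transfers to hold a target tps for a fixed duration. The cli_wallet HTTP-RPC endpoint, account names and rate are placeholders, not a finished tool.

Code: [Select]
# Sketch of a more sustained flooder: pace transfers to hold a target tps for
# a given duration, rather than firing a single burst. Assumes a cli_wallet
# HTTP-RPC endpoint on localhost; names and rate are placeholders.
import time
import requests

WALLET_URL = "http://127.0.0.1:8092/rpc"
TARGET_TPS = 10
DURATION_SECONDS = 300  # 5 minutes

def transfer(i):
    payload = {"jsonrpc": "2.0", "method": "transfer", "id": i,
               "params": ["tester-account", "target-account", "1", "CORE",
                          "flood %d" % i, True]}
    requests.post(WALLET_URL, json=payload)

start = time.time()
sent = 0
while time.time() - start < DURATION_SECONDS:
    # send at most TARGET_TPS transfers per elapsed second, then back off briefly
    if sent < TARGET_TPS * (time.time() - start):
        transfer(sent)
        sent += 1
    else:
        time.sleep(0.01)
print("sent %d transfers in %.1f seconds" % (sent, time.time() - start))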
Title: Re: Scheduling Proof of Scalability
Post by: kenCode on September 09, 2015, 07:11:53 pm
I don't know if there is anything to learn from this.. but I thought I would share it in case it gives you all some ideas on ways to stress test the network:

https://bitcoinmagazine.com/21842/coinwallet-begins-pre-test-bitcoin-network-schedules-largest-stress-test-begin-september-10/

Anybody with bitcoins might want to brace themselves for tomorrow's bombardment.

Hope this helps in some way.

Tuck, check this out ^^
Title: Re: Scheduling Proof of Scalability
Post by: abit on September 09, 2015, 09:51:39 pm
https://docs.google.com/spreadsheets/d/1amqTZZ0dllmEEONW6qvc_07CE1mqcEk0SYRPU4phgEc/edit?disco=AAAAAYfKzkI
1. The current version of Graphene has sync issues, especially when the network is under pressure (when there is transaction flooding). I think it's better to do this wider test after the sync issues are fixed.
2. The current test network looks stable because it is highly centralized -- 80% of witnesses are running on one or two VPS(es) controlled by BM/CryptoNomex, and forks made by other witnesses are simply discarded. In a decentralized network it's harder to decide which fork is effective (or say longest?). We'd better prove that it's stable on a decentralized network first.

BM has fixed the sync issues.  For witnesses, we could create a number of them and spread them out to different computers/instances.
Not yet fixed, at least when I wrote that post yesterday. 

We'll see whether this commit https://github.com/cryptonomex/graphene/commit/ff2db08475908fad5e36df23d6c50256b9ab13f7 solves some of the issues. I'm running a version which includes that commit on the current testnet now.
Title: Re: Scheduling Proof of Scalability
Post by: Thom on September 09, 2015, 10:16:32 pm
 +5% +5% +5% +5%

To puppies & cube! You have cleared my fog on what you're trying to do (well, maybe still some mist in the air concerning the poll).

I'll help any way I can.

I will be more informed tomorrow at this time after I speak with wackou about our backbone and how we'll set it up to reduce latency and protect nodes from direct DDOS attacks.

A well balanced and thought out plan for the distribution of seed and backbone nodes should help minimize network / connection latencies.
Title: Re: Scheduling Proof of Scalability
Post by: bytemaster on September 09, 2015, 10:22:41 pm
There are two aspects to scalability:

1. How fast can a node actually process transactions / blocks
2. How well can a P2P protocol keep up with the flooding

These are two separate and independent issues.    If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part).   We can have a single node be ALL witnesses and then flood the network with as many transactions as possible.     It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. 

We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out. 
Title: Re: Scheduling Proof of Scalability
Post by: Ander on September 09, 2015, 10:51:57 pm

PS off topic:
now I realize the fees....  4 BTS?(!)  (instead of 0.1 or 0.5 currently)
What if the market cap increases 10-fold, for example, or even more? Is it not too much? (I assume "delegates" can change that.) What about dynamic fees? A percentage like 0.2% for example?
Would it not be better, so we are not in the position every now and then to change the fees?

I believe that bytemaster mentioned '20 cent fees', so I figured they would vary based on the market price.  So if the market cap went way up, it would cost less BTS. 
Is this accurate?
Title: Re: Scheduling Proof of Scalability
Post by: rnglab on September 10, 2015, 05:41:25 am
There are two aspects to scalability:

1. How fast can a node actually process transactions / blocks
2. How well can a P2P protocol keep up with the flooding

These are two separate and independent issues. If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part).   We can have a single node be ALL witnesses and then flood the network with as many transactions as possible.     It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. 

We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.

Thanks for the update BM.
I think there's no rush to prove scalability while you devs are working hard on an even better network protocol for graphene.
More so considering the dynamism achieved in releasing 2.0 in the meanwhile with fallback or alternative network configurations, as explained here by Dan.  (https://bitsharestalk.org/index.php/topic,18299.msg235392.html#msg235392)
If flooding 101 witnesses on a single AWS node helps to find CPU or other non-network-related limitations, let's do it. For a test to serve as a public proof of scalability (and FUD killer), I think we should isolate only external factors, and this may better fit the stage when the new communication protocol is almost ready.

As someone who remembers both Novembers (just lurking and learning here until believing I had some value to input other than mining and funding)... as an early adopter I see the aptitude and integrity this project stands on, clear as water.
From that perspective I wouldn't mind seeing BitShares 2.0 relying on a temporary non-distributed mode on the way towards an improved network.

But as a long-time stakeholder who saw many other projects take advantage of lessons learned from BitShares' long experience and transparency just to try to put it down, I wouldn't like to give a chance for more "centralization" FUD if it is worth the effort to avoid it from now on.

Back to the point, I wonder how far the old p2p protocol is from being able to manage, let's say, ~1000 tps on 5-second blocks. I also wonder if getting there means diverting too many resources from new protocol development.

I think it would be easier for most people to understand (and harder to criticize) if the first release remains fully distributed, at the expense of keeping TPS low enough to stay stable over the old network protocol, as long as it meets the initial requirements. It could still even set a new mark on TPS in a distributed way (and keep away Ripple comparisons, to name just one possible attack).

Then the upcoming revamped communication protocol and the increase in TPS would be an extra incentive, another BIG announcement while already standing on a stable platform. An announcement that everyone could check and even help to design or code if you like.

I don't know if we could accomplish similar uptake going the opposite way, that is, starting with just one central node, many nodes on the same server/location, or a central relay node.

If this is feasible, we could even keep going with the public load test - not the vLAN proof of scalability yet, but a proof of real launch-time performance for 2.0, with clear advice that it will be just a fraction of what is coming.

Also, if we take this path, publicly running the test that BM suggested above about taking the p2p protocol out of the scalability measurement could have a much greater impact, because having actual network performance will help to show it as real scalability and not just theoretical.

Just my two BTS
Title: Re: Scheduling Proof of Scalability
Post by: xeroc on September 10, 2015, 06:00:31 am

PS off topic:
now I realize the fees....  4 BTS?(!)  (instead of 0.1 or 0.5 currently)
What if the market cap increases 10-fold, for example, or even more? Is it not too much? (I assume "delegates" can change that.) What about dynamic fees? A percentage like 0.2% for example?
Would it not be better, so we are not in the position every now and then to change the fees?

I believe that bytemaster mentioned '20 cent fees', so I figured they would vary based on the market price.  So if the market cap went way up, it would cost less BTS. 
Is this accurate?
Fees are a parameter that can be defined by shareholders via the committee.
Title: Re: Scheduling Proof of Scalability
Post by: cube on September 10, 2015, 06:19:50 am
+5% +5% +5% +5%

To puppies & cube! You have cleared my fog on what you're trying to do (well, maybe still some mist in the air concerning the poll).

I'll help any way I can.
...
Great to have you!  We are gathering momentum as more help is coming in.   :)

..
If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part).   We can have a single node be ALL witnesses and then flood the network with as many transactions as possible.     It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. 

We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.

Thanks for the input and for providing the right focus on the load test.   We should indeed do away with the p2p WAN testing since the p2p protocol will be undergoing a major upgrade after 2.0. 

It looks like we could proceed with the LAN test consisting of a single node where all the witnesses are located and where the processing of transactions is done.  The 'blasting' part will be offloaded to one or more other computers/instances in the same LAN with gigabit bandwidth.  The transactions will be sent from these computers/instances to the root node via the new Relay mechanism.  If we can do this, we can tell the world bts 2.0 can indeed process 100K tps (or better), setting aside the natural speed/latency limitation of the internet.
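As an illustration of the 'blasting' side, here is a sketch of spreading the load over several worker processes, each driving its own cli_wallet instance pointed at the single witness node; the endpoints and counts are placeholders, not a proposal for the final tooling.

Code: [Select]
# Sketch of how the 'blasting' machines in the LAN setup could spread load over
# several worker processes, each talking to its own cli_wallet instance, all
# pointed at the single witness node. Endpoints and counts are placeholders.
import requests
from multiprocessing import Pool

WALLET_ENDPOINTS = ["http://127.0.0.1:%d/rpc" % port
                    for port in range(8092, 8096)]  # 4 workers, placeholder ports

def blast(url):
    sent = 0
    for i in range(10000):
        payload = {"jsonrpc": "2.0", "method": "transfer", "id": i,
                   "params": ["tester-account", "target-account", "1", "CORE",
                              "lan blast", True]}
        try:
            requests.post(url, json=payload, timeout=5)
            sent += 1
        except requests.RequestException:
            pass  # keep blasting even if a single call fails
    return sent

if __name__ == "__main__":
    with Pool(len(WALLET_ENDPOINTS)) as pool:
        totals = pool.map(blast, WALLET_ENDPOINTS)
    print("total transfers sent:", sum(totals))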
Title: Re: Scheduling Proof of Scalability
Post by: kenCode on September 10, 2015, 07:16:59 am
@nethyb @xeroc @kenCode @cube @sudo @phillyguy @emailtooaj @onceuponatime @godzirra @rnglab @bobmaloney @ccedk @liondani @tonyk @Akado @mike623317 @CLains @DataSecurityNode @puppies @clayop @betax @abit @chryspano @Slappy @Xeldal @merockstar @tbone @Thom @Fox @aloha
 
Removing the WAN limitations could prove our scalability beyond 100K tps. I'm looking forward to this AWS test.
It would be great if we could all record it on video with our phones. Split screen: CLI and xeroc's tps gauge. Other ideas welcome.
 
That's 30 people so far willing to put in some BTS for this 100K tps LAN test.
Anybody else?
Title: Re: Scheduling Proof of Scalability
Post by: cass on September 10, 2015, 09:06:51 am
There are two aspects to scalability:

1. How fast can a node actually process transactions / blocks
2. How well can a P2P protocol keep up with the flooding

These are two separate and independent issues. If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part).   We can have a single node be ALL witnesses and then flood the network with as many transactions as possible.     It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. 

We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.

Thanks for the update BM.
I think there's no rush to prove scalability while you devs are working hard on an even better network protocol for graphene.
More so considering the dynamism achieved in releasing 2.0 in the meanwhile with fallback or alternative network configurations, as explained here by Dan.  (https://bitsharestalk.org/index.php/topic,18299.msg235392.html#msg235392)
If flooding 101 witnesses on a single AWS node helps to find CPU or other non-network-related limitations, let's do it. For a test to serve as a public proof of scalability (and FUD killer), I think we should isolate only external factors, and this may better fit the stage when the new communication protocol is almost ready.

As someone who remembers both Novembers (just lurking and learning here until believing I had some value to input other than mining and funding)... as an early adopter I see the aptitude and integrity this project stands on, clear as water.
From that perspective I wouldn't mind seeing BitShares 2.0 relying on a temporary non-distributed mode on the way towards an improved network.

But as a long-time stakeholder who saw many other projects take advantage of lessons learned from BitShares' long experience and transparency just to try to put it down, I wouldn't like to give a chance for more "centralization" FUD if it is worth the effort to avoid it from now on.

Back to the point, I wonder how far the old p2p protocol is from being able to manage, let's say, ~1000 tps on 5-second blocks. I also wonder if getting there means diverting too many resources from new protocol development.

I think it would be easier for most people to understand (and harder to criticize) if the first release remains fully distributed, at the expense of keeping TPS low enough to stay stable over the old network protocol, as long as it meets the initial requirements. It could still even set a new mark on TPS in a distributed way (and keep away Ripple comparisons, to name just one possible attack).

Then the upcoming revamped communication protocol and the increase in TPS would be an extra incentive, another BIG announcement while already standing on a stable platform. An announcement that everyone could check and even help to design or code if you like.

I don't know if we could accomplish similar uptake going the opposite way, that is, starting with just one central node, many nodes on the same server/location, or a central relay node.

If this is feasible, we could even keep going with the public load test - not the vLAN proof of scalability yet, but a proof of real launch-time performance for 2.0, with clear advice that it will be just a fraction of what is coming.

Also, if we take this path, publicly running the test that BM suggested above about taking the p2p protocol out of the scalability measurement could have a much greater impact, because having actual network performance will help to show it as real scalability and not just theoretical.

Just my two BTS

 +5%
Title: Re: Scheduling Proof of Scalability
Post by: vegolino on September 10, 2015, 11:14:21 am
You can count me in for 5000 BTS.  :)
Title: Re: Scheduling Proof of Scalability
Post by: tbone on September 10, 2015, 01:13:19 pm
I'm in for 5000 BTS.
Title: Re: Scheduling Proof of Scalability
Post by: Fox on September 10, 2015, 04:04:12 pm
For this single host witness + multiple transaction nodes testnet, can we use the proposal/vote functionality to alter some parameters of the existing protocol to support a better test environment?  If memory serves correctly, this may not be feasible due to a two week delay in an approved proposal going live by the then current set of witnesses.  If true, perhaps we need a new genesis for the proposed testnet.

Title: Re: Scheduling Proof of Scalability
Post by: bytemaster on September 10, 2015, 07:03:53 pm
For this single host witness + multiple transaction nodes testnet, can we use the proposal/vote functionality to alter some parameters of the existing protocol to support a better test environment?  If memory serves correctly, this may not be feasible due to a two week delay in an approved proposal going live by the then current set of witnesses.  If true, perhaps we need a new genesis for the proposed testnet.

    Parameters I feel should be altered:
    • Operation Fees: 0.0 BTS (The goal is to spam the network, so let's not burn the fees; keep CORE flowing)
    • Maintenance Period: 24 hours (witnesses need not change; this operation is resource-intensive and out of scope for this test)

Parameter updating does not take 2 weeks in the current test network.  It may be more like 2 hours; someone can probably figure it out by looking at the chain properties or the genesis file. 

A new test net with 0 fees would be the easiest way to test this.
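For reference, here is a sketch of checking the currently active chain parameters (fee schedule, maintenance interval) on a running node, to see what a new genesis or a committee proposal would need to change; the endpoint and field names are assumptions based on my reading of the graphene global_properties object.

Code: [Select]
# Sketch of checking the currently active chain parameters (fee schedule,
# maintenance interval, etc.) on a running node. Assumes a cli_wallet HTTP-RPC
# endpoint; field names follow the graphene global_properties object as I
# understand it.
import json
import requests

WALLET_URL = "http://127.0.0.1:8092/rpc"

payload = {"jsonrpc": "2.0", "method": "get_global_properties", "params": [],
           "id": 1}
props = requests.post(WALLET_URL, json=payload).json()["result"]

params = props["parameters"]
print("maintenance interval (s):", params["maintenance_interval"])
print("current fee schedule:")
print(json.dumps(params["current_fees"], indent=2))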


Title: Re: Scheduling Proof of Scalability
Post by: Akado on September 15, 2015, 08:37:18 pm
There are two aspects to scalability:

1. How fast can a node actually process transactions / blocks
2. How well can a P2P protocol keep up with the flooding

These are two separate and independent issues.    If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part).   We can have a single node be ALL witnesses and then flood the network with as many transactions as possible.     It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. 

We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.

Ok, so we should focus on point 1 - how fast can a node actually process transactions/blocks - that's a start. Does everyone agree with this or have other suggestions? We need Cube or someone who understands this stuff to share some info.

I also assume we need one or more scripts that do the following:

Step one: create a dummy account
Step two: send a transaction

or

Step one: already have multiple accounts created beforehand
Step two: get each of those accounts to perform a transaction

This second method seems better than the one above, simply because with the first method an account would need to be created, receive a deposit and then perform a transaction, then do it all again. That's three operations. Whereas if we did the first two operations beforehand, we would only need to perform one during the test (a rough sketch of this approach follows at the end of this post).


I thought of a cycle (loop) first, but I think that doesn't make any sense unless one machine could run several instances of that script at the same time, or unless the script gets access to hundreds of accounts at the same time and performs multiple transactions at once. I can't think of a way right now, but once again I'm just a beginner coder. I thought about doing cycles, but that would imply getting access to multiple accounts at the same time and I don't see how to do that. Sometimes even having different wallets on the same computer can mess things up a little. No idea how to do this. I'll leave this to the experts.
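For what it's worth, a rough sketch of that second approach, assuming the accounts have already been created and funded (e.g. from the loadtest.db idea sketched earlier in the thread) and their keys are already imported into the wallet; the cli_wallet HTTP-RPC endpoint, database layout and asset symbol are placeholders.

Code: [Select]
# Sketch of the second approach: accounts are created and funded beforehand,
# so during the test each account only has to perform the one transfer
# operation. Assumes the accounts' keys are already imported into the wallet;
# the endpoint, database layout and asset symbol are placeholders.
import sqlite3
import requests

WALLET_URL = "http://127.0.0.1:8092/rpc"

conn = sqlite3.connect("loadtest.db")
names = [row[0] for row in conn.execute("SELECT name FROM accounts")]
conn.close()

def wallet_call(method, params):
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": 1}
    return requests.post(WALLET_URL, json=payload).json()

# round-robin over the pre-created accounts: one transfer per test transaction
for i in range(5000):
    sender = names[i % len(names)]
    receiver = names[(i + 1) % len(names)]
    wallet_call("transfer", [sender, receiver, "1", "CORE", "", True])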
Title: Re: Scheduling Proof of Scalability
Post by: kenCode on September 24, 2015, 09:23:28 am
via: https://bitshares.org/technology/
"the BitShares network can confirm transactions in an average of just 1 second, limited only by the speed of light"
 
100K tps should be the minimum. If those AWS instances were top-of-the-line servers all communicating via fiber (10Gbps-1Tbps+), then we can see what the protocol etc. is truly capable of (in an ideal environment, yes, but it proves what we are touting and then some).
Title: Re: Scheduling Proof of Scalability
Post by: bytemaster on September 24, 2015, 12:36:18 pm
via: https://bitshares.org/technology/
"the BitShares network can confirm transactions in an average of just 1 second, limited only by the speed of light"
 
100K tps should be the minimum. If those AWS instances were top-of-the-line servers all communicating via fiber (10Gbps-1Tbps+), then we can see what the protocol etc. is truly capable of (in an ideal environment, yes, but it proves what we are touting and then some).

In our recent flooding tests what I observed via profiling is that networking code was utilizing about 10x more CPU than the blockchain code.    We also have the slight problem of having to apply every transaction THREE TIMES at the moment, once upon receipt, once upon building the block, and once upon applying the finished block.   

Lastly, we assume an infrastructure for parallel signature verification that does not exist right now.   So our biggest challenges in hitting 100K in real-world tests are:

1. generating that many transactions
2. validating that many signatures
3. network communication bottlenecks

I suppose we could say that graphene is like Intel advertising that their CPUs make the internet faster while you still have a dialup modem. 
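On point 2 above, here is a toy sketch (not graphene's code) of how verifying many secp256k1 signatures can be spread across CPU cores; it uses the third-party ecdsa package purely for illustration of the kind of parallelism such an infrastructure would exploit.

Code: [Select]
# Toy sketch of parallel signature verification. This is NOT graphene's code -
# it only shows that verifying many secp256k1 signatures parallelises well
# across CPU cores. Requires the 'ecdsa' package (pip install ecdsa).
from multiprocessing import Pool
from ecdsa import SigningKey, VerifyingKey, SECP256k1, BadSignatureError

def make_signed_messages(count):
    # sign a batch of dummy messages with one throwaway key
    sk = SigningKey.generate(curve=SECP256k1)
    vk_bytes = sk.get_verifying_key().to_string()
    batch = []
    for i in range(count):
        message = b"tx-%d" % i
        batch.append((vk_bytes, message, sk.sign(message)))
    return batch

def verify_one(item):
    vk_bytes, message, signature = item
    vk = VerifyingKey.from_string(vk_bytes, curve=SECP256k1)
    try:
        return vk.verify(signature, message)
    except BadSignatureError:
        return False

if __name__ == "__main__":
    batch = make_signed_messages(2000)
    with Pool() as pool:  # one worker per CPU core by default
        results = pool.map(verify_one, batch)
    print("verified %d of %d signatures" % (sum(results), len(batch)))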
Title: Re: Scheduling Proof of Scalability
Post by: cube on October 06, 2015, 09:18:34 am
After studying the best possible load testing tool that can work for a low budget, I decided to go for jmeter.  (See https://university.utest.com/introduction-to-load-testing-with-apache-jmeter)

The learning curve for jmeter is pretty ok, but to customise it for graphene testing I needed to pick up Java programming from scratch.  It took me a while to familiarise myself with Java, as well as with the protocol of graphene with which the testing scripts have to interact.  I managed to put together a simple test plan for graphene with the following:

1) Creating a user account with a brain key
2) Transferring funds

The testing infrastructure has been developed, and new tests with different operations can be added.  Below are some screenshots.  I will start a new thread to provide a simple guide on using it.

Edit: See new thread at https://bitsharestalk.org/index.php/topic,18768.msg241679.html#msg241679

(http://graphene.cubeconnex.com/download/graphene-jmeter-load-test0.jpg)
(http://graphene.cubeconnex.com/download/graphene-jmeter-load-test1.jpg)