via https://bitshares.org/technology/: "the BitShares network can confirm transactions in an average of just 1 second, limited only by the speed of light". 100K tps should be the minimum. If those AWS instances were top-of-the-line servers all communicating via fiber (10 Gbps to 1 Tbps+), then we could see what the protocol is truly capable of (in an ideal environment, yes, but it proves what we are touting and then some).
There are two aspects to scalability:
1. How fast can a node actually process transactions / blocks
2. How well can a P2P protocol keep up with the flooding
These are two separate and independent issues. If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part). We can have a single node be ALL witnesses and then flood the network with as many transactions as possible. It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.
For this single-host witness + multiple transaction nodes testnet, can we use the proposal/vote functionality to alter some parameters of the existing protocol to support a better test environment? If memory serves correctly, this may not be feasible due to a two-week delay before an approved proposal goes live under the then-current set of witnesses. If true, perhaps we need a new genesis for the proposed testnet. Parameters I feel should be altered:
Operation Fees: 0.0 BTS (the goal is to spam the network, so let's not burn the fees; keep CORE flowing)
Maintenance Period: 24 hours (witnesses need not change; this operation is resource intensive and out of scope for this test)
Quote from: bytemaster on September 09, 2015, 10:22:41 pm
There are two aspects to scalability:
1. How fast can a node actually process transactions / blocks
2. How well can a P2P protocol keep up with the flooding
These are two separate and independent issues. If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part). We can have a single node be ALL witnesses and then flood the network with as many transactions as possible. It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.
Thanks for the update, BM. I think there's no rush to prove scalability while you devs are working hard on an even better network protocol for Graphene. More so considering the dynamism achieved to release 2.0 in the meanwhile with fallback or alternative network configurations, as explained here by Dan. If flooding 101 witnesses on a single AWS node helps to find CPU or other non-network-related limitations, let's do it. For a test to serve as a public proof of scalability (and FUD killer) I think we should isolate external factors, and this may better fit the stage when the new communication protocol is almost ready. As someone who remembers both Novembers (just lurking and learning here till believing I had some value to input other than mining and funding)...
As an early adopter I see the aptitude and integrity this project stands on, clear as water. From that perspective I wouldn't mind seeing BitShares 2.0 relying on a temporary non-distributed mode on the way towards an improved network.
But as a long-time stakeholder who saw many other projects take advantage of lessons learned from BitShares' long experience and transparency just to try to put it down, I wouldn't like to give a chance for more "centralization" FUD if it is worth the effort to avoid that from now on.
Back to the point, I wonder how far the old P2P protocol is from being able to manage, let's say, ~1000 tps on 5-second blocks. I also wonder if getting there means diverting too many resources from new protocol development.
I think it would be easier for most people to understand (and harder to criticize) if the first release remains fully distributed, at the expense of keeping TPS low enough to stay stable over the old network protocol, as long as it meets the initial requirements. It could still break a new mark on TPS in a distributed way (and keep away Ripple comparisons, to name just one possible attack). Then the upcoming revamped communication protocol and increase in TPS would be an extra incentive, another BIG announcement while already standing on a stable platform.
An announcement that everyone could check and even help to design or code if you like. I don't know if we could accomplish similar uptake going the opposite way, that being starting with just one central node, many nodes on the same server/location, or a central relay node.
If this is feasible, we could even keep going with the public load test: not the vLAN proof of scalability yet, but a proof of real launch-time performance for 2.0, with clear advice that it will be just a fraction of what is coming.
Also, if we take this path, publicly running the test that BM suggested above, taking away the P2P protocol for scalability measurement, could be of much greater impact, because having actual network performance numbers first will help to show it as real scalability and not just theoretical. Just my two BTS
To puppies & cube! You have cleared my fog on what you're trying to do (well, maybe still some mist in the air concerning the poll).I'll help any way I can....
..If we wish to do a proof-of-scalability test, then I would suggest we remove the P2P code from the equation (for the most part). We can have a single node be ALL witnesses and then flood the network with as many transactions as possible. It should be obvious that a "bad P2P protocol" is a different kind of bottleneck than CPU limitations of processing. We are working on revamping the communication protocol in a MAJOR way after BTS 2.0 comes out.
Quote from: liondani on September 08, 2015, 09:57:31 pm
PS off topic: now I realize the fees... 4 BTS?(!) (instead of 0.1 or 0.5 currently). What if the market cap increases 10-fold, for example, or even more? Is it not too much? (I assume "delegates" can change that.) What about dynamic fees? A percentage like 0.2%, for example? Would it not be better, so we are not in the position every now and then of having to change the fees?
I believe that bytemaster mentioned '20 cent fees', so I figured they would vary based on the market price. So if the market cap went way up, it would cost less BTS. Is this accurate?
PS off topic: now I realize the fees... 4 BTS?(!) (instead of 0.1 or 0.5 currently). What if the market cap increases 10-fold, for example, or even more? Is it not too much? (I assume "delegates" can change that.) What about dynamic fees? A percentage like 0.2%, for example? Would it not be better, so we are not in the position every now and then of having to change the fees?
https://docs.google.com/spreadsheets/d/1amqTZZ0dllmEEONW6qvc_07CE1mqcEk0SYRPU4phgEc/edit?disco=AAAAAYfKzkI
Quote from: abit on September 08, 2015, 07:14:25 pm
1. The current version of Graphene has sync issues, especially when the network is under pressure (when there is transaction flooding). I think it's better to do this wider test after the sync issues are fixed.
2. The current test network looks stable because it is highly centralized: 80% of witnesses are running on one or two VPSes controlled by BM/CryptoNomex, and any forks made by other witnesses are simply discarded. In a decentralized network it's harder to decide which fork is effective (or say, longest?). We'd better prove that it's stable on a decentralized network first.
BM has fixed the sync issues. For witnesses, we could create a number of them and spread them out across different computers/instances.
1. The current version of Graphene has sync issues, especially when the network is under pressure (when there is transaction flooding). I think it's better to do this wider test after the sync issues are fixed.
2. The current test network looks stable because it is highly centralized: 80% of witnesses are running on one or two VPSes controlled by BM/CryptoNomex, and any forks made by other witnesses are simply discarded. In a decentralized network it's harder to decide which fork is effective (or say, longest?). We'd better prove that it's stable on a decentralized network first.
I don't know if there is anything to learn from this, but I thought I would share it in case it gives you all some ideas on ways to stress test the network: https://bitcoinmagazine.com/21842/coinwallet-begins-pre-test-bitcoin-network-schedules-largest-stress-test-begin-september-10/
Anybody with bitcoins might want to brace yourselves for tomorrow's bombardment. Hope this helps in some way.
Thanks for the response, cube. I was thinking that it would make more sense to start with a wide area network test. I know this will not reach the actual potential throughput of Graphene, and will be limited by the network speed and the hardware it is running on. It would utilize volunteers rather than raising funds; that is what would necessitate the ease of use and support for multiple OSes. This is also the reason why I would have it create an account, so that users wouldn't have to. I am leaning towards thinking that we should stress test the existing test network rather than creating a new network to benchmark Graphene, but if you think we should do the benchmark first and then work on improving the test network's performance later, that's not a big deal to me at all.
Not sure how to work around this. Maybe have a global account registered and its priv key distributed in the script. That way testers can register themselves with that account.
Quote
What does everyone think? Is this a good enough starting point?
Absolutely. My problem is that I can only start working on this next week.
What does everyone think? Is this a good enough starting point?
2) What is our goal? I propose a goal of 1000tps sustained for 5 minutes.
4) What do we need?
a) compiled binaries for windows, osx, and linux
b) instructions on how to download, install, and start the witness_node, cli_wallet, and a python script (let's focus on getting everyone to the startup screen on cli_wallet, and take care of account creation and transaction spamming with a script)
c) a script that will:
1c) create and unlock a wallet
2c) import a known test private key to get a balance
3c) create an account and upgrade it to lifetime status
4c) spam transactions
5c) keep an eye on account balance
6c) withdraw vesting funds so testing can continue longer
7c) log anything that might be interesting, both to screen and to file.
Okay, here goes. Please help me flesh this plan out. Or alternately, if you think it's stupid, you could just tell me that. Although I would appreciate it if you were a little bit polite when you made fun of my plan.
1) What are we testing? For the first phase I propose that we attempt a stress test of the test network WAN. This is a real world test of what we will be able to do when 2.0 launches.
3) Why test? First of all it will be good publicity to show what we can actually do, not just the theoretical limit. Secondly, it will provide lots of information about how the binaries behave on different machines, and how the network deals with that amount of traffic.
4) What do we need? a) compiled binaries for windows, osx, and linux
b) instructions on how to download, install, and start the witness_node, cli_wallet, and a python script (let's focus on getting everyone to the startup screen on cli_wallet, and take care of account creation and transaction spamming with a script)
c) a script that will:
1c) create and unlock a wallet
2c) import a known test private key to get a balance
3c) create an account and upgrade it to lifetime status
4c) spam transactions
5c) keep an eye on account balance
6c) withdraw vesting funds so testing can continue longer
7c) log anything that might be interesting, both to screen and to file.
Alternately, if we can set up a faucet, the script can register the new accounts under a lifetime account, and our excess vesting funds will be in a centralized location. It would be great if this script could be the interaction point for testers: asking any required information at the beginning, and displaying any information needed. That would prevent testers from having to learn anything about the witness_node or cli_wallet.
d) Lots of people to test. If installing 3 programs and typing in a few commands sounds like something you can do, then you could help us make history.
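A minimal sketch of what steps 1c through 5c of such a script could look like, assuming a cli_wallet is already running with its RPC endpoint open (e.g. started with `-H 127.0.0.1:8092`). The account names, password, and truncated WIF key below are placeholders, and the wallet method names used (`unlock`, `import_key`, `upgrade_account`, `transfer`, `list_account_balances`) should be checked against the cli_wallet build being tested:

```python
import json
import urllib.request

RPC_URL = "http://127.0.0.1:8092/rpc"  # cli_wallet started with -H 127.0.0.1:8092

def rpc_payload(method, params):
    """Build a JSON-RPC request for the cli_wallet."""
    return {"jsonrpc": "2.0", "method": method, "params": params, "id": 1}

def call(method, *params):
    """Send one wallet API call and return its result."""
    req = urllib.request.Request(
        RPC_URL,
        data=json.dumps(rpc_payload(method, list(params))).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

def spam(account, target, count):
    """4c) spam `count` small transfers and 7c) log each one."""
    for i in range(count):
        tx = call("transfer", account, target, "1", "TEST", "spam %d" % i, True)
        print("sent", i, tx)  # a real run would also append this to a log file

if __name__ == "__main__":
    call("unlock", "supersecret")                     # 1c) unlock the wallet
    call("import_key", "testacct", "5J...")           # 2c) import a funded test key (placeholder)
    call("upgrade_account", "testacct", True)         # 3c) lifetime membership
    print(call("list_account_balances", "testacct"))  # 5c) keep an eye on the balance
    spam("testacct", "init0", 1000)
```

Error handling for an empty balance (5c, as betax notes below in the thread) and the vesting withdrawal (6c) would still need to be added on top of this skeleton.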
If you can help us make this test a reality please let me know. We already have binaries available, but will probably need them updated by the time we are ready to test. I can help with instructions. I could write the script, but it would take me 10 times longer than xeroc, and the final product would be 10 times worse than xeroc's would be. I will also of course help run nodes for the test, both at home and on some VPSes.
Xeroc, what do you think in regards to the script idea I posted? Would you be able to donate that script to the cause, or would you want to set a price up front?
No. To save on fees. The hardest part about testing so far has been getting your hands on enough CORE. 20 BTS per transaction adds up fast. If the account is upgraded, that is lowered to 4 BTS per transaction after rebate. Since my goal is 1000 tps for 5 minutes, if each node can handle 10 tps for 5 minutes, that would be a savings of 38k BTS after the 10k BTS fee to upgrade. I hope that made sense.
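The arithmetic behind that 38k figure, as a quick sanity check (using the 20 BTS / 4 BTS fee levels and 10k BTS upgrade fee quoted above):

```python
# One node's share of the test: 10 tps sustained for 5 minutes.
txs = 10 * 5 * 60                 # 3,000 transactions per node

fee_basic = 20                    # BTS per transfer, basic account
fee_lifetime = 4                  # BTS per transfer after lifetime rebate
upgrade_fee = 10_000              # one-time lifetime membership fee in BTS

gross_savings = (fee_basic - fee_lifetime) * txs   # 16 BTS saved x 3,000 txs = 48,000
net_savings = gross_savings - upgrade_fee          # 48,000 - 10,000 = 38,000 BTS
print(net_savings)  # -> 38000
```

So the upgrade pays for itself well within a single node's five-minute run at 10 tps.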
Very good!! Just some small extra points about the load testing script. If we identify the account names beforehand, nobody should run out of CORE except through transaction fees, as we can all send to each other. I don't know if 1c is possible, as you need to interact with an unlocked account. Handle errors: for example, if we don't have enough balance the script will fail (5c). If 1c is possible we could build scripts that pick up usernames/ids/balances from a server, and build an image (docker containers) that on start will pick a user to set up.
Quote from: liondani on September 08, 2015, 05:59:45 pm
Quote from: puppies on September 08, 2015, 05:30:22 pm
3c) create an account and upgrade it to lifetime status
to see if it works?
No. To save on fees. The hardest part about testing so far has been getting your hands on enough CORE. 20 BTS per transaction adds up fast. If the account is upgraded, that is lowered to 4 BTS per transaction after rebate. Since my goal is 1000 tps for 5 minutes, if each node can handle 10 tps for 5 minutes, that would be a savings of 38k BTS after the 10k BTS fee to upgrade. I hope that made sense.
Quote from: puppies on September 08, 2015, 05:30:22 pm
3c) create an account and upgrade it to lifetime status
to see if it works?
3c) create an account and upgrade it to lifetime status
Quote from: betax on September 08, 2015, 02:34:12 pm
To summarise: it needs a test plan (what are we testing, how are we testing it, what do we want to achieve / why are we testing), environment(s) setup, scripts to execute tests, and of course who is going to participate. Once this is done, we can think about what money we need to achieve this.
Okay, here goes. Please help me flesh this plan out. Or alternately, if you think it's stupid, you could just tell me that. Although I would appreciate it if you were a little bit polite when you made fun of my plan.
1) What are we testing? For the first phase I propose that we attempt a stress test of the test network WAN. This is a real world test of what we will be able to do when 2.0 launches.
2) What is our goal? I propose a goal of 1000 tps sustained for 5 minutes.
3) Why test? First of all it will be good publicity to show what we can actually do, not just the theoretical limit. Secondly, it will provide lots of information about how the binaries behave on different machines, and how the network deals with that amount of traffic.
4) What do we need?
a) compiled binaries for windows, osx, and linux
b) instructions on how to download, install, and start the witness_node, cli_wallet, and a python script (let's focus on getting everyone to the startup screen on cli_wallet, and take care of account creation and transaction spamming with a script)
c) a script that will:
1c) create and unlock a wallet
2c) import a known test private key to get a balance
3c) create an account and upgrade it to lifetime status
4c) spam transactions
5c) keep an eye on account balance
6c) withdraw vesting funds so testing can continue longer
7c) log anything that might be interesting, both to screen and to file.
Alternately, if we can set up a faucet, the script can register the new accounts under a lifetime account, and our excess vesting funds will be in a centralized location. It would be great if this script could be the interaction point for testers.
Asking any required information at the beginning, and displaying any information needed. That would prevent testers from having to learn anything about the witness_node or cli_wallet.
d) Lots of people to test. If installing 3 programs and typing in a few commands sounds like something you can do, then you could help us make history.
What does everyone think? Is this a good enough starting point? If you can help us make this test a reality please let me know. We already have binaries available, but will probably need them updated by the time we are ready to test. I can help with instructions. I could write the script, but it would take me 10 times longer than xeroc, and the final product would be 10 times worse than xeroc's would be. I will also of course help run nodes for the test, both at home and on some VPSes.
Xeroc, what do you think in regards to the script idea I posted? Would you be able to donate that script to the cause, or would you want to set a price up front? Either way, if we end up doing this, please consider tipping xeroc, cube, and maqifrnswa. Please let me know if I have missed any other tip-worthy contributors.
To summarise: it needs a test plan (what are we testing, how are we testing it, what do we want to achieve / why are we testing), environment(s) setup, scripts to execute tests, and of course who is going to participate. Once this is done, we can think about what money we need to achieve this.
Quote from: Xeldal on September 08, 2015, 02:12:36 pm
I thought Dan said they already did a LAN test, and achieved something like 186k tps. Is there a reason we're trying to do this again, other than capturing it on video? I would think they would have some scripts for this already also. The WAN test might be more interesting. It sounds like there are still a great deal of optimizations that can be implemented. Is now the best time to spend money on powerful/connected servers, only to do it again later with better optimizations? Why can we not just use the existing test network in an organized fashion to capture this data? It may not produce the ultimate upper bounds, but it should still reveal bottlenecks and produce something meaningful for video or whatever media angle you're interested in. I'm not fully convinced it's best to spend a bunch of money at this time. I'm willing to donate some personal funds to aid in this:
1. I'm contributing with the following amount of BTS: 2500
2. N/A
3. N/A
As for any significant amount from BitsharesBreakout, at this time I'd like to hold off for a moment to get a better feel for the value derived and the necessity of doing this now, and in the way described. Get some feedback from the devs etc. as to whether this is valuable for their work.
I have to agree with this; we are testing for testing's sake. Why focus on a scenario that we can build towards, and not focus on the current world? Our testnet is a good way to prove what we can do now, with the current VPSes/servers we actually need. Probably best is to organise the testnet in a better way. If we are using Azure/AWS, we can ensure we are all in the same region and using better connectivity. If you are using a home server you are still good to go. We need to ensure that we have enough funds to test. Most of us have used puppies'/clayop's autohotkey script or xeroc's (I used xeroc's as I was using putty) and quickly ran out.
If xeroc's script is modified so we distribute funds across all named participants, we might not run out of CORE. Also ensure that we can quickly restart our witness if it goes down. Maybe the scripts should run a counter to verify all the transactions are sent/received. Once all this is organised, then we can experiment with scaling vertically and horizontally: more transactions, more clients/users to count votes, bigger VMs.
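One way that sent/received counter could work, as a sketch: track broadcast transaction ids in a set and tick them off as they show up in blocks. How a real script would pull confirmed ids out of the chain (e.g. via a `get_block` call) is left as an assumption here; only the bookkeeping is shown:

```python
class TxCounter:
    """Track broadcast transaction ids and tick them off as confirmed."""

    def __init__(self):
        self.sent = set()
        self.confirmed = set()

    def record_sent(self, txid):
        """Call right after broadcasting a transaction."""
        self.sent.add(txid)

    def record_confirmed(self, txid):
        """Call for each transaction id seen in a produced block."""
        if txid in self.sent:
            self.confirmed.add(txid)

    def missing(self):
        """Transactions we broadcast but never saw in a block."""
        return self.sent - self.confirmed
```

At the end of a run, a non-empty `missing()` set would point straight at dropped transactions, which is exactly the kind of data the devs would want from a flood test.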
I thought Dan said they already did a LAN test, and achieved something like 186k tps. Is there a reason we're trying to do this again, other than capturing it on video? I would think they would have some scripts for this already also. The WAN test might be more interesting. It sounds like there are still a great deal of optimizations that can be implemented. Is now the best time to spend money on powerful/connected servers, only to do it again later with better optimizations? Why can we not just use the existing test network in an organized fashion to capture this data? It may not produce the ultimate upper bounds, but it should still reveal bottlenecks and produce something meaningful for video or whatever media angle you're interested in. I'm not fully convinced it's best to spend a bunch of money at this time. I'm willing to donate some personal funds to aid in this:
1. I'm contributing with the following amount of BTS: 2500
2. N/A
3. N/A
As for any significant amount from BitsharesBreakout, at this time I'd like to hold off for a moment to get a better feel for the value derived and the necessity of doing this now, and in the way described. Get some feedback from the devs etc. as to whether this is valuable for their work.
I thought Dan said they already did a LAN test, and achieved something like 186k tps. Is there a reason we're trying to do this again? other than capturing it on video. I would think they would have some scripts for this already also.
It sounds like there are still a great deal of optimizations that can be implemented. Is now the best time to spend money on powerful/connected servers, only to do it again later with better optimizations?
I could assist with scripting (when I have time) ... but I need a clear list of things you need implemented
I for one would be happy to contribute some resources (donate some BTS) or spin up a number of AWS instances to help with load testing, and I'm sure many others in the community would also help if someone could develop the instructions or scripts to perform a private testnet load/tps test.
Using AWS, you could effectively have a 10Gb private LAN with the ability to spin up a significant number of servers for a short period of time at reasonable cost using spot pricing.
Here is an article on how someone has configured AWS instances to achieve 1 Million TPS for just $1.68/hour: http://highscalability.com/blog/2014/8/18/1-aerospike-server-x-1-amazon-ec2-instance-1-million-tps-for.html
If we chime in and say we (as the community) would be willing to fund/support/provide resources for this test, it may encourage one of us to take it on...
Why do people have to donate bts? We can demonstrate scalability in a test net.
2) The Local Area Network would need to be a gigabit environment so that we can test the possibility of 100K tps. nethyb mentioned that AWS has such a network; that is cool. Depending on the load test script created, we may end up with either a MS Windows or a Linux environment, and the cost may vary slightly. We need to determine how much load (of the 100K tps) to offload per machine in order to find out the optimal number of computers/AWS instances to pump these transactions. We probably need to do a test to find out the max load a computer/instance can take.
I am not sure if there are performance test scripts already present in the graphene test suite (I have not taken a look yet). If not, we will need to develop one, possibly by modifying xeroc's python rpc suite. We will still need to write scripts that generate dummy accounts and dummy transactions. A big part of the work is right here.
Edit: I just checked the graphene repository. I am afraid there is no such transaction test script/program available yet. We need to look for a load test input and capture tool and use it to write our input scripts.
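A back-of-the-envelope way to size that fleet once the per-instance max load has been measured. The 2,000 tps per-instance ceiling below is only a placeholder until that measurement exists; the headroom factor keeps each load generator below its ceiling so it is not itself the bottleneck:

```python
import math

def instances_needed(target_tps, per_instance_tps, headroom=0.8):
    """Number of load-generator instances required to push target_tps,
    driving each instance at only `headroom` of its measured ceiling."""
    usable = per_instance_tps * headroom
    return math.ceil(target_tps / usable)

# Placeholder ceiling of 2,000 tps per instance at 80% headroom:
print(instances_needed(100_000, 2_000))  # -> 63 instances
```

Rerunning this with the real measured ceiling would give the AWS spot-instance count (and hence the rough cost) for the 100K tps test.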
Quote from: cube on September 07, 2015, 11:14:51 pm
I'd like to volunteer my time to drive this, but we need more technical volunteers and donations. My guesstimate is that the cost is much more than what has been donated so far.
Didn't we have BitSharesBreakout for this? If I'm not mistaken on the name, I remember it had around 1M BTS for donation purposes. A delegate got elected for that, so some funds could be used for this? Throughout the week we will probably have more people helping out and donating, but if there's a lack I could double mine. And ffs, I hope no one forgets to record this; it would be an epic fail. You're right about the scripts, but could they be written, and could everyone work with them, in those 3 weeks? With all the work being done I don't know if this is achievable. People might have other priorities now... although this is important too, imo.
I'd like to volunteer my time to drive this, but we need more technical volunteers and donations. My guesstimate is that the cost is much more than what has been donated so far.
@nethyb - We need a bid, please. @kenCode @cube @sudo @phillyguy @emailtooaj @onceuponatime @godzirra @rnglab @bobmaloney @ccedk @liondani @tonyk @Akado
Removing the WAN limitations could prove our scalability beyond 100K tps. I'm looking forward to this AWS test. It would be great if we could all record it on video with our phones: split-screen CLI and xeroc's tps gauge. Other ideas welcome. This is 14 people so far willing to put in some BTS for this 100K tps LAN test. Anybody else?
Great initiative. Following some thoughts from the previous thread (quote below), wouldn't it be better to prioritize development status (and dev schedules) before coordinating a date with volunteers? Nice to see the pledge growing ( :
Quote from: rnglab on September 07, 2015, 09:26:18 pm
Quote from: puppies on September 07, 2015, 09:01:58 pm
So you guys are talking about setting up a private network of aws machines in an attempt to push 100k tps? I don't really see the utility, since it's not something we could currently do in the wild. I think we would be better served if we got a large amount of people to join the actual test net and see how high we could get the tps there.
I see it more as an open demonstration of scalability. Sometime after the last code tweaks and the devs' local stress tests, it could result in an auspicious, community-driven final test, and also the opening bells for Bitshares 2.0.
Quote from: puppies on September 07, 2015, 09:01:58 pm
So you guys are talking about setting up a private network of aws machines in an attempt to push 100k tps? I don't really see the utility, since it's not something we could currently do in the wild. I think we would be better served if we got a large amount of people to join the actual test net and see how high we could get the tps there.
I see it more as an open demonstration of scalability. Sometime after the last code tweaks and the devs' local stress tests, it could result in an auspicious, community-driven final test, and also the opening bells for Bitshares 2.0.
So you guys are talking about setting up a private network of aws machines in an attempt to push 100k tps? I don't really see the utility, since it's not something we could currently do in the wild. I think we would be better served if we got a large amount of people to join the actual test net and see how high we could get the tps there.
Quote from: tonyk on September 07, 2015, 10:52:08 pm
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: 300K
OK tony, that seems to be what you expect the total network to achieve, so I edited the template. What I meant was how much tps a single tester can do during the test. And could we even handle 300k?
1. I'm contributing with the following amount of BTS: 2500
2. I'm contributing with the following resources: N/A
3. I'm aiming for the following amount of transactions per second: 300K
(old thread: bitsharestalk/index.php/topic,18299.0/all.html)
Quote from: nethyb on September 06, 2015, 04:48:50 am
I for one would be happy to contribute some resources (donate some BTS) or spin up a number of AWS instances to help with load testing, and I'm sure many others in the community would also help if someone could develop the instructions or scripts to perform a private testnet load/tps test. Using AWS, you could effectively have a 10Gb private LAN with the ability to spin up a significant number of servers for a short period of time at reasonable cost using spot pricing. Here is an article on how someone has configured AWS instances to achieve 1 Million TPS for just $1.68/hour: http://highscalability.com/blog/2014/8/18/1-aerospike-server-x-1-amazon-ec2-instance-1-million-tps-for.html
If we chime in and say we (as the community) would be willing to fund/support/provide resources for this test, it may encourage one of us to take it on...
@nethyb - We need a bid, please. @kenCode @cube @sudo @phillyguy @emailtooaj @onceuponatime @godzirra @rnglab @bobmaloney @ccedk @liondani @tonyk @Akado
Removing the WAN limitations could prove our scalability beyond 100K tps. I'm looking forward to this AWS test. It would be great if we could all record it on video with our phones: split-screen CLI and xeroc's tps gauge. Other ideas welcome. This is 14 people so far willing to put in some BTS for this 100K tps LAN test. Anybody else?
1. I'm contributing with the following amount of BTS:
2. I'm contributing with the following resources (for testers only):
3. I'm aiming to contribute the following amount of transactions per second (for testers only):