Author Topic: October 2nd Test Network  (Read 28891 times)


Offline bytemaster

For the latest updates checkout my blog: http://bytemaster.bitshares.org
Anything said on these forums does not constitute an intent to create a legal obligation or contract between myself and anyone else.   These are merely my opinions and I reserve the right to change them at any time.

Offline Thom

If I participate in the TPS test (depends on when it happens) I will use:

6) Crown Cloud  4GB RAM, 2 CPU cores, 3TB bandwidth,  40GB disk in Los Angeles, USA
Injustice anywhere is a threat to justice everywhere - MLK |  Verbaltech2 Witness Reports: https://bitsharestalk.org/index.php/topic,23902.0.html

Offline liondani

I will use for the test network a VPS that is located in Germany:

        OS:    Ubuntu Linux, 64-bit
       CPU:    Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz, 2 cores
       RAM:    6 GB
       SSD:    500 GB
 bandwidth:    100 Mbit/s port

Offline Thom

VPS Specs:

Code: [Select]
The following 3 hosts are all Vultr VPS nodes, 1GB RAM, 1 CPU core, 2TB bandwidth, 20GB SSD.
They all run Debian 8.0, 64-bit OS:

1) New Jersey  -- with DDoS protection
2) Amsterdam   -- with DDoS protection
3) Tokyo       -- no DDoS protection

These nodes all run Ubuntu 14.04, 64-bit OS:

4) Vultr        4GB RAM, 2 CPU cores, 1.6TB bandwidth, 90GB disk in Sydney, Australia
5) Crown Cloud  2GB RAM, 2 CPU cores, 3TB bandwidth,   30GB disk in Frankfurt, Germany
6) Crown Cloud  4GB RAM, 2 CPU cores, 3TB bandwidth,   40GB disk in Los Angeles, USA
7) Bithost      2GB RAM, 2 CPU cores, 3TB bandwidth,   40GB disk in Singapore
« Last Edit: October 05, 2015, 02:23:47 pm by Thom »

Offline clayop

Quote from: bytemaster
Code: [Select]
2046483ms th_a       application.cpp:516           get_item             ] Couldn't find block 00008e220adc1561e0ceb4964000000000000000 -- corresponding ID in our chain is 00008e220adc1561e0ceb496e2fe61effc44196e
2046486ms th_a       application.cpp:432           handle_transaction   ] Got transaction from network
./run.sh: line 1:  8080 Segmentation fault      ./witness_node --genesis-json "oct-02-testnet-genesis.json" -d oct02 -w \""1.6.0"\" -w \""1.6.1"\" -w \""1.6.2"\" -w \""1.6.3"\" -w \""1.6.4"\" -w \""1.6.5"\" -w \""1.6.6"\" -w \""1.6.7"\" -w \""1.6.8"\" -w \""1.6.9"\" -w \""1.6.10"\" -w \""1.6.11"\" --replay-blockchain

This is what the init node said before it died during the flood. We are looking into what could have caused it.

As far as release plans go, we will protect the network from excessive flooding by rate-limiting transaction throughput in the network code. We recently made a change that allowed a peer to fetch more than one transaction at a time, and that change is what allowed us to hit 1000+ TPS. It had the side effect of making the network vulnerable to flooding. For the BTS2 network we will revert to fetching only one transaction at a time, which will limit network throughput to under 100 TPS. This limit exists only in the P2P code, which can be upgraded at any time without requiring a hard fork.

If the network is generating anywhere near 100 TPS, it will be earning more than $1M per day in fees and our market cap would be closer to Bitcoin's. In other words, this should have zero impact on customer experience over the next several months. By the time we start gaining that kind of traction, we will have worked through the kinks of a higher-throughput network layer.

Is there any other way to cap TPS (e.g. GRAPHENE_DEFAULT_MAX_TRANSACTION_SIZE)?
Bitshares Korea - http://www.bitshares.kr
Vote for me and see Korean Bitshares community grows
delegate-clayop

Offline liondani

Quote from: bytemaster
For the BTS2 network we will revert to fetching only one transaction at a time, which will limit network throughput to under 100 TPS. This limit exists only in the P2P code, which can be upgraded at any time without requiring a hard fork.

Does that mean we will have no problem (performance-wise) continuing with more than 17 witnesses in the early days (we could continue to stick with 101)?
At least until we upgrade to more than 100 TPS in the future...
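Rough numbers behind that question: with a round-robin schedule, the per-witness block interval scales linearly with the witness count. A quick sketch, assuming BitShares 2.0's 3-second block interval (an assumption here; check the actual chain parameters):

```shell
# Back-of-envelope: how often each witness gets a block slot, assuming a
# 3-second block interval and round-robin scheduling (both assumptions).
awk -v witnesses=101 -v interval=3 'BEGIN {
  printf "each of %d witnesses signs a block every ~%d seconds\n",
         witnesses, witnesses * interval
}'
```

So witness count affects how often each witness signs, not how many transactions fit in each block.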

Offline wuyanren

Quote from: bytemaster (see the post quoted above)
I haven't seen you for a few days; I've missed you very much.

Offline Thom

In the old client I used wallet_account_balance_ids to get the IDs, then used the first one in the list as an arg to blockchain_get_balance and confirmed it had the correct balance. That also provided the owner key for that balance, which I then used with wallet_dump_private_key to get the private key for that specific balance. All this in the 0.9.3c client as per xeroc's docs.

In the cli_wallet of Graphene I created the account delegate.verbaltech with import_key delegate.verbaltech <account private key>. I then tried to import the balance:

Code: [Select]
import_balance delegate.verbaltech [<balance private key from 0.9.3c>] true
2850496ms th_a       wallet.cpp:3138               import_balance       ] balances: []
10 assert_exception: Assert Exception
operations.size() > 0: A transaction must have at least one operation
    {"trx":{"ref_block_num":0,"ref_block_prefix":0,"expiration":"1970-01-01T00:00:00","operations":[],"extensions":[]}}
    th_a  transaction.cpp:51 validate

    {"name_or_id":"delegate.verbaltech"}
    th_a  wallet.cpp:3179 import_balance

There's always something in the way - URG!  What crazy little syntactical error causes this?
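For reference, here is the whole balance-import flow described above collected in one place. These commands need a running 0.9.3c client console and a Graphene cli_wallet respectively, so this is a transcript sketch rather than a runnable script:

```shell
# --- in the 0.9.3c client console ---
# wallet_account_balance_ids <account>      # list the account's balance IDs
# blockchain_get_balance <balance_id>       # verify the amount; note the owner key
# wallet_dump_private_key <owner_key>       # private key for that specific balance

# --- in the Graphene cli_wallet ---
# import_key delegate.verbaltech <account private key>
# import_balance delegate.verbaltech ["<balance private key from 0.9.3c>"] true
```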

It seems the 02-oct snapshot was not made on October 2nd... I am also missing many of my funds that I thought would be there...

Doh! No wonder I couldn't import the balance! I did the transfer on the 3rd, so it wasn't in the snapshot. This would be an important point to make in your docs, xeroc; it never occurred to me. Also, if the failure is due to a missing balance, the error message is worthless as to the cause.

This also highlights the need for a brief explanation of just what the genesis.json file contains that each testnet requires. I asked, but got no reply, whether the genesis.json file needs updating each testnet solely to pull the latest balances from the 0.9.x chain, or whether there is some other reason. It seems the same genesis.json could be used for multiple test iterations if it were just a snapshot.
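To illustrate the question, here is a hypothetical sketch of the parts of a Graphene genesis.json that change between testnets. The field names are recalled from Graphene's genesis_state_type and should be verified against the source; every value below is a placeholder, not real chain data:

```shell
# Write a placeholder genesis sketch. The key point: initial balances are
# baked in from the 0.9.x snapshot, so a fresh snapshot means a new file,
# while everything else could in principle stay the same between tests.
cat > genesis-sketch.json <<'EOF'
{
  "initial_timestamp": "2015-10-02T00:00:00",
  "initial_balances": [
    { "owner": "BTS-placeholder-address", "asset_symbol": "CORE", "amount": "1000" }
  ],
  "initial_witness_candidates": [
    { "owner_name": "init0", "block_signing_key": "BTS-placeholder-key" }
  ]
}
EOF
```

If that is the whole story, the same file would indeed work across test iterations until a fresh balance snapshot is wanted.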
« Last Edit: October 05, 2015, 12:56:11 pm by Thom »

Offline liondani

Quote from: bytemaster (see the post quoted above)


Offline bytemaster

Code: [Select]
2046483ms th_a       application.cpp:516           get_item             ] Couldn't find block 00008e220adc1561e0ceb4964000000000000000 -- corresponding ID in our chain is 00008e220adc1561e0ceb496e2fe61effc44196e
2046486ms th_a       application.cpp:432           handle_transaction   ] Got transaction from network
./run.sh: line 1:  8080 Segmentation fault      ./witness_node --genesis-json "oct-02-testnet-genesis.json" -d oct02 -w \""1.6.0"\" -w \""1.6.1"\" -w \""1.6.2"\" -w \""1.6.3"\" -w \""1.6.4"\" -w \""1.6.5"\" -w \""1.6.6"\" -w \""1.6.7"\" -w \""1.6.8"\" -w \""1.6.9"\" -w \""1.6.10"\" -w \""1.6.11"\" --replay-blockchain

This is what the init node said before it died during the flood. We are looking into what could have caused it.

As far as release plans go, we will protect the network from excessive flooding by rate-limiting transaction throughput in the network code. We recently made a change that allowed a peer to fetch more than one transaction at a time, and that change is what allowed us to hit 1000+ TPS. It had the side effect of making the network vulnerable to flooding. For the BTS2 network we will revert to fetching only one transaction at a time, which will limit network throughput to under 100 TPS. This limit exists only in the P2P code, which can be upgraded at any time without requiring a hard fork.

If the network is generating anywhere near 100 TPS, it will be earning more than $1M per day in fees and our market cap would be closer to Bitcoin's. In other words, this should have zero impact on customer experience over the next several months. By the time we start gaining that kind of traction, we will have worked through the kinks of a higher-throughput network layer.
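A back-of-envelope check of that fee figure. The average fee per transaction is an assumed value here (roughly $0.12), not a number stated in the thread:

```shell
# Sanity check of the "~$1M/day at 100 TPS" claim.
# fee is an assumption: average USD fee per transaction.
awk -v tps=100 -v fee=0.12 'BEGIN {
  txday = tps * 86400                     # transactions per day (86400 s/day)
  printf "tx/day: %d  fees/day: $%.0f\n", txday, txday * fee
}'
```

At 100 TPS that is about 8.6 million transactions a day, so even a modest per-transaction fee lands in the $1M/day range.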

Offline xeroc

Quote from: Thom (see the post above)

It seems the 02-oct snapshot was not made on October 2nd... I am also missing many of my funds that I thought would be there...

Offline clayop


1 CPU (Intel Ivy Bridge)
3.75 GB memory

Offline mindphlux

My test witness runs on a dedicated server with 8 cores and 16 GB of RAM.

Should be enough for high performance.
Please consider voting for my witness mindphlux.witness and my committee user mindphlux. I will not vote for changes that affect witness pay.

Offline clayop

Quote from: iHashFury
I will double my specs for the next test and post the results of the stress test.

Great. Test witnesses should probably post their VPS specs.

iHashFury

Vultr with 1GB RAM, 1 CPU - only running minimal services and witness_node.

This suggests maybe 2 CPUs would be better for 1000 TPS, as the CPU peaks at about 140%.

I will double my specs for the next test and post the results of the stress test.

Any thoughts?
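One trivial way to turn that 140% peak into a sizing estimate: a peak above 100% means the process wants more than one core's worth of compute, so rounding up gives a minimum core count for headroom.

```shell
# Minimum cores implied by a peak CPU reading (140% observed above).
awk -v peak_pct=140 'BEGIN {
  cores = int((peak_pct + 99) / 100)      # ceiling division by 100
  printf "suggested minimum cores: %d\n", cores
}'
```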

 
« Last Edit: October 05, 2015, 10:38:02 am by iHashFury »