BitShares Forum
Main => General Discussion => Topic started by: bytemaster on October 02, 2015, 01:17:03 pm
-
The last test network died due to a poorly coordinated and rushed hardfork. Mostly because I forgot to upgrade half the witnesses :*(
https://github.com/cryptonomex/graphene/releases
Let's try it again.
-
The last test network died due to a poorly coordinated and rushed hardfork. Mostly because I forgot to upgrade half the witnesses :*(
https://github.com/cryptonomex/graphene/releases
Let's try it again.
are you related to cagara? :P :D
-
Building ...
-
The last test network died due to a poorly coordinated and rushed hardfork. Mostly because I forgot to upgrade half the witnesses :*(
https://github.com/cryptonomex/graphene/releases
Let's try it again.
How can we prevent similar situations from occurring? Having a single person maintain the network is not a good idea. It seems that 17 witnesses are not safe enough.
-
Witness spartako (1.6.12) is ready, with voting proxy set to 'dan':
get_witness spartako
{
"id": "1.6.12",
"witness_account": "1.2.72822",
"last_aslot": 0,
"signing_key": "GPH7qFryjVpfkndAW4siraK1dV4f8gGxRsgnry7ErPFRjirdwVJEG",
"vote_id": "1:32",
"total_votes": 0,
"url": "",
"total_missed": 0
}
-
Instructions updated (if anybody needs them now :) ) here: https://github.com/cryptonomex/graphene/wiki/How-to-setup-your-witness-for-test-net-(Ubuntu-14.04)
-
List your nick here if you want 5000 CORE to register your witness.
-
dele-puppy, please, spartako
-
up and proxying to dan.
-
List your nick here if you want 5000 CORE to register your witness.
riverhead
Thanks!
-
delegate-clayop is ready and set voting proxy to 'dan'
get_witness delegate-clayop
{
"id": "1.6.15",
"witness_account": "1.2.22404",
"last_aslot": 0,
"signing_key": "GPH6ESZ9926WQiJ7K2FLPetxd6MVhzH8bgj7LX7zmYH3D7XXxc3VE",
"vote_id": "1:35",
"total_votes": 0,
"url": "http://www.bitshares.kr/",
"total_missed": 0
}
-
List your nick here if you want 5000 CORE to register your witness.
Please send 5000 CORE to bitcube for registration. Thank you!
-
List your nick here if you want 5000 CORE to register your witness.
Please send 5000 CORE to xeldal. Thanks!!
-
@spartako
Thank you!
witness bitcube up and ready for voting. Please vote for bitcube.
get_witness bitcube
{
"id": "1.6.16",
"witness_account": "1.2.8206",
-
sent to riverhead, cube, and xeldal
unlocked >>> transfer betaxtrade bitcube 10000 CORE "" true
transfer betaxtrade bitcube 10000 CORE "" true
{
"ref_block_num": 1299,
"ref_block_prefix": 1597594972,
"expiration": "2015-10-02T14:10:42",
"operations": [[
0,{
"fee": {
"amount": 2000000,
"asset_id": "1.3.0"
},
"from": "1.2.7109",
"to": "1.2.8206",
"amount": {
"amount": 1000000000,
"asset_id": "1.3.0"
},
"extensions": []
}
]
],
"extensions": [],
"signatures": [
"203ed7b4ecc9390bb7e2ed949287baa34a8347b5dbd3af12cef271f518c0750d9107bfa3d39f1b8decebd1b91263db5bce275509457e6c36e94afce3ec1b860d91"
]
}
unlocked >>> transfer betaxtrade xeldal 10000 CORE "" true
transfer betaxtrade xeldal 10000 CORE "" true
{
"ref_block_num": 1304,
"ref_block_prefix": 1188092901,
"expiration": "2015-10-02T14:10:57",
"operations": [[
0,{
"fee": {
"amount": 2000000,
"asset_id": "1.3.0"
},
"from": "1.2.7109",
"to": "1.2.86459",
"amount": {
"amount": 1000000000,
"asset_id": "1.3.0"
},
"extensions": []
}
]
],
"extensions": [],
"signatures": [
"1f0d4c5787e586fbbc2e09f93ada861b1f73aff152a11405417d8fadbded1ebaaf7ddabf339109e991cbfc61e10e052b25608b450ae12300292f3d206071d6bcd9"
]
}
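For anyone puzzling over the raw numbers in these transfer dumps: Graphene amounts are integers in the asset's smallest unit. A small sketch, assuming CORE uses a precision of 5 (which is consistent with the 10000 CORE transfers above showing "amount": 1000000000):

```python
# Convert raw Graphene amounts (integer base units) to human-readable
# CORE, assuming a precision of 5 (10^5 base units per CORE) -- this
# matches the numbers in the transfer dumps above.
PRECISION = 5

def to_core(raw_amount):
    return raw_amount / 10 ** PRECISION

print(to_core(1000000000))  # transferred amount -> 10000.0 CORE
print(to_core(2000000))     # fee -> 20.0 CORE
```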
-
wackou up and running, proxy set to dan
get_witness wackou
{
"id": "1.6.17",
"witness_account": "1.2.83349",
"last_aslot": 0,
"signing_key": "GPH8C1Cz3LDu732VT74bYvNE2G25NLghV96zcMnFwLd4Z6aXWup9i",
"vote_id": "1:37",
"total_votes": 0,
"url": "http://digitalgaia.io",
"total_missed": 0
}
-
delegate-1.lafona
Thanks spartako
-
get_witness jtm1
{
"id": "1.6.14",
"witness_account": "1.2.91956",
Up, running and signing blocks, with zero votes for now.
-
sent to riverhead, cube, and xeldal
Thanks betax!
-
http://stats.bitshares.eu/ is up to date
-
Off to the dentist. Witness 'delegate.ihashfury' is ready for votes,
proxy set to 'dan'
get_witness delegate.ihashfury
{
"id": "1.6.18",
"witness_account": "1.2.22473",
"last_aslot": 0,
"signing_key": "GPH8UU5H8z2cEFkRr6gJrP4EBTFLAnmbTjAMXesCbn4Bpm7R33beJ",
"vote_id": "1:38",
"total_votes": 0,
"url": "http://bit.ly/ihashfury",
"total_missed": 0
}
-
delegate-1.lafona is up and ready for votes. Thanks spartako for the CORE.
proxy set to 'dan'
get_witness delegate-1.lafona
{
"id": "1.6.19",
"witness_account": "1.2.22396",
"last_aslot": 0,
"signing_key": "GPH5DCL5nbhL13sXBh1mwQp5pUBSw7rmwjWeiiy5b2Z2UxuYf8spU",
"vote_id": "1:39",
"total_votes": 0,
"url": ""
}
-
Hello good people,
Sorry to join the testing so late, I've been spending the last few days catching up on the progress so far.
I will do my best to get a witness up and running this evening.
spartako, can I please haz some CORE to 'bitspace-testaccount1' for the occasion?
Spectral
-
Thanks spartako
dele-puppy is up and ready for votes.
-
roadscape (1.6.24) is up and ready for votes
get_witness roadscape
{
"id": "1.6.24",
"witness_account": "1.2.67429",
-
Requesting some CORE for registration. My nodes are up, but I need to create_witness.
Thanks in advance.
Edit: Thanks sender.
-
Requesting some CORE for registration. My nodes are up, but I need to create_witness.
Thanks in advance.
Edit: Thanks sender.
Account name?
-
I've imported priv key for xeldal
the balance is 15000 CORE
I get this error when trying to create_witness
create_witness xeldal "" true
10 assert_exception: Assert Exception
it != _keys.end():
{}
th_a wallet.cpp:604 get_private_key
{"owner_account":"xeldal","broadcast":true}
th_a wallet.cpp:1378 create_witness
The only thing I did differently was run suggest_brain_key prior to create_witness, as I needed it last time to update_witness with a new key.
-
It seems the network code is much better than in the previous testnet (1624 tx in a block: 541 tx/sec)
https://graphene.bitshares.org/#/block/2577
(http://i.imgur.com/z0TauGY.png)
-
It seems the network code is much better than in the previous testnet (1624 tx in a block: 541 tx/sec)
https://graphene.bitshares.org/#/block/2577
(http://i.imgur.com/z0TauGY.png)
Wow you already run spamming? :D I will join soon after the PPA is updated.
-
fox and three btsnow witnesses are missing blocks. I hope btsnow has a separate VPS for each witness.
-
Witness "pmc" 1.6.26 up and running. Please vote.
-
It seems the network code is much better than in the previous testnet (1624 tx in a block: 541 tx/sec)
https://graphene.bitshares.org/#/block/2577
I found that the block interval was 6 seconds for that block. So... ~270 tps :(
-
Wow you already run spamming? :D I will join soon after the PPA is updated.
A small test, I don't want spend all my money the first day :P
-
I'm unable to create a witness; any help would be appreciated.
create_witness xeldal "" true
10 assert_exception: Assert Exception
it != _keys.end():
{}
th_a wallet.cpp:604 get_private_key
{"owner_account":"xeldal","broadcast":true}
th_a wallet.cpp:1378 create_witness
-
It seems the network code is much better than in the previous testnet (1624 tx in a block: 541 tx/sec)
https://graphene.bitshares.org/#/block/2577
I found that the block interval was 6 seconds for that block. So... ~270 tps :(
Yes, it is true; some delegates missed their turn and the block interval was 6 seconds.
-
I'm unable to create a witness; any help would be appreciated.
create_witness xeldal "" true
10 assert_exception: Assert Exception
it != _keys.end():
{}
th_a wallet.cpp:604 get_private_key
{"owner_account":"xeldal","broadcast":true}
th_a wallet.cpp:1378 create_witness
Use dump_private_keys to make sure your wallet has them. May have to import them again.
-
riverhead is updated and ready for votes. 1.6.27
-
I'm unable to create a witness; any help would be appreciated.
create_witness xeldal "" true
10 assert_exception: Assert Exception
it != _keys.end():
{}
th_a wallet.cpp:604 get_private_key
{"owner_account":"xeldal","broadcast":true}
th_a wallet.cpp:1378 create_witness
Use dump_private_keys to make sure your wallet has them. May have to import them again.
I deleted my wallet and started over
imported key
create_witness gives same error
dump keys lists 2 keys
GPH6....
5K6....
5k6 is the one i imported.
-
elmato up to date and ready for votes
get_witness elmato
{
"id": "1.6.28",
"witness_account": "1.2.26510",
"last_aslot": 0,
"signing_key": "GPH7TrF5cDX66egLWPLJ7YCnZD5vHeFTgNbPChV8MDQnVKMzDfPrK",
"vote_id": "1:48",
"total_votes": 0,
"url": "http://about:blank",
"total_missed": 0
}
-
Never mind, I got it. The owner key I imported was not the active key... after dumping the 0.9 active key and importing it, create_witness worked.
-
I love how fast the UI responds; I had to make a video ;) https://www.youtube.com/watch?v=Sa9Zmt8aVMM&feature=youtu.be
-
xeldal is up 1.6.29
Is there any way to verify that the keys I'm using are going to work?
Are suggest_brain_key and update_witness always necessary?
If you have more than one account in your wallet, which account do the keys returned from suggest_brain_key refer to? And will they work if you have more than one account in the wallet?
-
It seems the network code is much better than in the previous testnet (1624 tx in a block: 541 tx/sec)
https://graphene.bitshares.org/#/block/2577
I found that the block interval was 6 seconds for that block. So... ~270 tps :(
Yes, it is true; some delegates missed their turn and the block interval was 6 seconds.
1296 tx in block 3435, which is a 3s block -> 432 tps
https://graphene.bitshares.org/#/block/3435
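The tps figures being thrown around here are just transactions divided by the block interval; a quick sketch of the arithmetic from the blocks quoted above:

```python
# Throughput check for the block stats in this thread:
# tps = transactions in block / block interval in seconds.
def tps(tx_count, interval_seconds):
    return tx_count / interval_seconds

print(tps(1624, 3))  # assuming a 3s interval -> ~541 tps
print(tps(1624, 6))  # with the actual 6s interval -> ~270 tps
print(tps(1296, 3))  # block 3435, a 3s block -> 432.0 tps
```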
-
I love how fast the UI responds; I had to make a video ;) https://www.youtube.com/watch?v=Sa9Zmt8aVMM&feature=youtu.be
yeah +5%-- small spelling mistake on video title
Bitshares 2.0 Tes t net UI
-
Should we start publishing price feeds?
I want to test again the forced settlement :)
Where should I look for the feed update script (@xeroc)?
-
Where should I look for the feed update script (@xeroc)?
https://github.com/xeroc/python-graphenelib
-
List your nick here is you want 5000 CORE for register the witness
calabiyau - thank you !
edit:
"id": "1.6.30",
"witness_account": "1.2.14634",
Still need some CORE: 20.05566 CORE required to set voting proxy to 'dan'.
thank you.
-
I accidentally voted everyone out :( I forgot to publish changes. You all will be back in 55 minutes.
-
I accidentally voted everyone out :( I forgot to publish changes. You all will be back in 55 minutes.
aka 'voter apathy test' ;)
-
in.abit is ready.
get_witness in.abit
{
"id": "1.6.31",
"witness_account": "1.2.38993",
"last_aslot": 0,
"signing_key": "GPH65XNUxWdYGqGyW9NtXdRpNntumLYT1cJ7CNE7F78Pwxrnx6cbV",
"vote_id": "1:51",
"total_votes": 0,
"url": "https://github.com/abitmore",
"total_missed": 0
}
-
Sadly there is need for a hardfork for this testnet... this time I promise to upgrade the init nodes.
The issue discovered is that the witness-count code was setting a threshold equal to 50% of all stake rather than 50% of voting stake, so every witness got voted in unless over 50% of the stake voted for fewer than all of them. In other words, the reverse of the case from the prior hardfork.
I set the fork time for 24 hours from now.
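A toy sketch of the threshold bug described above (all numbers are made up for illustration): with low voter turnout, a threshold based on all stake is effectively unreachable, while one based on voting stake behaves as intended:

```python
# Toy numbers (made up) illustrating the threshold bug described above:
# the witness-count threshold was computed from ALL stake instead of
# the stake that actually voted.
total_stake = 1_000_000
voting_stake = 200_000              # only 20% turnout

buggy_threshold = total_stake // 2  # 500,000 -- almost never reached
fixed_threshold = voting_stake // 2 # 100,000 -- reachable by voters

votes_to_shrink_set = 150_000       # stake voting for fewer witnesses

# Under the bug, this vote fails the threshold, so every witness stays in.
print(votes_to_shrink_set >= buggy_threshold)  # False
# Under the fix, the same vote clears the threshold.
print(votes_to_shrink_set >= fixed_threshold)  # True
```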
-
info
{
"head_block_num": 8144,
"head_block_id": "00001fd007dda4aa0252b904036549e43452eba8",
"head_block_age": "23 seconds old",
"next_maintenance_time": "27 minutes in the future",
"chain_id": "c746b258deb5e476601488d8dbb98cf6dcacc2dec857fda58514907257d461c3",
"participation": "98.43750000000000000",
What's wrong?
-
Sadly there is need for a hardfork for this testnet... this time I promise to upgrade the init nodes.
The issue discovered is that the witness-count code was setting a threshold equal to 50% of all stake rather than 50% of voting stake, so every witness got voted in unless over 50% of the stake voted for fewer than all of them. In other words, the reverse of the case from the prior hardfork.
I set the fork time for 24 hours from now.
OK. Building.
in.abit updated.
-
Sadly there is need for a hardfork for this testnet... this time I promise to upgrade the init nodes.
The issue discovered is that the witness-count code was setting a threshold equal to 50% of all stake rather than 50% of voting stake, so every witness got voted in unless over 50% of the stake voted for fewer than all of them. In other words, the reverse of the case from the prior hardfork.
I set the fork time for 24 hours from now.
Is this why my witness was expected to produce blocks as soon as create_witness <accountname> "URL" true completed? I was quite surprised to have 0 votes and missing blocks.
Rebuilding...
-
Sadly there is need for a hardfork for this testnet... this time I promise to upgrade the init nodes.
The issue discovered is that the witness-count code was setting a threshold equal to 50% of all stake rather than 50% of voting stake, so every witness got voted in unless over 50% of the stake voted for fewer than all of them. In other words, the reverse of the case from the prior hardfork.
I set the fork time for 24 hours from now.
spartako updated to last master
-
Sadly there is need for a hardfork for this testnet... this time I promise to upgrade the init nodes.
The issue discovered is that the witness-count code was setting a threshold equal to 50% of all stake rather than 50% of voting stake, so every witness got voted in unless over 50% of the stake voted for fewer than all of them. In other words, the reverse of the case from the prior hardfork.
I set the fork time for 24 hours from now.
Is this why my witness was expected to produce blocks as soon as create_witness <accountname> "URL" true completed? I was quite surprised to have 0 votes and missing blocks.
Rebuilding...
More or less.
-
I had to take down the init node to compile, because I didn't have enough RAM while it was running :(
-
"Normal" price feeding activated.
-
I had to take down the init node to compile, because I didn't have enough RAM while it was running :(
Are all init nodes running on one VPS? The init nodes make up more than 1/3 of all witnesses.. Will that kill the network?
-
"Normal" price feeding activated.
Can you provide "step-by-step" instructions for us?
-
delegate-clayop updated
-
jtm1 updated but missing votes
-
When I try to run the witness I get
witness_node: /usr/lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.20' not found
Using Ubuntu 14.04. I have done the update/upgrade/dist-upgrade.
A google search shows a number of others also have this issue.
What package am I missing? Do I need the ppa:ubuntu-toolchain-r/test PPA?
I tried the PPA no luck. I resolved it with:
apt-get install libstdc++6-4.7-dev
and then...
apt-get install libstdc++6
-
"Normal" price feeding activated.
Can you provide "step-by-step" instructions for us?
I'm running with a modified version of xeroc's price feeding script, nothing else special.
Xeroc's: https://github.com/xeroc/python-graphenelib
Mine: https://github.com/abitmore/python-graphenelib
I had made some efforts to get it running, but don't have time to re-do it. I'll try to remember and post.
* Env: Ubuntu 14.04 LTS x86_64, python3 is pre-installed
* sudo apt-get install python3-numpy python3-prettytable python3-setuptools
* Install AutobahnPython
** git clone https://github.com/tavendo/AutobahnPython.git
** cd AutobahnPython
** git checkout v0.9.6
** cd autobahn
** python3 setup.py install
* git clone https://github.com/abitmore/python-graphenelib.git
** cd python-graphenelib
** python3 setup.py install
** cd scripts
** cp config-example.py config.py
** Edit config.py
** python3 pricefeeds.py
-
spartako, thank you for the CORE :)
I imported account 'bitspace-testaccount1', I think it is working properly, and it is funded.
How much CORE is actually needed to get a witness up and running? It would seem I need 10k to become lifetime member first. After that?
itr->get_balance() >= -delta: Insufficient Balance: bitspace-testaccount1's balance of 5044.50000 CORE is less than required 10000 CORE
{"a":"bitspace-testaccount1","b":"5044.50000 CORE","r":"10000 CORE"}
Also I feel I need to make a comment to everyone contributing since the early times: Great work on treading the ground and on the documentation! The whole procedure is much easier to follow now than it was in the beginning. +5% +5% +5%
-
The latest witness node and GUI now properly indicate how long you should wait for 100% irreversibility.
-
wackou updated to latest master, still waiting for some votes to start producing blocks, thanks!
wackou up and running, proxy set to dan
get_witness wackou
{
"id": "1.6.17",
"witness_account": "1.2.83349",
"last_aslot": 0,
"signing_key": "GPH8C1Cz3LDu732VT74bYvNE2G25NLghV96zcMnFwLd4Z6aXWup9i",
"vote_id": "1:37",
"total_votes": 0,
"url": "http://digitalgaia.io",
"total_missed": 0
}
-
I have a seed node up at 128.199.78.89 for test5. Ran out of time to replicate on the other systems.
I tried to get my witness going; I think it would have been good to go, but I'll have to brush up on getting a wallet, the necessary balance, getting voted into place, etc.
I have a meeting to attend so I have to get going. I'll try to catch up and get a block producing witness tomorrow.
Have a great evening everyone!
-
"Normal" price feeding activated.
Can you provide "step-by-step" instructions for us?
I'm running with a modified version of xeroc's price feeding script, nothing else special.
Xeroc's: https://github.com/xeroc/python-graphenelib
Mine: https://github.com/abitmore/python-graphenelib
I had made some efforts to get it running, but don't have time to re-do it. I'll try to remember and post.
* Env: Ubuntu 14.04 LTS x86_64, python3 is pre-installed
* sudo apt-get install python3-numpy python3-prettytable python3-setuptools
* Install AutobahnPython
** git clone https://github.com/tavendo/AutobahnPython.git
** cd AutobahnPython
** git checkout v0.9.6
** cd autobahn
** python3 setup.py install
* git clone https://github.com/abitmore/python-graphenelib.git
** cd python-graphenelib
** python3 setup.py install
** cd scripts
** cp config-example.py config.py
** Edit config.py
** python3 pricefeeds.py
It is possible to install autobahn with:
apt-get install python3-pip
pip3 install autobahn
pricefeeds.py was crashing with a division-by-zero error some hours ago, but now it is working fine.
publishing pricefeeds now
-
elmato updated to master (93a108487d6499bb84ab2c8815ae463055e9d767)
get_witness elmato
{
"id": "1.6.28",
"witness_account": "1.2.26510",
"last_aslot": 27561,
"signing_key": "GPH7TrF5cDX66egLWPLJ7YCnZD5vHeFTgNbPChV8MDQnVKMzDfPrK",
"pay_vb": "1.13.52",
"vote_id": "1:48",
"total_votes": "155155361606",
"url": "http://about:blank",
"total_missed": 70,
"last_confirmed_block_num": 11003
}
-
Witnesses are now updated. Please add votes for fox.
Thanks
-
Waiting for votes. Thanks
-
Do I need to import a certain type of account to become a lifetime member, like a delegate account or a short-name account? I am wary of dumping the owner keys of a delegate account, or of an account with larger funds, in clear text... should I be doing that?
I'm stuck with 5000 CORE, and can't upgrade. Account: bitspace-testaccount1
-
mr.agsexplorer witness is updated and online, please vote it in
-
Can I have some CORE for spamming? Thanks in advance.
Graphene ID: clayop
-
xeldal updated to master
-
Is there a block explorer ready for BTS 2.0?
-
I could also use some CORE for spamming.
-
bitshares-argentina is ready on master, can I have 5k CORE?
-
I need 1mil CORE for spamming :)
-
After producing blocks, my witness went down this morning.
1908602ms th_a application.cpp:388 handle_block ] Got block #12840 with time 2015-10-03T00:31:48 from network with latency of 604 ms from roadscape
1908673ms th_a application.cpp:432 handle_transaction ] Got transaction from network
[... dozens of similar "Got transaction from network" log lines omitted ...]
1910104ms th_a application.cpp:432 handle_transaction ] Got transaction from network
witness_node: /home/calabiyau/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Aborted (core dumped)
rebuilding....
-
My observer node seems to be stalled:
2278085ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link: 000042dd3748a501c13dee0a0c98aff2adab3bc6, 17117
2278085ms th_a fork_database.cpp:58 push_block ] Head: 17100, 000042cc1ae61f87f5417f66d42218140711fa97
2278086ms th_a application.cpp:416 handle_block ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
{}
th_a fork_database.cpp:79 _push_block
{"new_block":{"previous":"000042dc84a2770bec7112325bef03929676a466","timestamp":"2015-10-03T04:21:36","witness":"1.6.2","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f3a3c7837024faadcac1f00e38147e424cc1b28dba94c9aee5aacffc94196f62056fc51c7ac435762b2eac0a271ab54c350853e1f061e831afc10186c6f25b294","transactions":[]}}
th_a db_block.cpp:195 _push_block
2306158ms th_a application.cpp:699 get_blockchain_synop ] synopsis: ["000042401b2d832712476acd65d286b9cd63e892","00004287828414dd010154dad121998d27f079d7","000042aa00cd6e6868673ab8b25e53aa6550a28b","000042bcaaf7dd2e4ec006534c6a4ade447d7e9d","000042c5e709fdaf2de9875c073b46586e57774e","000042c92bfcfc5b20e1f8e68ddfe8d79e44adc8","000042cbec1178af47476035878ff4410e59dcc8","000042cc1ae61f87f5417f66d42218140711fa97"]
-
I entered a fork 2 hours ago; the UI reports 82 missing blocks. Restarting...
ihashfury, riverhead, spartako, roadscape and pmc.. seem to have had the same problem.
-
Yeah, wiped out my chain directory and restarted the node and now I'm back in sync.
-
bitcube updated to master.
There is 70% participation.
"head_block_num": 19114,
"head_block_id": "00004aaa32c1144c6fec270f046732efda78f63c",
"head_block_age": "3 seconds old",
"next_maintenance_time": "5 minutes in the future",
"chain_id": "c746b258deb5e476601488d8dbb98cf6dcacc2dec857fda58514907257d461c3",
"participation": "70.31250000000000000",
-
My witness stalled in a fork:
{
"head_block_num": 17100,
"head_block_id": "000042cc1ae61f87f5417f66d42218140711fa97",
"head_block_age": "3 hours old",
"next_maintenance_time": "2 hours ago",
"chain_id": "c746b258deb5e476601488d8dbb98cf6dcacc2dec857fda58514907257d461c3",
"participation": "34.37500000000000000",
Resynced and now it is in the main fork:
{
"head_block_num": 19235,
"head_block_id": "00004b23cbfff3788a4bda57e312c453c5e31626",
"head_block_age": "3 seconds old",
"next_maintenance_time": "57 minutes in the future",
"chain_id": "c746b258deb5e476601488d8dbb98cf6dcacc2dec857fda58514907257d461c3",
"participation": "71.87500000000000000",
"active_witnesses": [
-
Started the price feed with xeroc's script. The script was having a 'division by zero' error before I slept, but it is working now.
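A guess at the kind of guard that avoids that division-by-zero (the helper name and the numbers are hypothetical, not taken from xeroc's actual script): a volume-weighted price blows up when total volume is zero, so skip publishing in that case:

```python
# Hypothetical sketch of a zero-volume guard for a volume-weighted
# price feed. The helper and sample data are made up for illustration;
# the actual logic lives in xeroc's pricefeeds.py.
def weighted_price(prices_and_volumes):
    total_volume = sum(v for _, v in prices_and_volumes)
    if total_volume == 0:
        return None  # no trades yet: publish nothing instead of crashing
    return sum(p * v for p, v in prices_and_volumes) / total_volume

print(weighted_price([(0.0042, 1000), (0.0040, 3000)]))  # ~0.00405
print(weighted_price([]))  # None -- would have been a ZeroDivisionError
```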
-
back on track
"head_block_num": 19314,
"head_block_id": "00004b7263c523b80ea7467f5c080998de40d4dc",
"head_block_age": "3 seconds old",
-
My dedicated machine crashed last night and took most domains offline, including stats.bitshares.eu. I will investigate and put them back online later today (including the witness node).
-
Hi there, still need some CORE for bitshares-argentina to create_witness
{
"id": "2.1.0",
"head_block_number": 20005,
"head_block_id": "00004e254509b96a5c08d4839a4b19a811759d59",
"time": "2015-10-03T07:52:30",
"current_witness": "1.6.29",
"next_maintenance_time": "2015-10-03T08:00:00",
"witness_budget": 387000000,
"accounts_registered_this_interval": 0,
"recently_missed_count": 0,
"current_aslot": 38193,
"recent_slots_filled": "317659448096336101568232559415746362303",
"dynamic_flags": 0
}
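The participation percentages in these info dumps come from recent_slots_filled, which (as I understand it) is a 128-slot bitmap recording whether each recent block slot was produced; participation is the fraction of set bits. A sketch under that assumption:

```python
# Sketch: derive the participation percentage from recent_slots_filled,
# assuming it is a 128-slot bitmap (1 = block produced in that slot).
def participation(recent_slots_filled, slots=128):
    return bin(recent_slots_filled)[2:].count("1") / slots * 100

# A fully filled bitmap gives 100% participation;
# two missed slots give 126/128 = 98.4375%, matching a figure upthread.
print(participation(2**128 - 1))           # 100.0
print(participation((2**128 - 1) ^ 0b11))  # 98.4375

# The recent_slots_filled value quoted above:
print(participation(317659448096336101568232559415746362303))
```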
-
Updated and producing blocks again
-
Sorry about the missed blocks
witness 'delegate.ihashfury' and nodes updated with xeroc's feed script running every hour
-
Sorry about the missed blocks
witness 'delegate.ihashfury' and nodes updated with xeroc's feed script running every hour
We all entered a fork, don't worry.
-
Hello,
I recompiled master and re-set up my witness. I participated in testnets 3 and 4, and now 5. I was already a witness in testnet 4 :-)
please vote for my witness:
get_witness mindphlux.witness
{
"id": "1.6.33",
"witness_account": "1.2.92028",
"last_aslot": 0,
"signing_key": "GPH63kshJb47VhEYyJJiWBWKYjPQuQ7bBm2UvTTYhcme2nvsMWCj7",
"vote_id": "1:54",
"total_votes": 0,
"url": "true",
"total_missed": 0,
"last_confirmed_block_num": 0
}
Thanks!
-
1000 TPS :)
https://graphene.bitshares.org/#/block/26232
-
Voting bias detected: witnesses with the digit 5 in their ID are not in active witness group:
info
{
...
"active_witnesses": [
"1.6.2",
...
"1.6.14",
"1.6.16",
...
"1.6.24",
"1.6.26",
...
"1.6.31",
...
],
Who are these digit 5 witnesses?
get_witness fox
{
"id": "1.6.25",
"witness_account": "1.2.30566",
"last_aslot": 23934,
...
"total_votes": "9178845900",
...
"last_confirmed_block_num": 7460
}
get_witness delegate-clayop
{
"id": "1.6.15",
"witness_account": "1.2.22404",
"last_aslot": 23969,
...
"total_votes": "9178845900",
...
"last_confirmed_block_num": 7490
}
Both witnesses have votes, have produced in the past, but neither are in the active witness group at this time. I recall Bytemaster voted out the witness lot, then stated he would vote them all back in, so maybe these were just missed and/or not meeting the minimum vote threshold (I'm not sure how to determine the threshold).
Thanks for looking into this.
Disclaimer: I am one of the digit 5 witnesses
-
Voting bias detected: witnesses with the digit 5 in their ID are not in active witness group:
info
{
...
"active_witnesses": [
"1.6.2",
...
"1.6.14",
"1.6.16",
...
"1.6.24",
"1.6.26",
...
"1.6.31",
...
],
Who are these digit 5 witnesses?
get_witness fox
{
"id": "1.6.25",
"witness_account": "1.2.30566",
"last_aslot": 23934,
...
"total_votes": "9178845900",
...
"last_confirmed_block_num": 7460
}
get_witness delegate-clayop
{
"id": "1.6.15",
"witness_account": "1.2.22404",
"last_aslot": 23969,
...
"total_votes": "9178845900",
...
"last_confirmed_block_num": 7490
}
Both witnesses have votes, have produced in the past, but neither are in the active witness group at this time. I recall Bytemaster voted out the witness lot, then stated he would vote them all back in, so maybe these were just missed and/or not meeting the minimum vote threshold (I'm not sure how to determine the threshold).
Thanks for looking into this.
Disclaimer: I am one of the digit 5 witnesses
+5%
Disclaimer: I am another digit 5 witness :D
-
I am one of those witness IDs... Is there something wrong?
A question:
I set up and ran a witness in the testnet of the 1st october (sadly with latency problems).
Now I would like to move this witness to another VPS... Do I need to re-create the wallet, re-import the private keys and so on, or is there a faster way?
I would also like to know how much it costs to become a delegate now (i.e. to upgrade to lifetime membership) and how much it will cost once BTS 2.0 is out.
Thanks and sorry for the little OT :)
Edit: I think I misunderstood the 5 digit id thing, nevermind :) I was in the active ones
-
Thank you spartako for the CORE; bitshares-argentina is ready to be voted in.
-
2079000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
2079004ms th_a application.cpp:432 handle_transaction ] Got transaction from network
2079264ms th_a application.cpp:432 handle_transaction ] Got transaction from network
witness_node: /home/calabiyau/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Aborted (core dumped)
witness crashed - restart
-
My memory isn't what it used to be. I have the code built and I can launch as a seed, but I can't recall the minimal steps required to become an active witness that produces blocks on graphene.
I have the keys for delegate.verbaltech I used for the early testnet, and a config.ini file I've edited with current info from https://github.com/cryptonomex/graphene/releases/tag/test5. I presume they're still valid now as they were for the early testnet (Is that dictated by the genesis.json file? Why does that need to change each test cycle, is it so that the current state of the BTS 1.0 chain serves as the starting point?).
The witness_id is where I get lost.
Is obtaining a new witness_id for delegate.verbaltech a matter of importing the keys using import_key <accountname> <owner wif key> and requesting to be voted in? Can the wallet .json file exported from 0.9.3c be imported into the graphene cli_wallet? I tried using import_accounts "filename.json" "password" but it fails with "fc::exists( filename ):", though I tried both the full pathname and putting the file where the wallets & config.ini are.
I see several posts requesting CORE to get registered as a witness and for spamming. The spamming I can understand, it takes assets to run tests. For registration though? Don't all 1.0 delegate accounts exist in 2.0 through the genesis block? If people are requesting CORE to get registered is that to avoid the need to import a balance from 1.0?
This wiki page (https://github.com/cryptonomex/graphene/wiki/Howto-import-an-existing-delegate-as-witness-in-BitShares-2.0) says to use "get_witness <delegatename>" to obtain the witnessID from the cli wallet. When I do that it says "No account or witness named delegate.verbaltech"
-
@Thom
For this test net you will need to upgrade your account to lifetime membership and then register it as a witness with the create_witness command.
This will give you a new public/private signing key pair to put in your config.ini.
After create_witness, get_witness will give you your witness ID number.
-
I have the code built and I can launch as a seed, but I can't recall the minimal steps required to become an active witness that produces blocks on graphene.
https://github.com/cryptonomex/graphene/wiki/How%20to%20become%20an%20active%20witness%20in%20BitShares%202.0
-
I have the code built and I can launch as a seed, but I can't recall the minimal steps required to become an active witness that produces blocks on graphene.
https://github.com/cryptonomex/graphene/wiki/How%20to%20become%20an%20active%20witness%20in%20BitShares%202.0
Hey pc, I appreciate the sentiment for helping, but that isn't very helpful. I've seen that page and reviewed it, and all that comes from it is more questions, which neither the docs page nor you really addressed.
Puppies' reply is a totally different set of steps, also not mentioned on that docs page.
Puppies, I tried to use the command you mentioned (create_witness(string owner_account, string url, bool broadcast)), but I don't know how to specify the args. Could you be more specific? It sounds like an account with lifetime membership is a prerequisite. How do I create the "owner account"? Is that where pc's reference comes into play?
If so, can anyone answer whether the import can utilize the 0.9.3 JSON export, and if not, why not? It seems the 1.0 -> 2.0 import should be high on the testing priorities at this stage, not to mention how much easier it would be than the process described in the doc xeroc wrote when testing first started.
-
I have the code built and I can launch as a seed, but I can't recall the minimal steps required to become an active witness that produces blocks on graphene.
https://github.com/cryptonomex/graphene/wiki/How%20to%20become%20an%20active%20witness%20in%20BitShares%202.0
Hey pc, I appreciate the sentiment for helping, but that isn't very helpful. I've seen that page and reviewed it, and all that comes from it is more questions, which neither the docs page nor you really addressed.
Puppies' reply is a totally different set of steps, also not mentioned on that docs page.
Puppies, I tried to use the command you mentioned (create_witness(string owner_account, string url, bool broadcast)), but I don't know how to specify the args. Could you be more specific? It sounds like an account with lifetime membership is a prerequisite. How do I create the "owner account"? Is that where pc's reference comes into play?
If so, can anyone answer whether the import can utilize the 0.9.3 JSON export, and if not, why not? It seems the 1.0 -> 2.0 import should be high on the testing priorities at this stage, not to mention how much easier it would be than the process described in the doc xeroc wrote when testing first started.
BM said he wiped all the delegates/witnesses from the genesis, so you need to re-register your account as a witness on this testnet.
You can still import your 1.0 delegate, it just won't be a witness. I had a look at xeroc's wiki and it does show how to register a witness.
-
spartako, thank you again for sending CORE! I managed to upgrade my account now :)
Unfortunately, I'm still short 5000 CORE to actually create a witness. Could someone please send another 5000 to account:
'bitspace-testaccount1'?
In case anyone wonders, the total CORE amount needed to register a witness is 15000 CORE: 10K for the account upgrade + 5K for creating the witness (unless I'm missing something)
-
spartako, thank you again for sending CORE! I managed to upgrade my account now :)
Unfortunately, I'm still short 5000 CORE to actually create a witness. Could someone please send another 5000 to account:
'bitspace-testaccount1'?
In case anyone wonders, the total CORE amount needed to register a witness is 15000 CORE: 10K for the account upgrade + 5K for creating the witness (unless I'm missing something)
sent
-
Hey pc I appreciate the sentiment for helping, but that isn't very helpful. I've seen that, reviewed that and all that comes from that is more questions which that docs page and you didn't really address.
Sorry, I thought you weren't aware of it. It really contains everything you need.
Puppies, I tried to use the cmd you mentions (create_witness(string owner_account, string url, bool broadcast)), but don't know how to specify the args. Could you be more specific? It sounds like an account with lifetime membership is prerequisite. How do I create the "owner account"? Is that where pc's reference comes into play?
Here's what you do:
1. Dump the private key of delegate.verbaltech, as described in the wiki page
2. Start witness_node and cli_wallet
3. Import the delegate.verbaltech account into the wallet ("Basic Account Management"):
import_key delegate.verbaltech "private key from step 1"
4. Either ask for CORE sent to delegate.verbaltech, or export balance keys in old client and import into cli_wallet, as described in the wiki page
5. Upgrade delegate.verbaltech to lifetime member:
upgrade_account delegate.verbaltech true
6. Create witness:
create_witness delegate.verbaltech "https://bitsharestalk.org/index.php/topic,13837.0.html" true
7. Get witness ID and key:
get_witness delegate.verbaltech
8. Dump private keys and find the private key matching the block signing key:
dump_private_keys
9. Put the witness id and key pair into your witness_node's config.ini and restart.
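To make step 9 concrete, the config.ini entries look roughly like this (all values here are placeholders; use the ID returned by get_witness and the matching key pair from dump_private_keys):

```ini
# Hypothetical example values -- substitute your own witness ID,
# signing public key, and the matching WIF private key:
witness-id = "1.6.XX"
private-key = ["GPH<signing public key>", "5<WIF private key>"]
```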
-
I urgently need CORE (at least 850k) for spam tests. Can someone (@bytemaster ?) give me a favor?
Edit: to 'clayop'
-
In the old client I used wallet_account_balance_ids to get the IDs, then used the first one in the list as an arg to blockchain_get_balance and confirmed it had the correct balance. That also provided the owner key for that balance, which I then used with wallet_dump_private_key to get the private key for that specific balance. All this in the 0.9.3c client as per xeroc's docs.
In the graphene cli_wallet I imported the account delegate.verbaltech with import_key delegate.verbaltech <account private key>. I then tried to import the balance:
import_balance delegate.verbaltech [<balance private key from 0.9.3c>] true
2850496ms th_a wallet.cpp:3138 import_balance ] balances: []
10 assert_exception: Assert Exception
operations.size() > 0: A transaction must have at least one operation
{"trx":{"ref_block_num":0,"ref_block_prefix":0,"expiration":"1970-01-01T00:00:00","operations":[],"extensions":[]}}
th_a transaction.cpp:51 validate
{"name_or_id":"delegate.verbaltech"}
th_a wallet.cpp:3179 import_balance
There's always something in the way - URGH! What crazy little syntax error causes this? The log shows balances: [], so the wallet apparently found no balance objects for that key.
-
Thanks pc! The only snag seems to be the import. I tried many variations of the args (quotes, commas, brackets, etc.) but can't figure it out.
Given the difficulty of transfering the balance, would some CORE whale out there send 15K CORE to delegate.verbaltech so I can upgrade and register?
Thanks !!!
-
bump bump...
-
thanks to spartako(again!) and jtm1 for the CORE!
Witness is up and running now, please vote! :)
get_witness bitspace-testaccount1
{
"id": "1.6.35",
"witness_account": "1.2.8876",
"last_aslot": 0,
"signing_key": "GPH8B2qHWXkRiJpthDcq1TGv3MQEMzuRz6wXRnk2rFyLWLUL4ZYtA",
"vote_id": "1:56",
"total_votes": 1002444239,
"url": "bitspace.no",
"total_missed": 0
}
Can anyone tell me how I can see voting information/percentages/standings? I see that I have 1002444239 votes already, but what does that number mean?
@Thom, it's not much, but...
transfer bitspace-testaccount1 delegate.verbaltech 5000 CORE "passing on the love" true
{
"ref_block_num": 36472,
"ref_block_prefix": 2363118095,
Edit: w00t 4K txs in one block!
http://imgur.com/gallery/4BAxf4n (http://imgur.com/gallery/4BAxf4n)
-
Thanks Spectral! Perhaps I can at least create the witness. If abit joins in later perhaps he can provide the rest of the CORE I need.
Sure wish I knew why the balance import failed. It seems like the command is missing a parameter (operation?) or something.
-
Network seems dead (due to my spam...)
-
My node died with an assertion exception.
3133130ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
witness_node: /app/bts/graphene-test5.1/libraries/net/node.cpp:1594: void graphene::net::detail::node_impl::schedule_peer_for_deletion(const peer_connection_ptr&): Assertion `_closing_connections.find(peer_to_delete) == _closing_connections.end()' failed.
Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7ffff4d02700 (LWP 16058)]
0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) where
#0 0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007ffff6c050d8 in __GI_abort () at abort.c:89
#2 0x00007ffff6bfab86 in __assert_fail_base (fmt=0x7ffff6d4b830 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=assertion@entry=0x2c535c8 "_closing_connections.find(peer_to_delete) == _closing_connections.end()",
file=file@entry=0x2c524b8 "/app/bts/graphene-test5.1/libraries/net/node.cpp", line=line@entry=1594,
function=function@entry=0x2c5b520 <graphene::net::detail::node_impl::schedule_peer_for_deletion(std::shared_ptr<graphene::net::peer_connection> const&)::__PRETTY_FUNCTION__> "void graphene::net::detail::node_impl::schedule_peer_for_deletion(const peer_connection_ptr&)") at assert.c:92
#3 0x00007ffff6bfac32 in __GI___assert_fail (
assertion=0x2c535c8 "_closing_connections.find(peer_to_delete) == _closing_connections.end()",
file=0x2c524b8 "/app/bts/graphene-test5.1/libraries/net/node.cpp", line=1594,
function=0x2c5b520 <graphene::net::detail::node_impl::schedule_peer_for_deletion(std::shared_ptr<graphene::net::peer_connection> const&)::__PRETTY_FUNCTION__> "void graphene::net::detail::node_impl::schedule_peer_for_deletion(const peer_connection_ptr&)")
at assert.c:101
#4 0x00000000028ab435 in graphene::net::detail::node_impl::schedule_peer_for_deletion (this=0xb3b7e10, peer_to_delete=...)
at /app/bts/graphene-test5.1/libraries/net/node.cpp:1594
#5 0x00000000028c36e0 in graphene::net::detail::node_impl::on_connection_closed (this=0xb3b7e10, originating_peer=0x7fffe014f340)
at /app/bts/graphene-test5.1/libraries/net/node.cpp:2976
#6 0x00000000029a4368 in graphene::net::peer_connection::on_connection_closed (this=0x7fffe014f340,
originating_connection=0x7fffe014f370) at /app/bts/graphene-test5.1/libraries/net/peer_connection.cpp:269
#7 0x00000000029aa037 in graphene::net::detail::message_oriented_connection_impl::read_loop (this=0x7fffe0375d20)
at /app/bts/graphene-test5.1/libraries/net/message_oriented_connection.cpp:217
#8 0x00000000029a9917 in graphene::net::detail::message_oriented_connection_impl::__lambda0::operator() (__closure=0x7fffe052e778)
at /app/bts/graphene-test5.1/libraries/net/message_oriented_connection.cpp:117
#9 0x00000000029aebb0 in fc::detail::void_functor_run<graphene::net::detail::message_oriented_connection_impl::accept()::__lambda0>::run(void *, void *) (functor=0x7fffe052e778, prom=0x7fffe052e770)
at /app/bts/graphene-test5.1/libraries/fc/include/fc/thread/task.hpp:83
#10 0x00000000026cdedf in fc::task_base::run_impl (this=0x7fffe052e780)
at /app/bts/graphene-test5.1/libraries/fc/src/thread/task.cpp:43
#11 0x00000000026cde70 in fc::task_base::run (this=0x7fffe052e780) at /app/bts/graphene-test5.1/libraries/fc/src/thread/task.cpp:32
#12 0x00000000026c2890 in fc::thread_d::run_next_task (this=0x7fffe00008c0)
at /app/bts/graphene-test5.1/libraries/fc/src/thread/thread_d.hpp:498
#13 0x00000000026c2d34 in fc::thread_d::process_tasks (this=0x7fffe00008c0)
at /app/bts/graphene-test5.1/libraries/fc/src/thread/thread_d.hpp:547
#14 0x00000000026c23a7 in fc::thread_d::start_process_tasks (my=140736951486656)
at /app/bts/graphene-test5.1/libraries/fc/src/thread/thread_d.hpp:475
#15 0x0000000002a1dee1 in make_fcontext () at libs/context/src/asm/make_x86_64_sysv_elf_gas.S:64
#16 0x0000000000000000 in ?? ()
-
3128991ms th_a application.cpp:432 handle_transaction ] Got transaction from network
3128994ms th_a application.cpp:432 handle_transaction ] Got transaction from network
3128996ms th_a application.cpp:432 handle_transaction ] Got transaction from network
3128999ms th_a application.cpp:432 handle_transaction ] Got transaction from network
3142070ms th_a db_block.cpp:189 _push_block ] Failed to push new block:
3070000 undo_database_exception: undo database exception
The database does not have enough undo history to support a blockchain with so many missed blocks. Please add a checkpoint if you would like to continue applying blocks beyond this point.
{"recently_missed":1277,"max_undo":1000}
th_a db_update.cpp:86 update_global_dynamic_data
{"next_block.block_num()":36473}
th_a db_block.cpp:508 _apply_block
Killed
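The failure in that log follows a simple rule: a node keeps a bounded undo history (max_undo blocks), and once more consecutive slots than that have been missed it can no longer rewind, so it refuses further blocks. A minimal sketch of that guard, based on my reading of the error message rather than the actual graphene code:

```python
def can_push_block(recently_missed: int, max_undo: int = 1000) -> bool:
    """Sketch of the check behind undo_database_exception: once more
    blocks have been missed than the undo history can cover, the node
    cannot apply new blocks without a manual checkpoint."""
    return recently_missed <= max_undo

# The log above reports recently_missed=1277 against max_undo=1000:
print(can_push_block(1277))   # False -> the chain halts here
print(can_push_block(999))    # True
```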
-
Unable to connect to the seed node "104.236.51.238:2001". Any idea?
-
Witness is running, just need to be voted in:
get_witness delegate.verbaltech
{
"id": "1.6.36",
"witness_account": "1.2.22503",
"last_aslot": 0,
"signing_key": "GPH6oUevPDj52JK67E199jBAMBuh6CQBQGgkyLtoXKiAvTVCpvYU7",
"vote_id": "1:57",
"total_votes": 0,
"url": "https://bitsharestalk.org/index.php/topic,13837.0.html",
"total_missed": 0
}
+5% to whoever sent the 20K CORE, much appreciated!
edit: Woops, forgot to upgrade! Doing that now... Done.
-
Why is my chain not able to sync??...
./cli_wallet -w test_wallet --chain-id c746b258deb5e476601488d8dbb98cf6dcacc2dec857fda58514907257d461c3
unlocked >>> info
info
{
"head_block_num": 0,
"head_block_id": "0000000000000000000000000000000000000000",
"head_block_age": "51 hours old",
"next_maintenance_time": "45 years ago",
"chain_id": "c746b258deb5e476601488d8dbb98cf6dcacc2dec857fda58514907257d461c3",
"participation": "100.00000000000000000",
"active_witnesses": [
"1.6.1",
"1.6.2",
"1.6.3",
"1.6.4",
"1.6.5",
"1.6.6",
"1.6.7",
"1.6.8",
"1.6.9",
"1.6.10",
"1.6.11"
],
"active_committee_members": []
-
Mine is 4 hours old!
-
I woke up and found my witness node stuck at
{
"head_block_num": 36472,
"head_block_id": "00008e780f52da8c7a599c3a1d0b8dfeaf3a331d",
"head_block_age": "5 hours old",
"next_maintenance_time": "4 hours ago",
"chain_id": "c746b258deb5e476601488d8dbb98cf6dcacc2dec857fda58514907257d461c3",
"participation": "21.87500000000000000",
I found the error from the console log.
1743466ms th_a db_block.cpp:189 _push_block ] Failed to push new block:
3070000 undo_database_exception: undo database exception
The database does not have enough undo history to support a blockchain with so many missed blocks. Please add a checkpoint if you would like to continue applying blocks beyond this point.
{"recently_missed":23405,"max_undo":1000}
th_a db_update.cpp:86 update_global_dynamic_data
{"next_block.block_num()":36473}
th_a db_block.cpp:508 _apply_block
It seems the network cannot proceed due to 'too many missed blocks' - possibly due to the spam.
-
early morning:
{
"head_block_num": 36472,
"head_block_id": "00008e780f52da8c7a599c3a1d0b8dfeaf3a331d",
"head_block_age": "6 hours old",
"next_maintenance_time": "6 hours ago",
"chain_id": "c746b258deb5e476601488d8dbb98cf6dcacc2dec857fda58514907257d461c3",
"participation": "21.87500000000000000",
"active_witnesses": [
edit: p2p log
2015-10-04T06:33:19 p2p:message read_loop broadcast ] broadcasting trx: {"trx":{"ref_block_num":36472,"ref_block_prefix":2363118095,"expiration":"2015-10-03T23:49:03","operations":[[19,{"fee":{"amount":100000,"asset_id":"1.3.0"},"publisher":"1.2.91956","asset_id":"1.3.379","feed":{"settlement_price":{"base":{"amount":211,"asset_id":"1.3.379"},"quote":{"amount":63128,"asset_id":"1.3.0"}},"maintenance_collateral_ratio":1750,"maximum_short_squeeze_ratio":1500,"core_exchange_rate":{"base":{"amount":66284,"asset_id":"1.3.0"},"quote":{"amount":211,"asset_id":"1.3.379"}}},"extensions":[]}]],"extensions":[],"signatures":["1f5bb7a3f99bbd9dc22df0605758ce021a29a1b880f7e034edcd880d1b5e5c45622748b20976134e0a7faf34845058fc451654cae1f0d3a89f16cdc864a5f434e0"]}} node.cpp:4839
-
Sorry guys... I killed the testnet with spamming :'(
-
Sorry guys... I killed the testnet with spamming :'(
that's becoming a success = )
-
It would be nice if these stress tests were preannounced, so we could observe the nodes' behaviour (CPU/IO load etc.) and bytemaster could set up a new testnet if things go wrong and the network dies.
Before the last stress test (when the network was mostly idle), I noticed my witness was constantly consuming 10-15% CPU, which was strange, and it was also missing some blocks. So I switched to a backup witness. The new witness was under 1% CPU but reported an extra 20 missed blocks, although the switchover via update_witness was immediate, without any block loss. So it seems the original witness somehow lost track of its missed blocks too.
Also, would it not be better to run the stress tests without debug mode on, to get better performance?
-
(http://i.imgur.com/djZ02U8.png)
last signs of life .....
-
It would be nice if these stress tests were preannounced, so we could observe the nodes' behaviour (CPU/IO load etc.) and bytemaster could set up a new testnet if things go wrong and the network dies.
Before the last stress test (when the network was mostly idle), I noticed my witness was constantly consuming 10-15% CPU, which was strange, and it was also missing some blocks. So I switched to a backup witness. The new witness was under 1% CPU but reported an extra 20 missed blocks, although the switchover via update_witness was immediate, without any block loss. So it seems the original witness somehow lost track of its missed blocks too.
Also, would it not be better to run the stress tests without debug mode on, to get better performance?
Yeah, unless you want to find bugs first, which they are going to have to do anyway.
-
I assume the network is still dead... or is the problem local to me?
-
It's dead for me. I tried --resync-blockchain but it didn't work. Not getting any blocks from the seed node.
-
My witness (which was not voted in) currently has the following output, and just keeps going like that... 'bitspace-testaccount1'
3221000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
3222000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
3223000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
3224000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
The CLI interface is still working.
-
Can't we restart the network with checkpoints instead of waiting for bm to do it? I'm assuming the stress testing just delayed some nodes' block production and caused forking until no branch had minimum participation.
-
Can't we restart the network with checkpoints instead of waiting for bm to do it? I'm assuming the stress testing just delayed some nodes' block production and caused forking until no branch had minimum participation.
Sounds reasonable, so how do you determine a correct checkpoint parameter value to use in the config.ini file? I have several VPSs ready to go that will serve as a seed once that is determined.
-
Can't we restart the network with checkpoints instead of waiting for bm to do it? I'm assuming the stress testing just delayed some nodes' block production and caused forking until no branch had minimum participation.
Yes, I think this is the problem, but how do we do that? If we use --enable-stale-production, everyone creates their own fork, so we need a way to coordinate the witnesses, but I don't know how.
-
After updating from test4 to test5, I get this when I try to run the CLI:
"Chain ID in wallet file does not match specified chain ID"
What should I do?
-
Can't we restart the network with checkpoints instead of waiting for bm to do it? I'm assuming the stress testing just delayed some nodes' block production and caused forking until no branch had minimum participation.
Sounds reasonable, so how do you determine a correct checkpoint parameter value to use in the config.ini file? I have several VPSs ready to go that will serve as a seed once that is determined.
I'm not sure, but I think we need all witnesses to add this checkpoint to their config files: ["36472", "00008e780f52da8c7a599c3a1d0b8dfeaf3a331d"]
Then we probably need one person to enable stale production, and then it's possible that everyone will need to additionally add the first new block on the stale chain as a checkpoint.
Regardless, this is something we need to test and figure out how to do now on the test network. We shouldn't plan on being totally reliant on bm to fix it if anything breaks.
-
Dan explained it early in the first test net thread. The issue we will have is that without the init witnesses we will crash again quickly.
We could do a dirty hack and increase the allowed missed blocks as well, but that might not be the best idea.
Actually there is a trick to restarting block production after so much time has passed.
First add the following checkpoint:
[HEADNUM+1, "00000000.....00"]
With that checkpoint you will be able to produce the next block. Once you have produced it, other nodes can add a checkpoint with that freshly produced block and you will be up and running again.
Every time a checkpoint is reached, it resets the required undo history to 0. Adding a checkpoint at HEADNUM would reset it to 0, but the next block you produce will have missed over 1000 blocks and so is immediately beyond reach. Therefore we need to "checkpoint" the "next block", for which we do not know the ID yet.
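If I understand BM correctly, the config.ini lines would look something like this (36472 is the last head block reported in this thread, and my reading of "00000000.....00" is an all-zero placeholder ID, to be replaced with the real ID once someone has produced the block):

```ini
# Everyone pins the last known good block:
checkpoint = ["36472","00008e780f52da8c7a599c3a1d0b8dfeaf3a331d"]
# HEADNUM+1 with a placeholder ID, so one node can produce past the gap:
checkpoint = ["36473","0000000000000000000000000000000000000000"]
```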
-
After updating from test4 to test5, I get this when I try to run the CLI:
"Chain ID in wallet file does not match specified chain ID"
What should I do?
rm wallet.json
-
OK, so are we ready to coordinate and establish a new checkpoint?
Has that already been done?
If not, I think we should set a time, identify whose block will be the first one for the new checkpoint, and then we can set up new seed nodes.
-
I have a registered witness, but I'm not yet voted in: 'bitspace-testaccount1'. Is there anything I can do to help?
I can check back here every half hour or so today.
-
OK, so are we ready to coordinate and establish a new checkpoint?
Has that already been done?
If not, I think we should set a time, identify whose block will be the first one for the new checkpoint, and then we can set up new seed nodes.
I am trying to reset locally. If I can, I will post directions. Although if we leave the allowed missed blocks at 1000, the network won't last long.
-
rm wallet.json
Thanks puppies, it works now!
Can someone send me some CORE for the account upgrade? Thanks in advance :)
account name: bhuz
-
rm wallet.json
Thanks puppies, it works now!
Can someone send me some CORE for the account upgrade? Thanks in advance :)
account name: bhuz
With the network down, there's no way to send it.
-
lol, you are right xD
-
OK, so are we ready to coordinate and establish a new checkpoint?
Has that already been done?
If not, I think we should set a time, identify whose block will be the first one for the new checkpoint, and then we can set up new seed nodes.
I am trying to reset locally. If I can, I will post directions. Although if we leave the allowed missed blocks at 1000, the network won't last long.
will try to catch up if there is a way
-
It has been a whole day and my block number is staying at 0... something is wrong.
locked >>> info
info
{
"head_block_num": 0,
"head_block_id": "0000000000000000000000000000000000000000",
"head_block_age": "64 hours old",
"next_maintenance_time": "45 years ago",
"chain_id": "c746b258deb5e476601488d8dbb98cf6dcacc2dec857fda58514907257d461c3",
"participation": "100.00000000000000000",
"active_witnesses": [
"1.6.1",
"1.6.2",
"1.6.3",
"1.6.4",
"1.6.5",
"1.6.6",
"1.6.7",
"1.6.8",
"1.6.9",
"1.6.10",
"1.6.11"
],
"active_committee_members": []
}
locked >>>
-
(http://i.imgur.com/djZ02U8.png)
last signs of life .....
I'm wondering what these witnesses' VPS specs are.
-
(http://i.imgur.com/djZ02U8.png)
last signs of life .....
I'm wondering what these witnesses' VPS specs are.
What is the required minimum initial VPS spec?
-
What exactly are you guys asking? (iHashFury, clayop)
Are you asking about config.ini values, RAM/CPU/Network/OpSys or what exactly?
I'm sure you have something in mind, some hypothesis. Care to share?
-
OK, so are we ready to coordinate and establish a new checkpoint?
Has that already been done?
If not, I think we should set a time, identify whose block will be the first one for the new checkpoint, and then we can set up new seed nodes.
I am trying to reset locally. If I can, I will post directions. Although if we leave the allowed missed blocks at 1000, the network won't last long.
will try to catch up if there is a way
I can't get it to work without enabling stale block production. We will have to wait for someone else to figure it out. I have to go to work. I'll try to keep an eye on this thread, but will probably be pretty busy.
(http://i.imgur.com/djZ02U8.png)
last signs of life .....
I'm wondering what these witnesses' VPS specs are.
My witness is currently running on a dedicated server with an Intel Xeon quad core at 3.2GHz and 16GB RAM. I still miss random blocks. Perhaps it's due to networking and/or storage, as the server has a mechanical HDD.
-
Minimum VPS RAM/swap/CPUs/network? So witnesses know what minimum hardware spec is required.
-
Yes, we need to describe these physical specs and weigh their performance under various load conditions.
Wackou and I have 7 VPS servers so far, all on the low end of the spectrum, relatively speaking, the lowest being 1GB RAM. Wackou will have to give you the specs for the VPSs he has set up; the most recent ones were on Vultr with 1GB RAM.
I have 4:
Vultr: 4GB RAM / dual core
Crown Cloud 1: 2GB RAM / dual core
Crown Cloud 2: 4GB RAM / dual core
Bithost: 2GB RAM / dual core
The above is from memory, I can be more specific when I get back to my workstation.
BM doesn't seem to think our VPS specs should be difficult to meet. Wackou chose 1GB b/c of the low RAM use he observed in testing.
These specs are much more important while testing, primarily the spam testing. This is why the requirements for heavy load testing are different from those for functionally testing transaction variations. It's all necessary testing; it's a matter of coordination. IMO we should first look to verify functionality, then look at performance. If something fails it might become difficult to determine whether it's a feature logic issue or whether it's due to timing / network / VPS limitations.
-
What exactly are you guys asking? (iHashFury, clayop)
Are you asking about config.ini values, RAM/CPU/Network/OpSys or what exactly?
I'm sure you have something in mind, some hypothesis. Care to share?
I'm thinking that the minimum requirements for 1000 TPS may be higher than BM expected.
-
You may be right clayop, but do we need to concern ourselves with that right now? IMO it's more important to address the functionality and feature set that will be available at launch.
Out of the gate we probably won't require 1000 TPS. I do recognize that is only 10% of our claims, and if we DID have a massive response on the launch and we couldn't handle it, it would be very bad indeed.
I'm not sure where the best balance is and what our testing goals should be, but I think we need to get serious very quickly and figure that out, devise a plan to reach those goals and coordinate our efforts to achieve them.
-
I'm not sure where the best balance is and what our testing goals should be, but I think we need to get serious very quickly and figure that out, devise a plan to reach those goals and coordinate our efforts to achieve them.
+5%
That sounds about right. A launch target TPS could be specified and the minimum requirement to meet that target identified, so the recommended minimum witness specification would be ready at launch.
Maybe there should even be a max TPS cap (possibly a votable parameter) to protect the network from overload. Is spam an attack vector into BitShares?
-
You may be right clayop, but do we need to concern ourselves with that right now? IMO it's more important to address the functionality and feature set that will be available at launch.
Out of the gate we probably won't require 1000 TPS. I do recognize that is only 10% of our claims, and if we DID have a massive response on the launch and we couldn't handle it, it would be very bad indeed.
I'm not sure where the best balance is and what our testing goals should be, but I think we need to get serious very quickly and figure that out, devise a plan to reach those goals and coordinate our efforts to achieve them.
Exactly... and if each tx costs $0.01, a 100 TPS "attack" costs $1/second, or $3,600/hr. :)
-
You may be right clayop, but do we need to concern ourselves with that right now? IMO it's more important to address the functionality and feature set that will be available at launch.
Out of the gate we probably won't require 1000 TPS. I do recognize that is only 10% of our claims, and if we DID have a massive response on the launch and we couldn't handle it, it would be very bad indeed.
I'm not sure where the best balance is and what our testing goals should be, but I think we need to get serious very quickly and figure that out, devise a plan to reach those goals and coordinate our efforts to achieve them.
Exactly... and if each tx costs $0.01, a 100 TPS "attack" costs $1/second, or $3,600/hr. :)
Well, that's not really expensive... 3600 USD buys you:
1 hour of a 100 TPS attack, or... 30 seconds of a 12000 TPS attack. Is that not enough to bring down the network?
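The arithmetic in these posts is easy to sanity-check with a throwaway helper (the $0.01 fee and the rates are the figures quoted in the discussion above, not real network parameters):

```python
def spam_cost_usd(tps: float, seconds: float, fee_usd: float = 0.01) -> float:
    """Cost of sustaining `tps` transactions per second for `seconds`,
    at `fee_usd` per transaction (the $0.01 figure quoted above)."""
    return tps * seconds * fee_usd

print(spam_cost_usd(100, 3600))   # one hour at 100 TPS: ~$3,600
print(spam_cost_usd(12000, 30))   # 30 seconds at 12000 TPS: also ~$3,600
```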
-
You may be right clayop, but do we need to concern ourselves with that right now? IMO it's more important to address the functionality and feature set that will be available at launch.
Out of the gate we probably won't require 1000 TPS. I do recognize that is only 10% of our claims, and if we DID have a massive response on the launch and we couldn't handle it, it would be very bad indeed.
I'm not sure where the best balance is and what our testing goals should be, but I think we need to get serious very quickly and figure that out, devise a plan to reach those goals and coordinate our efforts to achieve them.
Exactly.. and if each tx costs $0.01, 100 TPS "attacks" cost $1/second, or $3,600/hr. :)
Well, that's not really expensive... 3600 USD will give you:
1 hour of 100TPS attack or... 30 seconds of 12000 TPS attack. Is that not enough to bring down the network?
We can thank them for the money and resurrect the network from the last block :D
-
You may be right clayop, but do we need to concern ourselves with that right now? IMO it's more important to address the functionality and feature set that will be available at launch.
Out of the gate we probably won't require 1000 TPS. I do recognize that is only 10% of our claims, and if we DID have a massive response on the launch and we couldn't handle it, it would be very bad indeed.
I'm not sure where the best balance is and what our testing goals should be, but I think we need to get serious very quickly and figure that out, devise a plan to reach those goals and coordinate our efforts to achieve them.
I agree with the point that we don't need 1000 TPS at the beginning. But it shouldn't become an attack vector against witness nodes, so we have to find a way to prevent that (e.g. capping maximum TPS).
FYI, based on the current BTS price, the 'attack' requires $18,000 up front but only costs about $3,700.
-
You may be right clayop, but do we need to concern ourselves with that right now? IMO it's more important to address the functionality and feature set that will be available at launch.
Out of the gate we probably won't require 1000 TPS. I do recognize that is only 10% of our claims, and if we DID have a massive response on the launch and we couldn't handle it, it would be very bad indeed.
I'm not sure where the best balance is and what our testing goals should be, but I think we need to get serious very quickly and figure that out, devise a plan to reach those goals and coordinate our efforts to achieve them.
Exactly.. and if each tx costs $0.01, 100 TPS "attacks" cost $1/second, or $3,600/hr. :)
Well, that's not really expensive... 3600 USD will give you:
1 hour of 100TPS attack or... 30 seconds of 12000 TPS attack. Is that not enough to bring down the network?
I killed the network with 15 minutes of 1000 TPS.
-
What kind of transactions were you making, clayop?
IIRC, transfers will cost approx $0.20 for non-members and $0.04 otherwise.. at 4 cents, 1000 TPS for 15 mins is $36,000.
Well, that's not really expensive... 3600 USD will give you:
1 hour of 100TPS attack or... 30 seconds of 12000 TPS attack. Is that not enough to bring down the network?
True, good point.. perhaps eventually the network will automatically scale up fees.
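The $36,000 figure above follows from the quoted fee tiers. A quick check, assuming the fee levels mentioned in the post ($0.20 per transfer for non-members, $0.04 for members — thread figures, not an official fee schedule):

```python
# Cost of the 15-minute, 1000 TPS flood described above, per fee tier.
# ASSUMPTION: transfer fees of $0.20 (non-member) / $0.04 (member), as quoted.

FEES_CENTS = {"non_member": 20, "member": 4}  # USD cents per transfer

def flood_cost_usd(tps, minutes, tier):
    """USD spent flooding at `tps` for `minutes` under a given fee tier."""
    return tps * minutes * 60 * FEES_CENTS[tier] / 100

print(flood_cost_usd(1000, 15, "member"))      # 36000.0  -> the $36,000 figure
print(flood_cost_usd(1000, 15, "non_member"))  # 180000.0 -> five times that
```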
-
What kind of transactions were you making, clayop?
IIRC, transfers will cost approx $0.20 for non-members and $0.04 otherwise.. at 4 cents, 1000 TPS for 15 mins is $36,000.
Selling assets only costs 5 CORE, versus the 20 CORE for a transfer.
-
Vultr with 1 GB RAM, 1 CPU - only running minimal services and witness_node.
This suggests 2 CPUs might be better for 1000 TPS, as CPU usage peaks at about 140%.
I will double my specs for the next test and post the results of the stress test.
Any thoughts?
(http://i.imgur.com/O45bBUo.jpg)
-
I will double my specs for the next test and post the results of the stress test.
Great. Test witnesses may have to post their VPS specs.
-
My test witness runs on a dedicated server with 8 cores and 16 GB of RAM.
That should be enough for high performance.
-
delegate-clayop
1 CPU (Intel Ivy Bridge)
3.75 GB memory
-
In the old client I used wallet_account_balance_ids to get the IDs, then used the first one in the list as an arg to blockchain_get_balance and confirmed it had the correct balance. That also provided the owner key for that balance, which I then used with wallet_dump_private_key to get the private key for that specific balance. All this in the 0.9.3c client as per xeroc's docs.
In the Graphene cli_wallet I created the account delegate.verbaltech with import_key delegate.verbaltech <account private key>. I then tried to import the balance:
import_balance delegate.verbaltech [<balance private key from 0.9.3c>] true
2850496ms th_a wallet.cpp:3138 import_balance ] balances: []
10 assert_exception: Assert Exception
operations.size() > 0: A transaction must have at least one operation
{"trx":{"ref_block_num":0,"ref_block_prefix":0,"expiration":"1970-01-01T00:00:00","operations":[],"extensions":[]}}
th_a transaction.cpp:51 validate
{"name_or_id":"delegate.verbaltech"}
th_a wallet.cpp:3179 import_balance
There's always something in the way - URG! What crazy little syntactical error causes this?
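For anyone scripting this workflow rather than typing into cli_wallet, the same import_balance call can be issued over the wallet's JSON-RPC interface. A minimal sketch, assuming cli_wallet was started with an HTTP RPC endpoint (e.g. -H 127.0.0.1:8092); the account name, key placeholder, and URL are illustrative, not real values:

```python
# Build (and optionally send) a JSON-RPC call to cli_wallet's import_balance.
# ASSUMPTIONS: cli_wallet is running with an HTTP RPC endpoint (e.g. -H 127.0.0.1:8092);
# the key and URL below are placeholders.
import json
import urllib.request

def rpc_payload(method, params, request_id=1):
    """JSON-RPC 2.0 request body for a cli_wallet API call."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

def call_wallet(url, payload):
    """POST the payload to a running cli_wallet HTTP endpoint."""
    req = urllib.request.Request(url, data=json.dumps(payload).encode(),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

payload = rpc_payload("import_balance",
                      ["delegate.verbaltech", ["<balance private key>"], True])
# call_wallet("http://127.0.0.1:8092", payload)  # uncomment against a live wallet
print(payload["method"])  # import_balance
```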
It seems the 02-oct snapshot was not actually made on October 2nd .. I am also missing many of the funds that I thought would be there ..
-
2046483ms th_a application.cpp:516 get_item ] Couldn't find block 00008e220adc1561e0ceb4964000000000000000 -- corresponding ID in our chain is 00008e220adc1561e0ceb496e2fe61effc44196e
2046486ms th_a application.cpp:432 handle_transaction ] Got transaction from network
./run.sh: line 1: 8080 Segmentation fault ./witness_node --genesis-json "oct-02-testnet-genesis.json" -d oct02 -w \""1.6.0"\" -w \""1.6.1"\" -w \""1.6.2"\" -w \""1.6.3"\" -w \""1.6.4"\" -w \""1.6.5"\" -w \""1.6.6"\" -w \""1.6.7"\" -w \""1.6.8"\" -w \""1.6.9"\" -w \""1.6.10"\" -w \""1.6.11"\" --replay-blockchain
This is what the init node said before it died during the flood. We are looking into what could have caused it.
As far as release plans go, we will protect the network from excessive flooding by rate limiting transaction throughput in the network code. We recently made a change that allowed a peer to fetch more than one transaction at a time and that change is what allowed us to hit 1000+ TPS. That change had the side effect of making the network vulnerable to flooding. For the BTS2 network we will revert to only fetching 1 transaction at a time which will limit throughput of the network to under 100 TPS. This will only be a limit in the P2P code which can be upgraded at any time without requiring a hard fork.
If the network is generating anywhere near 100 TPS then it will be earning more than $1M per day in fees and our market cap would be closer to Bitcoin's market cap. In other words, this should have zero impact on customer experience over the next several months. By the time we start gaining traction like that we will have worked through the kinks of getting a higher-throughput network layer.
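The throughput cap bytemaster describes — fetching one transaction per request from a peer — can be modelled as a simple bound: a node cannot sync faster than its fetch window divided by the peer round-trip time. A rough sketch; this is illustrative only, not the actual Graphene P2P code, and the round-trip figure is made up:

```python
# Toy model of the pull-based fetch limit described above: a node requests
# `fetch_window` transactions per round trip to a peer, so sync throughput
# is bounded no matter how hard an attacker floods.
# NOT the real Graphene P2P code; the round-trip time is a made-up figure.

def max_sync_tps(fetch_window, round_trip_ms):
    """Upper bound on transactions fetched per second from a single peer."""
    return fetch_window * 1000 / round_trip_ms

print(max_sync_tps(1, 20))   # 50.0   -> one-at-a-time keeps throughput under 100 TPS
print(max_sync_tps(50, 20))  # 2500.0 -> batched fetching removes the bound
```

Because the bound lives entirely in the sync loop, raising it later is a pure P2P-layer change, which matches the post's point that no hard fork is needed.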
-
If the network is generating anywhere near 100 TPS then it will be earning more than $1M per day in fees and our market cap would be closer to Bitcoin's market cap. In other words, this should have zero impact on customer experience over the next several months. By the time we start gaining traction like that we will have worked through the kinks of getting a higher-throughput network layer.
(https://s3-eu5.ixquick.com/cgi-bin/serveimage?url=http%3A%2F%2Flivinginthelead.files.wordpress.com%2F2013%2F02%2Fglass-half-empty-glass-half-full-always-full.jpg%3Fw%3D551%26amp%3Bh%3D714&sp=1a14df7796608387c4a82d5e92684eba)
-
In the old client I used wallet_account_balance_ids to get the IDs, then used the first one in the list as an arg to blockchain_get_balance and confirmed it had the correct balance. That also provided the owner key for that balance, which I then used with wallet_dump_private_key to get the private key for that specific balance. All this in the 0.9.3c client as per xeroc's docs.
in the cli_wallet of graphene I created account delegate.verbaltech with import_key delegate.verbaltech <account private key>. I then tried to import the balance:
import_balance delegate.verbaltech [<balance private key from 0.9.3c>] true
2850496ms th_a wallet.cpp:3138 import_balance ] balances: []
10 assert_exception: Assert Exception
operations.size() > 0: A transaction must have at least one operation
{"trx":{"ref_block_num":0,"ref_block_prefix":0,"expiration":"1970-01-01T00:00:00","operations":[],"extensions":[]}}
th_a transaction.cpp:51 validate
{"name_or_id":"delegate.verbaltech"}
th_a wallet.cpp:3179 import_balance
There's always something in the way - URG! What crazy little syntactical error causes this?
It seems the 02-oct snapshot was not actually made on October 2nd .. I am also missing many of the funds that I thought would be there ..
Doh! No wonder I couldn't import the balance! I did the transfer on the 3rd so it wasn't in the snapshot. This would be an important point to make in your docs xeroc, that never occurred to me. Also, the error message is worthless as to the cause, if it is due to a missing value.
This also highlights the need for a brief explanation of just what the genesis.json file contains that each testnet requires. I asked, but got no replies, whether the genesis.json file needs updating for each testnet solely to pull the latest balances from the 0.9.x chain, or whether there is some other reason. It seems the same genesis.json could be used for multiple test iterations if it were just a snapshot.
-
2046483ms th_a application.cpp:516 get_item ] Couldn't find block 00008e220adc1561e0ceb4964000000000000000 -- corresponding ID in our chain is 00008e220adc1561e0ceb496e2fe61effc44196e
2046486ms th_a application.cpp:432 handle_transaction ] Got transaction from network
./run.sh: line 1: 8080 Segmentation fault ./witness_node --genesis-json "oct-02-testnet-genesis.json" -d oct02 -w \""1.6.0"\" -w \""1.6.1"\" -w \""1.6.2"\" -w \""1.6.3"\" -w \""1.6.4"\" -w \""1.6.5"\" -w \""1.6.6"\" -w \""1.6.7"\" -w \""1.6.8"\" -w \""1.6.9"\" -w \""1.6.10"\" -w \""1.6.11"\" --replay-blockchain
This is what the init node said before it died during the flood. We are looking into what could have caused it.
As far as release plans go, we will protect the network from excessive flooding by rate limiting transaction throughput in the network code. We recently made a change that allowed a peer to fetch more than one transaction at a time and that change is what allowed us to hit 1000+ TPS. That change had the side effect of making the network vulnerable to flooding. For the BTS2 network we will revert to only fetching 1 transaction at a time which will limit throughput of the network to under 100 TPS. This will only be a limit in the P2P code which can be upgraded at any time without requiring a hard fork.
If the network is generating anywhere near 100 TPS then it will be earning more than $1M per day in fees and our market cap would be closer to Bitcoin's market cap. In other words, this should have zero impact on customer experience over the next several months. By the time we start gaining traction like that we will have worked through the kinks of getting a higher-throughput network layer.
Haven't seen you for a few days - missed you a lot.
-
For the BTS2 network we will revert to only fetching 1 transaction at a time which will limit throughput of the network to under 100 TPS. This will only be a limit in the P2P code which can be upgraded at any time without requiring a hard fork.
Does that mean we will not have any problem (performance-wise) continuing with more than 17 witnesses in the early days (we could continue to stick to 101)?
At least until we upgrade to more than 100 TPS in the future...
-
2046483ms th_a application.cpp:516 get_item ] Couldn't find block 00008e220adc1561e0ceb4964000000000000000 -- corresponding ID in our chain is 00008e220adc1561e0ceb496e2fe61effc44196e
2046486ms th_a application.cpp:432 handle_transaction ] Got transaction from network
./run.sh: line 1: 8080 Segmentation fault ./witness_node --genesis-json "oct-02-testnet-genesis.json" -d oct02 -w \""1.6.0"\" -w \""1.6.1"\" -w \""1.6.2"\" -w \""1.6.3"\" -w \""1.6.4"\" -w \""1.6.5"\" -w \""1.6.6"\" -w \""1.6.7"\" -w \""1.6.8"\" -w \""1.6.9"\" -w \""1.6.10"\" -w \""1.6.11"\" --replay-blockchain
This is what the init node said before it died during the flood. We are looking into what could have caused it.
As far as release plans go, we will protect the network from excessive flooding by rate limiting transaction throughput in the network code. We recently made a change that allowed a peer to fetch more than one transaction at a time and that change is what allowed us to hit 1000+ TPS. That change had the side effect of making the network vulnerable to flooding. For the BTS2 network we will revert to only fetching 1 transaction at a time which will limit throughput of the network to under 100 TPS. This will only be a limit in the P2P code which can be upgraded at any time without requiring a hard fork.
If the network is generating anywhere near 100 TPS then it will be earning more than $1M per day in fees and our market cap would be closer to Bitcoin's market cap. In other words, this should have zero impact on customer experience over the next several months. By the time we start gaining traction like that we will have worked through the kinks of getting a higher-throughput network layer.
Is there any other way to cap TPS? (e.g. GRAPHENE_DEFAULT_MAX_TRANSACTION_SIZE)
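Block-size limits are one blunt lever: with a fixed block interval, sustained TPS cannot exceed the maximum block size divided by the minimum transaction size times the interval. A sketch with hypothetical numbers — the real limits live in the chain parameters, and GRAPHENE_DEFAULT_MAX_TRANSACTION_SIZE caps the size of a single transaction rather than TPS directly:

```python
# Upper bound on sustained TPS implied by block-size limits: a block can hold
# at most max_block_bytes of transactions every block_interval_s seconds.
# All numbers below are hypothetical, for illustration only.

def tps_cap(max_block_bytes, min_tx_bytes, block_interval_s):
    """Maximum sustained transactions per second a chain can commit."""
    return max_block_bytes / (min_tx_bytes * block_interval_s)

# e.g. 256 KiB blocks, 100-byte minimum transactions, 3-second blocks:
print(round(tps_cap(256 * 1024, 100, 3)))  # 874
```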
-
VPS Specs:
The following 3 hosts are all Vultr VPS nodes, 1GB RAM, 1 CPU core, 2TB bandwidth, 20GB SSD.
They also run Debian 8.0 64 bit OS:
1) New Jersey -- with DDoS protection
2) Amsterdam -- with DDoS protection
3) Tokyo -- No DDoS protection
These nodes all run Ubuntu 14.04, 64 bit OS:
4) Vultr 4GB RAM, 2 CPU core, 1.6 TB bandwidth, 90GB disk in Sydney, Australia
5) Crown Cloud 2GB RAM, 2 CPU core, 3TB bandwidth, 30GB disk in Frankfurt Germany
6) Crown Cloud 4GB RAM, 2 CPU core, 3TB bandwidth, 40GB disk in Los Angeles, USA
7) Bithost 2GB RAM, 2 CPU core, 3TB bandwidth, 40GB disk in Singapore
-
I will use for the test network a VPS that is located in Germany:
OS: Ubuntu Linux 64bit
CPU: Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.10GHz, 2 cores
RAM: 6 GB
SSD: 500 GB
bandwidth: 100 Mbit/s port
-
If I participate in the TPS test (depends on when it happens) I will use:
6) Crown Cloud 4GB RAM, 2 CPU core, 3TB bandwidth, 40GB disk in Los Angeles, USA
-
https://bitsharestalk.org/index.php/topic,18751.msg241297.html#msg241297