./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-14-test-genesis.json -d test_net -s 104.200.28.117:61705
./cli_wallet -w wallet_name --chain-id 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6
If you have a mac, download the draft version of BitShares 0.9.2 which has a new api call:
https://github.com/bitshares/bitshares/releases/tag/untagged-4166986045ff28284dc4
This download is broken .. it shows a 404 page.
Is there a way to build from source? Ideally for Ubuntu.
git pull
git checkout bitshares
cd libraries
rm -r fc
git clone https://github.com/cryptonomex/fc.git
cd fc/vendor
rm -r secp256k1-zkp
git clone https://github.com/cryptonomex/secp256k1-zkp.git
cd ../../..
git submodule update --init --recursive
cmake .
make bitshares_client
On Ubuntu 14.04 this worked to update the client, assuming you already have the build directory set up. Checking out the bitshares branch breaks the submodules, so you have to pull a couple of them manually. Works and exports keys into a nice encrypted JSON.
7 bad_cast_exception: Bad Cast
Invalid cast from string_type to Array
{"type":"string_type"}
th_a variant.cpp:530 get_array
If you have a mac, download the draft version of BitShares 0.9.2 which has a new api call
https://github.com/bitshares/bitshares/releases/tag/untagged-4166986045ff28284dc4
The import process described in the how-to is a stop-gap measure. We have already done significant work to make that easier. +5%
At the end of the day it will be as easy as:
1. export BTS 1 wallet
2. import into BTS 2
Each of these balances can be investigated via:
BitShares0.9: >>> blockchain_get_balance BTSAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
....
"asset_id": 0, <- asset_id (0: BTS)
"data": {
"owner": "BTSOOOOOOOOOOOOOOOOOOOOOOOOOOOOOWNER", <- address
...
"balance": 0, <- balance
...
The required part (the owner of the balance) is denoted as owner. Pick one or more addresses for BTS balances and dump the corresponding private key(s) with:
BitShares0.9: >>> wallet_dump_private_key BTSOOOOOOOOOOOOOOOOOOOOOOOOOOOOOWNER
"5......." # the <balance wif key>
import_balance betax [5....] true
And I also solved it :
The syntax is: import_balance betax [5....] true
It is expecting an array, as it says in the error. Doh!
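For illustration, the "Invalid cast from string_type to Array" error comes from the RPC layer parsing command arguments as JSON-like values: import_balance expects an Array of WIF keys, and a bare key is just a string. A minimal sketch using Python's (stricter) json module to show the distinction; the key below is a placeholder, not a real key:

```python
import json

# import_balance expects an Array of WIF keys. A bare key token is a
# string_type; wrapping it in brackets makes it the expected Array.
def json_kind(arg: str) -> str:
    value = json.loads(arg)
    return "Array" if isinstance(value, list) else "string_type"

print(json_kind('"5Kxxxx"'))    # string_type: a bare key is just a string
print(json_kind('["5Kxxxx"]'))  # Array: brackets make it the expected array
```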
Oh .. thanks for pointing this out. I will fix the wiki as soon as I get back to my computer .. sorry for the trouble
Thanks clayop.. your brackets on the other post hinted the answer. Xeroc don't worry your instructions are really good.
Another problem: now when I try to start the witness, I get a decimal parse error for --witness-id "1.6.5155"
parse_error_exception: Parse Error
Can't parse a number with two decimal places
{}
th_a json.cpp:277 number_from_stream
{"str":"1.6.5155"}
th_a json.cpp:478 from_string
rethrow
{}
th_a witness.cpp:88 plugin_initialize
I have tried to configure the witness-id in the config.ini but I get this warning:
3312033ms th_a witness.cpp:70 plugin_initialize ] key_id_to_wif_pair: ["GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV","wif key"]
3312121ms th_a thread.cpp:95 thread ] name:ntp tid:139624819738368
3312121ms ntp ntp.cpp:77 request_now ] resolving... ["pool.ntp.org",123]
3312121ms th_a thread.cpp:95 thread ] name:p2p tid:139624800855808
3312289ms ntp ntp.cpp:81 request_now ] sending request to 172.82.134.52:123
3312289ms th_a application.cpp:116 reset_p2p_node ] Adding seed node 104.200.28.117:61705
3312290ms th_a application.cpp:128 reset_p2p_node ] Configured p2p node to listen on 0.0.0.0:53031
3312292ms th_a application.cpp:178 reset_websocket_serv ] Configured websocket rpc to listen on 127.0.0.1:8090
3312292ms th_a witness.cpp:143 plugin_startup ] No witnesses configured! Please add witness IDs and private keys to configuration.
3312292ms th_a main.cpp:165 main ] Started witness node on a chain with 0 blocks.
3312292ms th_a main.cpp:166 main ] Chain ID is cefacd8adb8bee2bf3b757e882d2828297ceb67b1882982cbde882688ecb46a8
3312395ms ntp ntp.cpp:147 read_loop ] received ntp reply from 172.82.134.52:123
3312395ms ntp ntp.cpp:161 read_loop ] ntp offset: 1679989, round_trip_delay 106243
3312395ms ntp ntp.cpp:177 read_loop ] ntp_delta_time updated to 1679989
Any thoughts ?
Note: I am doing this in between family time, hence the delay on the postings
--witness-id '"1.6.xxxx"'
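To see why the extra quoting matters: witness_node parses --witness-id as a JSON value, and the shell strips one layer of quotes. So --witness-id "1.6.5155" delivers the bare token 1.6.5155, which a JSON parser rejects as a number with two decimal points, while '"1.6.5155"' delivers "1.6.5155", a proper JSON string. Python's json module reproduces the same split:

```python
import json

# The shell strips one layer of quotes, so the argument that reaches the
# JSON parser must still carry its own double quotes to be a string.
def is_valid_json(arg: str) -> bool:
    try:
        json.loads(arg)
        return True
    except ValueError:  # JSONDecodeError subclasses ValueError
        return False

print(is_valid_json('1.6.5155'))    # False - the "two decimal places" parse error
print(is_valid_json('"1.6.5155"'))  # True  - what the plugin expects
```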
# ID of witness controlled by this node (e.g. "1.6.0", quotes are required, may specify multiple times)
witness-id = "1.6.1435"
# Tuple of [PublicKey, WIF private key] (may specify multiple times)
private-key = ["GPHxxxxxxxxxx", "privatexxxxxxx"]
./witness_node --rpc-endpoint 127.0.0.1:8090 --enable-stale-production -w '"1.6.0"' '"1.6.1"' '"1.6.2"' '"1.6.3"' '"1.6.4"' '"1.6.5"' '"1.6.6"' '"1.6.7"' '"1.6.8"' '"1.6.9"'
import_balance in.abit [5K58****dGSz,5KSE****EHJd,5HuJ****uzj7,5KPR****zmq8,5KUn****kzno,5Jtc****D4R4,5Jus****EpCH,5JNv****KfWz] true
10 assert_exception: Assert Exception
priv_key: Invalid Private Key
{"key":"5Jtc****D4R4"}
th_a wallet.cpp:2710 import_balance
{"name_or_id":"in.abit"}
th_a wallet.cpp:2757 import_balance
It works if I remove that key from the array. It also works if I import that key alone. I guess array parameters are a new addition and xeroc has not been able to catch up in his document.
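A hypothetical pre-check before handing an array of keys to import_balance: mainnet-style WIF private keys are Base58Check strings of 51 characters starting with '5', so a quick charset/length filter catches obvious paste damage (the wallet still verifies the real checksum). The keys below are the masked forum examples, not real keys:

```python
# Base58 alphabet excludes 0, O, I, and l.
BASE58 = set("123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz")

def looks_like_wif(key: str) -> bool:
    # Shape check only: prefix, length, and Base58 characters.
    return key.startswith("5") and len(key) == 51 and set(key) <= BASE58

keys = ["5K58****dGSz", "5Jtc****D4R4"]  # masked placeholders from the post
print([looks_like_wif(k) for k in keys])  # [False, False]: masked/damaged keys
```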
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-14-test-genesis.json -d test_net -s 104.200.28.117:61705 --enable-stale-production -w '"1.6.5155"'
I have my key / private key in the config.ini. Well, next issue: my key / private key are not loading (they are different ones from the config), and I get a parsing error when passing them as parameters:
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-14-test-genesis.json -s 104.200.28.117:61705 --witness-id '"1.6.5155"' --private-key '["GP...................................oy", "5KW.......................P8a"]'
726136ms th_a witness.cpp:88 plugin_initialize ] 10 assert_exception: Assert Exception
base58str.substr( 0, prefix_len ) == prefix:
{"base58str":"5J9YY***********************************wVCN"}
th_a types.cpp:54 public_key_type
726144ms th_a main.cpp:173 main ] Exiting with error:
10 assert_exception: Assert Exception
base58str.substr( 0, prefix_len ) == prefix:
{"base58str":"5J9YY***********************************wVCN"}
th_a types.cpp:54 public_key_type
rethrow
{}
th_a witness.cpp:88 plugin_initialize
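The assertion at types.cpp:54 only checks that a public key string starts with the chain prefix ("GPH" on this test net). Seeing a 5J... WIF in that error suggests the [PublicKey, WIF] pair was supplied in the wrong order. A minimal sketch of the same check, with the failing value shortened for illustration:

```python
# Mirrors base58str.substr(0, prefix_len) == prefix from types.cpp:54.
def check_public_key(base58str: str, prefix: str = "GPH") -> bool:
    return base58str[: len(prefix)] == prefix

print(check_public_key("GPH65XNUxWdYGqGyW9NtXdRpNntumLYT1cJ7CNE7F78Pwxrnx6cbV"))  # True
print(check_public_key("5J9YYwVCN"))  # False: a private key where a public one belongs
```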
unlocked >>> get_witness in.abit
{
"id": "1.6.5156",
"witness_account": "1.2.38793",
"signing_key": "GPH65XNUxWdYGqGyW9NtXdRpNntumLYT1cJ7CNE7F78Pwxrnx6cbV",
"next_secret_hash": "d210b0644edfcee411f00058dd862279402c61b4",
"previous_secret": "0000000000000000000000000000000000000000",
"vote_id": "1:5284",
"total_votes": 2389598829,
"url": "url-to-proposal"
}
After I finally got it right it hung for five to ten minutes.
It might be a recent transaction that wasn't in the snapshot.
Solved. Stupid me.. :-[
Edit: Just noticed you said all the 5Js, so it is a pattern... I'll check when my witness finishes syncing, and see if I have a 5J to test.
1748966ms th_a application.cpp:342 handle_block ] Got block #104467 from network
1749534ms th_a application.cpp:342 handle_block ] Got block #104468 from network
1757502ms th_a application.cpp:342 handle_block ] Got block #104469 from network
1759976ms th_a application.cpp:342 handle_block ] Got block #104475 from network
1759977ms th_a application.cpp:364 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:51 push_block
{"new_block":{"previous":"0001981a644a3dddd4e2c7db84bb49f6fec77355","timestamp":"2015-08-15T23:29:16","witness":"1.6.5","next_secret_hash":"b3a177eb67d1e7b782e54e1ad216b87b79dc440d","previous_secret":"933e421c25e571c2fc9bf5fab56f5140cfb5f753","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f3e0a2d9b6d6ee76e3d90c77eb1680380c4cf3fe87646089e83bd7f8446a8c79878c2a81e98329ceb3c753b958bdde66ef4f147c8f7d254bc8774825810dedda5","transactions":[]}}
th_a db_block.cpp:173 _push_block
Why were blocks 104470~104474 missed?
I am currently on #105026, so you should be on the same. Can you post your node's IP/port so that others can add it to the peers list?
45.55.6.216:42317 is one of my nodes.
I'm still not in sync :(
Every time I ctrl+c kill the witness and restart, it wipes everything, then resyncs slowly, then gets stuck somewhere.
2088883ms th_a application.cpp:264 startup ] Detected unclean shutdown. Replaying blockchain...
2088883ms th_a application.cpp:227 operator() ] Initializing database...
2099945ms th_a db_management.cpp:67 wipe ] Wiping database
2099968ms th_a object_database.cpp:82 wipe ] Wiping object_database.
100% CPU while wiping. Is it quicker to just remove the object_database directory and the witness_data directory before restart?
What's going on here?
3299997ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.1 scheduled_time: 2015-08-16T08:55:00 now: 2015-08-16T08:55:00
3300134ms th_a application.cpp:342 handle_block ] Got block #134274 from network
3300436ms th_a application.cpp:342 handle_block ] Got block #134274 from network
3300538ms th_a application.cpp:437 get_item ] Request for item {"item_type":1001,"item_hash":"84dc9853e4ddd3941281f6260886aef9713829f0"}
3300997ms th_a witness.cpp:239 block_production_loo ] slot: 0 scheduled_witness: 1.6.0 scheduled_time: 1970-01-01T00:00:00 now: 2015-08-16T08:55:01
3300998ms th_a witness.cpp:207 operator() ] Not producing block because head block time is in the future (is the system clock set correctly?).
3301997ms th_a witness.cpp:239 block_production_loo ] slot: 0 scheduled_witness: 1.6.0 scheduled_time: 1970-01-01T00:00:00 now: 2015-08-16T08:55:02
3301998ms th_a witness.cpp:207 operator() ] Not producing block because head block time is in the future (is the system clock set correctly?).
3302997ms th_a witness.cpp:239 block_production_loo ] slot: 0 scheduled_witness: 1.6.0 scheduled_time: 1970-01-01T00:00:00 now: 2015-08-16T08:55:03
3302998ms th_a witness.cpp:207 operator() ] Not producing block because head block time is in the future (is the system clock set correctly?).
3303997ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.1439 scheduled_time: 2015-08-16T08:55:04 now: 2015-08-16T08:55:04
I was seeing the same thing. Restarting the witness seemed to fix it though.
I have the same issues every so often, and I have ntp running
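For reference, the "ntp offset" / "round_trip_delay" numbers in the node's logs come from the standard NTP calculation (RFC 5905), where t1 is the client send time, t2 the server receive time, t3 the server send time, and t4 the client receive time. A sketch with made-up microsecond timestamps:

```python
# Standard NTP offset/delay formulas (RFC 5905).
def ntp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2  # estimated clock offset
    delay = (t4 - t1) - (t3 - t2)         # round-trip network delay
    return offset, delay

# Illustrative timestamps, not taken from the logs above:
offset, delay = ntp_offset_and_delay(t1=0, t2=1_000_050, t3=1_000_060, t4=110)
print(offset, delay)  # 1000000.0 100
```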
708971ms th_a application.cpp:486 get_blockchain_synop ] reference_point: 0000000000000000000000000000000000000000 number_of_blocks_after_reference_point: 0 result: ["00003f86b4220fb29e16036f5d696bb5243f9a07","00013f8685820e64a33f3b87f8cf6474b0e7e5c7","0001bf8635d391238db9e93931aae721abd53bab","0001ff868e9aec5f1f14cfb66e5f49fb0280329e","00021f86654dc5d0c4b92583d55fda1c9f7dd82f","00022f869f1a17f045c6952b2e0f449024df7742","0002378624f2f684ed05d697aea7aff3d0b3513a","00023b8665b1af0e5fd6785b618446d70b7b7c16","00023d8649c554e0c149ee44b4ea746d1ace0d85","00023e86432bacf440402bfc0cc09ef1918400f4","00023f0670f542d7bb358bc620100fd3120d87c2","00023f464af6c15db54edeb71b84fb9ced19a473","00023f66bd846878966710660b72c7f1ddfe0412","00023f762feec6a58f31a1fbf74f2f6722cd1235","00023f7e66d776c07eb3ef892981761da11c5dbb","00023f8250a4aa2c2143a226b50074a05d66a788","00023f8477a53510d63dca1dd1de05a67d251d2d","00023f853fe584f1891a6f246f2ed7238a6bcabd","00023f868da02c32c1d8c6faef53b4a0c51ade92"]
Then the syncing progress stopped with no more blocks:
1130033ms p2p tcp_socket.cpp:162 bind ] Exception binding outgoing connection to desired local endpoint: bind: Address already in use
with no ports besides NTP and SSH in use .. no idea what port the client wants to bind ..
Thanks. It seems my node connected to 45.55.6.216:59189.
I got this error on my non-witness node.
I'm getting the same error on my delegate node:
3315889ms th_a application.cpp:330 handle_block ] Got block #147334 from network
862723ms ntp ntp.cpp:77 request_now ] resolving... ["pool.ntp.org",123]
862729ms ntp ntp.cpp:81 request_now ] sending request to 129.6.15.30:123
1162730ms ntp ntp.cpp:77 request_now ] resolving... ["pool.ntp.org",123]
1162736ms ntp ntp.cpp:81 request_now ] sending request to 69.50.219.51:123
1162765ms ntp ntp.cpp:147 read_loop ] received ntp reply from 69.50.219.51:123
1162765ms ntp ntp.cpp:161 read_loop ] ntp offset: 1256929, round_trip_delay 28530
1162765ms ntp ntp.cpp:177 read_loop ] ntp_delta_time updated to 1256929
And in my other node (with an active witness) I got the same error messages:
1568000ms th_a witness.cpp:239 block_production_loo ] slot: 26417 scheduled_witness: 1.6.1446 scheduled_time: 2015-08-16T16:26:08 now: 2015-08-16T16:26:08
1568001ms th_a witness.cpp:242 block_production_loo ] Witness 1.6.1446 production slot has arrived; generating a block now...
1568003ms th_a db_block.cpp:167 _push_block ] Failed to push new block:
3070000 undo_database_exception: undo database exception
The database does not have enough undo history to support a blockchain with so many missed blocks. Please add a checkpoint if you would like to continue applying blocks beyond this point.
{"recently_missed":53819,"max_undo":1000}
th_a db_update.cpp:68 update_global_dynamic_data
{"next_block.block_num()":134420}
th_a db_block.cpp:448 _apply_block
1568003ms th_a witness.cpp:265 block_production_loo ] Got exception while generating block:
...
74002ms th_a witness.cpp:191 operator() ] Not producing block because production is disabled.
775002ms th_a witness.cpp:239 block_production_loo ] slot: 11858 scheduled_witness: 1.6.0 scheduled_time: 2015-08-16T17:12:55 now: 2015-08-16T17:12:55
775002ms th_a witness.cpp:191 operator() ] Not producing block because production is disabled.
776002ms th_a witness.cpp:239 block_production_loo ] slot: 11859 scheduled_witness: 1.6.2 scheduled_time: 2015-08-16T17:12:56 now: 2015-08-16T17:12:56
776002ms th_a witness.cpp:191 operator() ] Not producing block because production is disabled.
777002ms th_a witness.cpp:239 block_production_loo ] slot: 11860 scheduled_witness: 1.6.1435 scheduled_time: 2015-08-16T17:12:57 now: 2015-08-16T17:12:57
777002ms th_a witness.cpp:191 operator() ] Not producing block because production is disabled.
{
"head_block_num": 147334,
"head_block_id": "00023f868da02c32c1d8c6faef53b4a0c51ade92",
"head_block_age": "4 hours old",
"next_maintenance_time": "4 hours ago",
"chain_id": "081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6"
get_dynamic_global_properties
{
"id": "2.1.0",
"random": "cac65461d8694c0c44c9c0909041bb8d15121e79",
"head_block_number": 147334,
"head_block_id": "00023f868da02c32c1d8c6faef53b4a0c51ade92",
"time": "2015-08-16T13:55:17",
"current_witness": "1.6.1",
"next_maintenance_time": "2015-08-16T14:00:00",
"witness_budget": 112918612,
"accounts_registered_this_interval": 0,
"recently_missed_count": 998,
"dynamic_flags": 0
}
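As a sanity check when reading output like the above: in Graphene the first 4 bytes of a block_id encode the block number, so head_block_id can be cross-checked against head_block_number directly:

```python
# The leading 8 hex characters of a Graphene block_id are the block number.
head_block_id = "00023f868da02c32c1d8c6faef53b4a0c51ade92"
head_block_num = int(head_block_id[:8], 16)
print(head_block_num)  # 147334
```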
Node up - you can vote it in.
204.44.115.139:61705
with witness:
get_witness delegate.ihashfury
{
"id": "1.6.1504",
I think it is ready to produce blocks but it is not voted in yet.
1784002ms th_a witness.cpp:191 operator() ] Not producing block because production is disabled.
1785002ms th_a witness.cpp:239 block_production_loo ] slot: 16468 scheduled_witness: 1.6.0 scheduled_time: 2015-08-16T18:29:45 now: 2015-08-16T18:29:45
1785002ms th_a witness.cpp:191 operator() ] Not producing block because production is disabled.
1786001ms th_a witness.cpp:239 block_production_loo ] slot: 16469 scheduled_witness: 1.6.2 scheduled_time: 2015-08-16T18:29:46 now: 2015-08-16T18:29:46
1786002ms th_a witness.cpp:191 operator() ] Not producing block because production is disabled.
1787002ms th_a witness.cpp:239 block_production_loo ] slot: 16470 scheduled_witness: 1.6.1435 scheduled_time: 2015-08-16T18:29:47 now: 2015-08-16T18:29:47
1787002ms th_a witness.cpp:191 operator() ] Not producing block because production is disabled.
1788001ms th_a witness.cpp:239 block_production_loo ] slot: 16471 scheduled_witness: 1.6.5155 scheduled_time: 2015-08-16T18:29:48 now: 2015-08-16T18:29:48
1788002ms th_a witness.cpp:191 operator() ] Not producing block because production is disabled.
My node is stuck at 134419 as well.. is it on the right fork?
I'm not running with --enable-stale-production though..
info
{
"head_block_num": 134419,
"head_block_id": "00020d137298dab08e3147893e61dfed75ad7d0e",
"head_block_age": "8 hours old",
"next_maintenance_time": "8 hours ago",
"chain_id": "081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6",
"active_witnesses": [
"1.6.0",
"1.6.1",
"1.6.2",
"1.6.3",
"1.6.4",
"1.6.1435",
"1.6.1439",
"1.6.1446",
"1.6.5155",
"1.6.5156"
],
"active_committee_members": [
"1.5.0",
"1.5.1",
"1.5.2",
"1.5.3",
"1.5.4",
"1.5.5",
"1.5.6",
"1.5.7",
"1.5.8",
"1.5.9"
],
"entropy": "e8f56f18210fc56faa891342f94cbade418c834f"
}
Edit: I'll try to restart with BM's seed node only
2015-08-16T18:06:34 p2p:message read_loop process_block_during ] received a sync block from peer 104.200.28.117:61705 node.cpp:3073
2015-08-16T18:06:34 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] in process_backlog_of_sync_blocks node.cpp:2959
2015-08-16T18:06:34 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] currently 1 blocks in the process of being handled node.cpp:2965
2015-08-16T18:06:34 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] currently 1 sync items to consider node.cpp:2995
2015-08-16T18:06:34 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] currently 0 sync items to consider node.cpp:2995
2015-08-16T18:06:34 p2p:process_backlog_of_sync_blocks process_backlog_of_s ] leaving process_backlog_of_sync_blocks, 1 processed node.cpp:3056
2015-08-16T18:06:34 p2p:process_backlog_of_sync_blocks trigger_fetch_sync_i ] Triggering fetch sync items loop now node.cpp:1081
2015-08-16T18:06:34 p2p:fetch_sync_items_loop fetch_sync_items_loo ] beginning another iteration of the sync items loop node.cpp:1020
2015-08-16T18:06:34 p2p:fetch_sync_items_loop fetch_sync_items_loo ] no sync items to fetch right now, going to sleep node.cpp:1070
2015-08-16T18:06:34 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] in send_sync_block_to_node_delegate() node.cpp:2777
2015-08-16T18:06:34 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Failed to push sync block 134318 (id:00020cae30db6d247c1b3e356a55e93808e1290d): client rejected sync block sent by peer: {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"fork_database.cpp","line":51,"method":"push_block","hostname":"","thread_name":"th_a","timestamp":"2015-08-16T18:06:34"},"format":"itr != _index.get<block_id>().end(): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":173,"method":"_push_block","hostname":"","thread_name":"th_a","timestamp":"2015-08-16T18:06:34"},"format":"","data":{"new_block":{"previous":"00020cade4b2e23a805d33e5585cd501b327efd5","timestamp":"2015-08-16T08:55:51","witness":"1.6.1435","next_secret_hash":"852c62c5a7e927cfff752276b543b1731a58b42f","previous_secret":"ed9b9f8a4624bbb6eb2dba63336b2643a5b5561b","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f06bab5ea47138f12a31f15fb82962c87d5679b3e176002b78422f6143301870676208b51eb11bf4320a05282171e95f5c33992e3e4034e63eaf1b57f087a8827","transactions":[]}}},{"context":{"level":"warn","file":"application.cpp","line":373,"method":"handle_block","hostname":"","thread_name":"th_a","timestamp":"2015-08-16T18:06:34"},"format":"","data":{"blk_msg":{"block":{"previous":"00020cade4b2e23a805d33e5585cd501b327efd5","timestamp":"2015-08-16T08:55:51","witness":"1.6.1435","next_secret_hash":"852c62c5a7e927cfff752276b543b1731a58b42f","previous_secret":"ed9b9f8a4624bbb6eb2dba63336b2643a5b5561b","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f06bab5ea47138f12a31f15fb82962c87d5679b3e176002b78422f6143301870676208b51eb11bf4320a05282171e95f5c33992e3e4034e63eaf1b57f087a8827","transactions":[]},"block_id":"00020cae30db6d247c1b3e356a55e93808e1290d"},"sync_mode":true}}]} node.cpp:2813
2015-08-16T18:06:34 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] disconnecting client 104.200.28.117:61705 because it offered us the rejected block node.cpp:2928
2015-08-16T18:06:34 p2p:send_sync_block_to_node_delegate send_message ] peer_connection::send_message() enqueueing message of type 5011 for peer 104.200.28.117:61705 peer_connection.cpp:365
2015-08-16T18:06:34 p2p:send_sync_block_to_node_delegate send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-16T18:06:34 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-16T18:06:34 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5011 for peer 104.200.28.117:61705 peer_connection.cpp:291
... received a sync block from peer 104.200.28.117:61705 ...
... block 134318 (id:00020cae30db6d247c1b3e356a55e93808e1290d) ...
... "timestamp":"2015-08-16T18:06:34" ...
{"previous":"00020cade4b2e23a805d33e5585cd501b327efd5","timestamp":"2015-08-16T08:55:51"
...
Yep - I have voted but only had a small balance
Lots of cut, paste and import balance keys :P
I still can't get past 147334. Anyone have a higher blockhead?
If anyone wants to keep testing, I have started a new chain. Just change the seed node to 45.55.6.216:1776
Or set a different data directory. Re-syncing.
You can use the same wallet, but I am assuming you will need to import_balance again.
oh. and delete the chain folder or launch with --resync-blockchain
locked >>> info
info
{
"head_block_num": 147334,
"head_block_id": "00023f868da02c32c1d8c6faef53b4a0c51ade92",
"head_block_age": "23 hours old",
"next_maintenance_time": "22 hours ago",
"chain_id": "081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6",
"active_witnesses": [
"1.6.0",
"1.6.1",
"1.6.2",
"1.6.3",
"1.6.4",
"1.6.1435",
"1.6.1439",
"1.6.1446",
"1.6.5155",
"1.6.5156"
],
"active_committee_members": [
"1.5.0",
"1.5.1",
"1.5.2",
"1.5.3",
"1.5.4",
"1.5.5",
"1.5.6",
"1.5.7",
"1.5.8",
"1.5.9"
],
"entropy": "cac65461d8694c0c44c9c0909041bb8d15121e79"
}
I have looked into why this test network died and it was due to low witness participation... test witnesses joined, got elected, and failed to produce blocks! >:( bad, bad test witnesses! :P
Can we have the next blockchain start with N=101 witnesses? That way we can gradually remove the initial delegates, which are supposed to be more reliable, aren't they?
Under Graphene we have some "safety" features that may be overly strict in the context of a test network.
The idea is that nodes increase the amount of undo history they track by 2 every time a block is missed and decrease it by 1 every time a block is produced. Nodes are configured to maintain a maximum undo history, and once that limit has been reached no new blocks may be pushed without a checkpoint to clear the history. Since no one was around to produce a checkpoint, block production simply stopped.
In a production environment we would expect that witness participation rate shouldn't fall below 66% for very long and if it did then all of the witnesses would be actively monitoring and repairing the network by setting a checkpoint.
With 1 second blocks it probably didn't take much wall-clock time to hit the limit.
Note: the reason for this limit is to make sure that no blockchain can exist where a new node could get "stuck" on a dead branch and unable to automatically rejoin the main network.
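The rule described above (undo history grows by 2 per missed block, shrinks by 1 per produced block, and production halts at a cap) can be sketched as a small simulation. The cap below is a made-up number, not the actual Graphene constant; this is only an illustration of why low participation kills the chain quickly:

```python
import random

# Illustrative simulation of the undo-history rule described above:
# +2 per missed block, -1 per produced block; once the tracked history
# reaches the configured maximum, block production halts.
MAX_UNDO_HISTORY = 1000  # hypothetical cap, not the actual Graphene constant

def step(undo_history, block_produced):
    """New undo-history size after one block slot."""
    return max(0, undo_history - 1) if block_produced else undo_history + 2

def slots_until_halt(participation, seed=42):
    """Slots until the cap is reached, with each slot's witness producing
    its block with probability `participation` (keep it below 2/3)."""
    rng = random.Random(seed)  # deterministic for the example
    undo, slots = 0, 0
    while undo < MAX_UNDO_HISTORY:
        undo = step(undo, rng.random() < participation)
        slots += 1
    return slots

# Expected drift per slot is 2*(1-p) - p = 2 - 3p, which is positive
# exactly when participation p < 2/3 -- consistent with the 66% figure
# quoted above. With 1-second blocks and low participation the cap is
# reached within minutes of wall-clock time.
print(slots_until_halt(0.10))
```

Note the break-even participation of 2/3 falls directly out of the +2/-1 rule, which matches the 66% production-environment figure mentioned above.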
What's going on here?
3299997ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.1 scheduled_time: 2015-08-16T08:55:00 now: 2015-08-16T08:55:00
3300134ms th_a application.cpp:342 handle_block ] Got block #134274 from network
3300436ms th_a application.cpp:342 handle_block ] Got block #134274 from network
3300538ms th_a application.cpp:437 get_item ] Request for item {"item_type":1001,"item_hash":"84dc9853e4ddd3941281f6260886aef9713829f0"}
3300997ms th_a witness.cpp:239 block_production_loo ] slot: 0 scheduled_witness: 1.6.0 scheduled_time: 1970-01-01T00:00:00 now: 2015-08-16T08:55:01
3300998ms th_a witness.cpp:207 operator() ] Not producing block because head block time is in the future (is the system clock set correctly?).
3301997ms th_a witness.cpp:239 block_production_loo ] slot: 0 scheduled_witness: 1.6.0 scheduled_time: 1970-01-01T00:00:00 now: 2015-08-16T08:55:02
3301998ms th_a witness.cpp:207 operator() ] Not producing block because head block time is in the future (is the system clock set correctly?).
3302997ms th_a witness.cpp:239 block_production_loo ] slot: 0 scheduled_witness: 1.6.0 scheduled_time: 1970-01-01T00:00:00 now: 2015-08-16T08:55:03
3302998ms th_a witness.cpp:207 operator() ] Not producing block because head block time is in the future (is the system clock set correctly?).
3303997ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.1439 scheduled_time: 2015-08-16T08:55:04 now: 2015-08-16T08:55:04
I have the same issues every so often, and I have ntp running
For now the dev team can't even publish a testnet for general users. That shows how low quality they are; no wonder the price acts like shit.
Who of you can recall bts testnets 1 through 12?
[emoji14]
@bytemaster So you meant all of five test witnesses were missing blocks?
My test witness missed all blocks since I haven't enabled block production.. ;)
I may have done the same but I don't think I was voted in.
How do you enable block production?
Nothing is needed beyond launching the witness_node with the witness ID and public/private key pair. You can also add them to the config.ini if you prefer.
I was producing blocks, but every so often I had the ntp error.
FWIW - looking at my Azure VPS options for Graphene, I have the G5 machine size option: 32 cores, 448GB RAM [1]
Edit: the Amazon option has more RAM.
No, I'm not currently running this size during testing.
[1] http://azure.microsoft.com/en-us/pricing/details/virtual-machines/#Linux
Not all locations have the G series instances available.
So what is the state of things at this point? I'm 90% finished rebuilding fresh from src (I presume there have been many changes since 8/9) on 2 systems.
Are you guys using the procedure in the readme for "Running private testnet", or xeroc's process for importing balances from 0.9.2?
I was producing blocks, but every so often I had the ntp error.
I have seen that NTP error from time to time myself. We will look into it.
Vikram and I have reviewed the testnet and have identified a patch that would allow us to revive the existing test net by specifying a checkpoint. Once we are sure we can revive the testnet we will start a new network with at least 100 witness slots so that it is less likely for a few bad apples to result in this issue.
It could be that you produced blocks on a fork. If any of you have evidence that you were on a fork (have a block that wasn't included in the main chain) then that is something I am very interested in.
So what is the state of things at this point? I'm 90% finished rebuilding fresh from src (I presume there have been many changes since 8/9) on 2 systems.
Are you guys using the procedure in the readme for "Running private testnet", or xeroc's process for importing balances from 0.9.2?
Xeroc's with the bytemaster's start up parameters, see my post.
programs/witness_node/witness_node -s 104.200.28.117:61705 --rpc-endpoint 127.0.0.1:8090 --genesis-json aug-14-test-genesis.json
Download the genesis JSON in the OP, and follow the commands to launch the witness node and cli wallet. To import your delegate I would suggest following the link about becoming a delegate. His Python script is especially useful.
Are you guys using the procedure in the readme for "Running private testnet", or xeroc's process for importing balances from 0.9.2?
Xeroc's with the bytemaster's start up parameters, see my post.
I've been monitoring this thread all day, but I'm not sure which of your posts you're referring to, or what you mean by "bytemaster's parameters".
When I tried this on the 9th I was unable to import a balance or register a witness. I am starting to try that now. According to xeroc's instructions under git/graphene/docs, he says to start the witness with a downloaded genesis block (https://drive.google.com/open?id=0B_GVo0GoC_v_S3lPOWlUbFJFWTQ):
programs/witness_node/witness_node -s 104.200.28.117:61705 --rpc-endpoint 127.0.0.1:8090 --genesis-json aug-14-test-genesis.json
Is this still the correct IP:port for the seed node and genesis block to use?
deletech@Jessie:~/bts2.0$ ./cli_wallet -s ws://127.0.0.1:8090
Logging RPC to file: logs/rpc/rpc.log
3273995ms th_a main.cpp:111 main ] key_to_wif( committee_private_key ): 5KCBDTcyDqzsqehcb52tW5nU6pXife6V2rX9Yf7c3saYSzbDZ5W
3273995ms th_a main.cpp:115 main ] nathan_pub_key: GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
3273995ms th_a main.cpp:116 main ] key_to_wif( nathan_private_key ): 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
Starting a new wallet with chain ID 3fda83797955b6d7c3916a08bab36f9a86169c23298a5f519e48dff1d4e6475a (from egenesis)
3273995ms th_a main.cpp:163 main ] wdata.ws_server: ws://127.0.0.1:8090
3273997ms th_a main.cpp:168 main ] wdata.ws_user: wdata.ws_password:
0 exception: unspecified
Remote server gave us an unexpected chain_id
{"remote_chain_id":"a629fd737e0b4d8becc25e33b90984840723b38cd4430049693059f6e396b89c","chain_id":"3fda83797955b6d7c3916a08bab36f9a86169c23298a5f519e48dff1d4e6475a"}
th_a wallet.cpp:375 wallet_api_impl
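The "unexpected chain_id" error above means the cli_wallet expects one genesis state while the node is running another; the chain ID is a 256-bit hash derived from the genesis state. A rough conceptual sketch follows. Assumption flagged loudly: this hashes normalized JSON text, which will NOT reproduce a real Graphene chain ID (the real derivation hashes the binary-serialized genesis state); it only illustrates that the ID is deterministic in the genesis content:

```python
import hashlib
import json

def sketch_chain_id(genesis_json_text):
    """Illustrative only: derive a 256-bit id from a genesis document.
    Graphene derives chain_id from the serialized genesis state, so
    hashing normalized JSON like this will NOT match a real chain ID."""
    # Normalize so that formatting differences don't change the id.
    state = json.loads(genesis_json_text)
    canonical = json.dumps(state, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical genesis fragment, just to show the shape of the result:
genesis = '{"initial_timestamp": "2015-08-14T00:00:00", "initial_supply": 1000000}'
print(sketch_chain_id(genesis))  # 64 hex characters, like the ids in the logs
```

The practical takeaway is the same either way: two nodes (or a wallet and a node) only agree on a chain ID if they started from the same genesis, which is why mixing genesis files or seed nodes from different testnets produces this error.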
Thanks puppies.
I'll go back to the OP here; I thought it was really old and outdated by now, but I guess not.
I'll use the 45.55.6.216:1776 seed node you mentioned.
I started the witness with the unedited config.ini file and the seed from xeroc's write-up. It runs, but the cli crashes:
deletech@Jessie:~/bts2.0$ ./cli_wallet -s ws://127.0.0.1:8090
Logging RPC to file: logs/rpc/rpc.log
3273995ms th_a main.cpp:111 main ] key_to_wif( committee_private_key ): 5KCBDTcyDqzsqehcb52tW5nU6pXife6V2rX9Yf7c3saYSzbDZ5W
3273995ms th_a main.cpp:115 main ] nathan_pub_key: GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
3273995ms th_a main.cpp:116 main ] key_to_wif( nathan_private_key ): 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
Starting a new wallet with chain ID 3fda83797955b6d7c3916a08bab36f9a86169c23298a5f519e48dff1d4e6475a (from egenesis)
3273995ms th_a main.cpp:163 main ] wdata.ws_server: ws://127.0.0.1:8090
3273997ms th_a main.cpp:168 main ] wdata.ws_user: wdata.ws_password:
0 exception: unspecified
Remote server gave us an unexpected chain_id
{"remote_chain_id":"a629fd737e0b4d8becc25e33b90984840723b38cd4430049693059f6e396b89c","chain_id":"3fda83797955b6d7c3916a08bab36f9a86169c23298a5f519e48dff1d4e6475a"}
th_a wallet.cpp:375 wallet_api_impl
That's probably expected. I'll review the OP and see if I can get up to speed...
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json ../Downloads/aug-14-test-genesis.json -d test_net -s 45.55.6.216:1776 --enable-stale-production -w \""1.6.0"\" \""1.6.1"\" \""1.6.2"\" \""1.6.3"\" \""1.6.4"\"
./cli_wallet -w wallet_name --chain-id 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6
Logging RPC to file: logs/rpc/rpc.log
1677766ms th_a main.cpp:111 main ] key_to_wif( committee_private_key ): 5KCBDTcyDqzsqehcb52tW5nU6pXife6V2rX9Yf7c3saYSzbDZ5W
1677766ms th_a main.cpp:115 main ] nathan_pub_key: GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
1677766ms th_a main.cpp:116 main ] key_to_wif( nathan_private_key ): 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
Starting a new wallet with chain ID 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6 (from CLI)
1677766ms th_a main.cpp:163 main ] wdata.ws_server: ws://localhost:8090
1677769ms th_a main.cpp:168 main ] wdata.ws_user: wdata.ws_password:
0 exception: unspecified
Remote server gave us an unexpected chain_id
{"remote_chain_id":"a629fd737e0b4d8becc25e33b90984840723b38cd4430049693059f6e396b89c","chain_id":"081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6"}
th_a wallet.cpp:375 wallet_api_impl
With the witness run as I indicated in my last post, I still can't get the cli to run using the info in the OP:
./cli_wallet -w wallet_name --chain-id 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6
Logging RPC to file: logs/rpc/rpc.log
1677766ms th_a main.cpp:111 main ] key_to_wif( committee_private_key ): 5KCBDTcyDqzsqehcb52tW5nU6pXife6V2rX9Yf7c3saYSzbDZ5W
1677766ms th_a main.cpp:115 main ] nathan_pub_key: GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
1677766ms th_a main.cpp:116 main ] key_to_wif( nathan_private_key ): 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
Starting a new wallet with chain ID 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6 (from CLI)
1677766ms th_a main.cpp:163 main ] wdata.ws_server: ws://localhost:8090
1677769ms th_a main.cpp:168 main ] wdata.ws_user: wdata.ws_password:
0 exception: unspecified
Remote server gave us an unexpected chain_id
{"remote_chain_id":"a629fd737e0b4d8becc25e33b90984840723b38cd4430049693059f6e396b89c","chain_id":"081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6"}
th_a wallet.cpp:375 wallet_api_impl
Is the chain-id in the OP still correct for your seed node?
Using this to launch the witness node; wiped the _data_dir (there doesn't seem to be a wallet yet, probably because it is crashing):
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json ../Downloads/aug-14-test-genesis.json -d test_net -s 45.55.6.216:1776 --enable-stale-production -w \""1.6.0"\" \""1.6.1"\" \""1.6.2"\" \""1.6.3"\" \""1.6.4"\"
Is that correct? I'm having a little difficulty following the permutations between the instructions in the OP of this thread and xeroc's instructions for importing balances from 0.9.2. The OP doesn't appear to address importing from 0.9.2, which xeroc's instructions do. I'm just a bit confused by the mixture of accounts in this predefined genesis block and using balances imported from 0.9.2.
Not even sure if I should be trying to import a balance from 0.9.2. Is that what most of you testing now are doing or just using the balance from nathan's account mentioned in the OP?
#Endpoint for P2P node to listen on
p2p-endpoint = 0.0.0.0:1776
# P2P nodes to connect to on startup (may specify multiple times)
# seed-node =
# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.
# checkpoint =
# Endpoint for websocket RPC to listen on
rpc-endpoint = 127.0.0.1:8090
# Endpoint for TLS websocket RPC to listen on
# rpc-tls-endpoint =
# The TLS certificate file for this server
# server-pem =
# Password for this certificate
# server-pem-password =
# File to read Genesis State from
genesis-json = aug-14-test-genesis.json
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-14-test-genesis.json -d test_net -s 45.55.6.216:1776
./witness_node --rpc-endpoint "127.0.0.1:8090" -d test_net -s 45.55.6.216:1776
#Endpoint for P2P node to listen on
p2p-endpoint = 0.0.0.0:1776
# P2P nodes to connect to on startup (may specify multiple times)
# seed-node =
# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.
# checkpoint =
# Endpoint for websocket RPC to listen on
rpc-endpoint = 127.0.0.1:8090
# Endpoint for TLS websocket RPC to listen on
# rpc-tls-endpoint =
# The TLS certificate file for this server
# server-pem =
# Password for this certificate
# server-pem-password =
# File to read Genesis State from
genesis-json = aug-14-test-genesis.json
# JSON file specifying API permissions
# api-access =
# Enable block production, even if the chain is stale.
enable-stale-production = false
# Percent of witnesses (0-99) that must be participating in order to produce blocks
required-participation = false
# Allow block production, even if the last block was produced by the same witness.
allow-consecutive = false
# ID of witness controlled by this node (e.g. "1.6.0", quotes are required, may specify multiple times)
# witness-id =
# Tuple of [PublicKey, WIF private key] (may specify multiple times)
private-key = ["GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV","5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3"]
# Account ID to track history for (may specify multiple times)
# track-account =
# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
bucket-size = [15,60,300,3600,86400]
# How far back in time to track history for each bucket size, measured in the number of buckets (default: 1000)
history-per-size = 1000
# declare an appender named "stderr" that writes messages to the console
[log.console_appender.stderr]
stream=std_error
# declare an appender named "p2p" that writes messages to p2p.log
[log.file_appender.p2p]
filename=logs/p2p/p2p.log
# filename can be absolute or relative to this config file
# route any messages logged to the default logger to the "stderr" logger we
# declared above, if they are info level or higher
[logger.default]
level=info
appenders=stderr
# route messages sent to the "p2p" logger to the p2p appender declared above
[logger.p2p]
level=debug
appenders=p2p
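The bucket-size and history-per-size settings in the config above control how market history is aggregated into time buckets. As a rough illustration (not the actual market_history plugin code), grouping trades into fixed-size buckets looks like this:

```python
from collections import defaultdict

# Bucket widths in seconds, matching the config above.
BUCKET_SIZES = [15, 60, 300, 3600, 86400]

def bucket_trades(trades, bucket_size):
    """Group (unix_timestamp, price) trades into open/high/low/close
    records per bucket of bucket_size seconds. Illustrative sketch only;
    the real plugin also tracks volumes and keeps a bounded history
    (history-per-size buckets) per bucket width."""
    buckets = defaultdict(list)
    for ts, price in trades:
        buckets[ts - ts % bucket_size].append(price)  # bucket start time
    return {
        start: {"open": p[0], "high": max(p), "low": min(p), "close": p[-1]}
        for start, p in buckets.items()
    }

# Hypothetical trades: bucket starts for width 60 are 60, 120 and 180.
trades = [(100, 1.0), (105, 1.2), (130, 0.9), (190, 1.1)]
print(bucket_trades(trades, 60))
```

With history-per-size = 1000, each of the five bucket widths would retain at most 1000 such records, so the finest buckets cover a short recent window while the daily buckets cover years.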
./witness_node --rpc-endpoint "127.0.0.1:8090" -d test_net -s 45.55.6.216:1776
558187ms th_a witness.cpp:70 plugin_initialize ] key_id_to_wif_pair: ["GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV","5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3"]
558188ms th_a application.cpp:228 operator() ] Initializing database...
558191ms th_a main.cpp:173 main ] Exiting with error:
11 eof_exception: End Of File
unexpected end of file
{}
th_a json.cpp:430 variant_from_stream
{"data_dir":"/home/deletech/bts2.0/test_net/blockchain"}
th_a db_management.cpp:94 open
{}
th_a application.cpp:301 startup
./witness_node --rpc-endpoint "127.0.0.1:8090" --resync-blockchain -s 45.55.6.216:1776 -d test_net
2205326ms th_a witness.cpp:70 plugin_initialize ] key_id_to_wif_pair: ["GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV","5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3"]
2205327ms th_a db_management.cpp:67 wipe ] Wiping database
2205329ms th_a object_database.cpp:82 wipe ] Wiping object_database.
2205336ms th_a application.cpp:228 operator() ] Initializing database...
2205343ms th_a main.cpp:173 main ] Exiting with error:
11 eof_exception: End Of File
unexpected end of file
{}
th_a json.cpp:430 variant_from_stream
{"data_dir":"/home/deletech/bts2.0/test_net/blockchain"}
th_a db_management.cpp:94 open
{}
th_a application.cpp:301 startup
./witness_node -d new-data-dir
let it start up and then CTRL+C to kill it. Then do:
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-14-test-genesis.json -d new-data-dir -s 45.55.6.216:1776
If that doesn't work then I am completely out of ideas.
Doesn't it need the parameter '--enable-stale-production'?
My test witness missed all blocks since I haven't enabled block production.. ;)
I may have done the same but I don't think I was voted in.
How do you enable block production?
Nothing is needed beyond launching the witness_node with the witness ID and public private key pair. You can also add them to the config.ini if you prefer.
Only if you are the initial node starting a stale chain. If you are joining a live chain it's not needed.
The test net died, but I started another one, so if you want to test until another official one comes up, replace the URL in bytemaster's command with 45.55.6.216:1776
I'm in and producing blocks, witness 1.6.5155.
./witness_node --rpc-endpoint "127.0.0.1:8170" --genesis-json "aug-14-test-genesis.json" --data-dir="testnet_puppies_ob" -s "45.55.6.216:1776" --p2p-endpoint "0.0.0.0:62016"
./cli_wallet -w wallet-testnet-puppies-ob.json --chain-id 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6 -s ws://127.0.0.1:8170
./witness_node --rpc-endpoint "127.0.0.1:8190" --genesis-json "aug-14-test-genesis.json" --data-dir="testnet_puppies_prod" -s "45.55.6.216:1776" --witness-id '"1.6.5155"' --private-key '["MY_SIGNING_KEY","PRIVATE_KEY_OF_MY_SIGNING_KEY"]'
./cli_wallet -w wallet-testnet-puppies-prod.json --chain-id 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6 -s ws://127.0.0.1:8190
I don't think so.. Without '--enable-stale-production' my witness doesn't produce blocks.
It's really confusing. Maybe it's better to have 2 parameters: '--enable-block-production' and '--enable-stale-production'.
I thought the same last night, but I just set up my node earlier without the stale production and it was producing blocks when I left.
There are two arguments: -w "1.6.X" means produce for that witness; enable-stale means produce alone.
Thanks. Just restarted my witness without the enable-stale parameter and it works!
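The distinction being worked out in this exchange (-w selects which witnesses you sign for; --enable-stale-production only matters when the chain head is old) can be sketched as decision logic. This is a simplification of the witness plugin's actual checks, and the staleness threshold below is an illustrative number, not a Graphene constant:

```python
def should_produce(scheduled_witness, my_witnesses, head_block_age_sec,
                   enable_stale_production, stale_threshold_sec=60):
    """Simplified sketch of the production decision discussed above.
    my_witnesses is the set given via -w / witness-id; the real plugin
    also checks keys, participation, clocks, etc."""
    if scheduled_witness not in my_witnesses:
        return False  # not our slot: -w controls which witnesses we sign for
    if head_block_age_sec > stale_threshold_sec and not enable_stale_production:
        return False  # chain looks stale: only produce if explicitly enabled
    return True

# Joining a live chain: no --enable-stale-production needed.
assert should_produce("1.6.5155", {"1.6.5155"}, 1, False)
# Bootstrapping a dead/stale chain alone requires the flag.
assert not should_produce("1.6.0", {"1.6.0"}, 86400, False)
assert should_produce("1.6.0", {"1.6.0"}, 86400, True)
```

This also explains the confusing log line quoted below: "production is disabled" can fire purely because the chain is stale and the flag is off, not because -w was missing.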
Without '--enable-stale-production' my witness doesn't produce blocks.
It's really confusing. Maybe it's better to have 2 parameters: '--enable-block-production' and '--enable-stale-production'.
540991ms th_a witness.cpp:242 block_production_loo ] Witness 1.6.5155 production slot has arrived; generating a block now...
540992ms th_a witness.cpp:255 block_production_loo ] Generated block #66220 with timestamp 2015-08-18T00:09:01 at time 2015-08-18T00:09:01
884991ms th_a witness.cpp:191 operator() ] Not producing block because production is disabled.
But the real reason for 'production disabled' is "stale production not enabled && the chain is stale now"? I would not get confused if the log said "... because stale production is disabled".
It could be that you produced blocks on a fork. If any of you have evidence that you were on a fork (have a block that wasn't included in the main chain) then that is something I am very interested in.
2015-08-18T02:04:12 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.6 scheduled_time: 2015-08-18T02:04:13 now: 2015-08-18T02:04:13 witness.cpp:239
2015-08-18T02:04:13 th_a:invoke handle_block handle_block ] Got block #71842 from network application.cpp:342
2015-08-18T02:04:13 th_a:invoke get_item get_item ] Request for item {"item_type":1001,"item_hash":"000118a2245ede2df39a1f4ae6da4e12f62f8232"} application.cpp:437
2015-08-18T02:04:13 th_a:invoke get_item get_item ] Serving up block #71842 application.cpp:445
2015-08-18T02:04:13 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.5155 scheduled_time: 2015-08-18T02:04:14 now: 2015-08-18T02:04:14 witness.cpp:239
2015-08-18T02:04:13 th_a:Witness Block Production block_production_loo ] Witness 1.6.5155 production slot has arrived; generating a block now... witness.cpp:242
2015-08-18T02:04:13 th_a:Witness Block Production block_production_loo ] Generated block #71843 with timestamp 2015-08-18T02:04:14 at time 2015-08-18T02:04:14 witness.cpp:255
2015-08-18T02:04:13 th_a:invoke get_item get_item ] Request for item {"item_type":1001,"item_hash":"000118a30cd6a49579d36ad1217b098a544a798a"} application.cpp:437
2015-08-18T02:04:13 th_a:invoke get_item get_item ] Serving up block #71843 application.cpp:445
2015-08-18T02:04:14 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.1439 scheduled_time: 2015-08-18T02:04:15 now: 2015-08-18T02:04:15 witness.cpp:239
2015-08-18T02:04:15 th_a:Witness Block Production block_production_loo ] slot: 2 scheduled_witness: 1.6.1 scheduled_time: 2015-08-18T02:04:16 now: 2015-08-18T02:04:16 witness.cpp:239
2015-08-18T02:04:16 th_a:Witness Block Production block_production_loo ] slot: 3 scheduled_witness: 1.6.0 scheduled_time: 2015-08-18T02:04:17 now: 2015-08-18T02:04:17 witness.cpp:239
2015-08-18T02:04:17 th_a:Witness Block Production block_production_loo ] slot: 4 scheduled_witness: 1.6.5 scheduled_time: 2015-08-18T02:04:18 now: 2015-08-18T02:04:18 witness.cpp:239
2015-08-18T02:04:18 th_a:Witness Block Production block_production_loo ] slot: 5 scheduled_witness: 1.6.4 scheduled_time: 2015-08-18T02:04:19 now: 2015-08-18T02:04:19 witness.cpp:239
2015-08-18T02:04:19 th_a:Witness Block Production block_production_loo ] slot: 6 scheduled_witness: 1.6.1435 scheduled_time: 2015-08-18T02:04:20 now: 2015-08-18T02:04:20 witness.cpp:239
2015-08-18T02:04:20 th_a:invoke get_item get_item ] Request for item {"item_type":1001,"item_hash":"000118a30cd6a49579d36ad1217b098a544a798a"} application.cpp:437
2015-08-18T02:04:20 th_a:invoke get_item get_item ] Serving up block #71843 application.cpp:445
2015-08-18T02:04:20 th_a:invoke handle_block handle_block ] Got block #71844 from network application.cpp:342
2015-08-18T02:04:20 th_a:invoke handle_block handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:51 push_block
{"new_block":{"previous":"000118a3ca0d1e6e5737f248ddb53836aa6212e4","timestamp":"2015-08-18T02:04:16","witness":"1.6.1","next_secret_hash":"6449e9c56a67b58d886f2064e929f17f94fe78f1","previous_secret":"68c24af208df75b7b3d6570206492ef3bee5c5b9","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f4becadfa61a394b550ce6d086fb5d9eb3da8c883ac703048c7baee139414089f6777d1c03787d982160d23ecef7df3df3ea8713c15441247c30c4ef5568e22ac","transactions":[]}}
th_a db_block.cpp:173 _push_block application.cpp:364
2015-08-18T02:04:20 th_a:Witness Block Production block_production_loo ] slot: 7 scheduled_witness: 1.6.2 scheduled_time: 2015-08-18T02:04:21 now: 2015-08-18T02:04:21 witness.cpp:239
2015-08-18T02:04:21 th_a:invoke handle_block handle_block ] Got block #71845 from network application.cpp:342
2015-08-18T02:04:21 th_a:invoke handle_block handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:51 push_block
{"new_block":{"previous":"000118a4b95f7448500f912995feeffe47e61773","timestamp":"2015-08-18T02:04:17","witness":"1.6.0","next_secret_hash":"6449e9c56a67b58d886f2064e929f17f94fe78f1","previous_secret":"68c24af208df75b7b3d6570206492ef3bee5c5b9","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"2041523e0275f0ce506c797145c86f2b5ed5fb7d9eb9cff0f1cb9364e5a06c327e535dfd76846d56824d41ad69e502d2b88679b7060b33200128991dacf92f97ae","transactions":[]}}
th_a db_block.cpp:173 _push_block application.cpp:364
251230ms th_a application.cpp:342 handle_block ] Got block #71840 from network
252602ms th_a application.cpp:342 handle_block ] Got block #71841 from network
253600ms th_a application.cpp:342 handle_block ] Got block #71842 from network
253998ms th_a application.cpp:342 handle_block ] Got block #71843 from network
440984ms th_a application.cpp:486 get_blockchain_synop ] reference_point: 0000000000000000000000000000000000000000 number_of_blocks_after_reference_point: 0 result: ["000018a317c85eac7ed13615354fcef049951d86","000098a3e1624c807fd57d019d0dcf58844905ac","0000d8a3f2164ee15e1d2404cf5815daaf024091","0000f8a34a36120b0ccf1ed468f1e5c1a0b81239","000108a338ff000b4ea31a71938f19d6668a330e","000110a3079a6cdffeb63507c7c4a44406221c1b","000114a31f31315fe4d4f9a3c18923029b506111","000116a318c63b35c2dbdd46e674b4c49dcf0673","000117a3c2bdf2658c7d67a5f9040e65ac98a02d","00011823272051adb5851eb6617b11145a03979e","0001186341115fddf7f06780b5164e5254324f96","000118833ab2b2fc44045ebb63376313df4cf712","0001189332e4dc39800f335bbd4cf10135617a33","0001189b4b78dec8eb8efebeb5b9fab06e6647ad","0001189fcd7f19f8469b604041fd14a225ce5bae","000118a1ac5579df7e3f7952bf3889e7f6cc8413","000118a2245ede2df39a1f4ae6da4e12f62f8232","000118a30cd6a49579d36ad1217b098a544a798a"]
466784ms th_a application.cpp:486 get_blockchain_synop ] reference_point: 0000000000000000000000000000000000000000 number_of_blocks_after_reference_point: 0 result: ["000018a317c85eac7ed13615354fcef049951d86","000098a3e1624c807fd57d019d0dcf58844905ac","0000d8a3f2164ee15e1d2404cf5815daaf024091","0000f8a34a36120b0ccf1ed468f1e5c1a0b81239","000108a338ff000b4ea31a71938f19d6668a330e","000110a3079a6cdffeb63507c7c4a44406221c1b","000114a31f31315fe4d4f9a3c18923029b506111","000116a318c63b35c2dbdd46e674b4c49dcf0673","000117a3c2bdf2658c7d67a5f9040e65ac98a02d","00011823272051adb5851eb6617b11145a03979e","0001186341115fddf7f06780b5164e5254324f96","000118833ab2b2fc44045ebb63376313df4cf712","0001189332e4dc39800f335bbd4cf10135617a33","0001189b4b78dec8eb8efebeb5b9fab06e6647ad","0001189fcd7f19f8469b604041fd14a225ce5bae","000118a1ac5579df7e3f7952bf3889e7f6cc8413","000118a2245ede2df39a1f4ae6da4e12f62f8232","000118a30cd6a49579d36ad1217b098a544a798a"]
I finally managed to get it working on puppies' test net. I'm embarrassed to say what was causing all of my difficulties, so I won't! :-[
Tomorrow is another day, and I will have much better luck I'm sure.
"head_block_num": 72042.
2015-08-18T14:46:05 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Disconnecting peer 45.55.6.216:1776 because they didn't respond to my request for sync item 00000064848aa88b1476bf797ad9560c08b4b239 node.cpp:1307
"head_block_num": 113869,
"head_block_id": "0001bccd0acdf8b05c58b6cc5e6e36d717de523c",
"head_block_age": "2 seconds old",
"next_maintenance_time": "4 minutes in the future",
"chain_id": "081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6",
It seems like your node did fork and now won't accept the proper chain.
get_block 72042
{
"previous": "000119696d1d31776e6431586e14fd2e4c0b4f58",
"timestamp": "2015-08-18T02:07:59",
"witness": "1.6.1435",
"next_secret_hash": "72dcf0475fa5d0c145a235f590b5be3597c17753",
"previous_secret": "2813946aacd0bfda5541bf243f5bc47aee51e314",
"transaction_merkle_root": "0000000000000000000000000000000000000000",
"extensions": [],
"witness_signature": "201506e5034d1df8627196bc97b5293eed3efbb6391a256789ca4a6912f169b45b51045f55f9be468a2db485bfcc4952996e10d1f4be0cc1c0eef2764470649035",
"transactions": []
}
locked >>> get_block 72043
get_block 72043
{
"previous": "0001196a1eb9d0e2021fd4f6ac9fe452ea46d4b3",
"timestamp": "2015-08-18T02:08:00",
"witness": "1.6.0",
"next_secret_hash": "031a792684e0d99dab315ce6aea4fa92c85c6707",
"previous_secret": "cd722b49f39a4b7160e081ae2c26cdf71f8e9194",
"transaction_merkle_root": "0000000000000000000000000000000000000000",
"extensions": [],
"witness_signature": "1f64cf6d9e48683ed2259772c5006c0a1ec195dc280c59b5941250fa1581e8b86c0d32ed49d47d9d57cc87b3088e2f52bc15c7dc500955fa334864a3ed7b08be63",
"transactions": []
}
locked >>> get_block 72044
get_block 72044
{
"previous": "0001196bf9cf506fb01511e20c8566694b952a7d",
"timestamp": "2015-08-18T02:08:01",
"witness": "1.6.4",
"next_secret_hash": "42899548ec99624cb2102330c429562510ef09a0",
"previous_secret": "2777b474efe4774a69d716f61b5cf156deb3a232",
"transaction_merkle_root": "0000000000000000000000000000000000000000",
"extensions": [],
"witness_signature": "1f02312f66e1dcf78335d1c5ee999181cd3a0191d927d48376fa5702a493d418514d01eb530328c2dee956d5824103b8576e7974049e0d414eab142fd3ddbbe991",
"transactions": []
}
locked >>>
I'm curious if your block 72042 matches mine.
For those getting error messages about your "clock" being out of sync, the issue was a bad error message. Once every maintenance interval the blockchain skips some slots (currently configured to be 3) so that nodes can perform computational tasks that may take longer than a fraction of the normal block interval to complete. These tasks include tallying votes among other things. I have revised the error message to be a status message indicating that it is probably just a maintenance interval.
https://github.com/cryptonomex/graphene/issues/244
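To make the maintenance-interval behavior above concrete, here is a small sketch of the slot-skipping logic as described in the post. All names and the one-second block interval are illustrative assumptions, not the Graphene implementation:

```python
# Sketch of why block production pauses at a maintenance interval: the
# chain deliberately skips some slots (3, per the post above) so nodes
# can finish heavier work such as tallying votes. This skipped gap is
# what previously surfaced as a misleading "clock out of sync" error.
# Names and constants here are illustrative, not Graphene code.

BLOCK_INTERVAL = 1          # seconds per slot (assumed test-net setting)
MAINTENANCE_SKIP_SLOTS = 3  # slots skipped at each maintenance interval

def next_slot_time(head_time: int, maintenance_time: int) -> int:
    """Return the timestamp of the next slot a witness may produce in."""
    t = head_time + BLOCK_INTERVAL
    if t >= maintenance_time:
        # Maintenance hit: jump past the skipped slots.
        t = maintenance_time + MAINTENANCE_SKIP_SLOTS * BLOCK_INTERVAL
    return t

print(next_slot_time(99, 100))   # maintenance hit: slot jumps to 103
print(next_slot_time(50, 100))   # normal case: next slot is 51
```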
The network is still up and generating blocks on at least three different systems.
Looks like my node produced a #72042 block but failed to broadcast it in time (networking issue) and then got stuck. It received #72043 and the following blocks but failed to push them into the chain database, because those blocks link to a different #72042, and it never requested that #72042.
Last night, when I was able to successfully start the witness node and connect the cli wallet to it, I had to change the chain-id to what the witness reported rather than the value in the OP. Looking at puppies' post today (quoted by abit), the chain-id is back to the original in the OP.
The two values I've seen follow. Would someone explain what the chain-id is and why it keeps changing? Are we running separate tests on each chain, or is it simply a test of stamina to make sure "advanced" people can keep up with changes here? :)
1) 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6
2) a629fd737e0b4d8becc25e33b90984840723b38cd4430049693059f6e396b89c
I suspect the chain-id I connected with last night was some temporary chain puppies ran for a time. I'll use the value in the OP if that's where we're all converging. I now need to create an account, migrate a balance so it can be upgraded to lifetime membership, and then register it as a witness. Is all that covered by xeroc's doc linked in the OP? Has anything changed in that process?
For example, should I use puppies' seed IP or the one in the OP? Does that even matter? My mind is buzzing with all these details :o
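On the chain-id question: the chain id is a SHA-256 digest derived from the genesis state, so any change to the genesis produces a different chain id; that is why a temporary genesis gives a different id than the one in the OP. A rough illustration follows; hashing the raw file bytes is an assumption here, since the client hashes its own serialized genesis state, which can differ from the file bytes:

```python
# Illustration (assumption: digest over raw bytes, not the client's
# exact serialization): two genesis states that differ in any field
# yield two distinct 64-hex-character chain ids.
import hashlib

def chain_id_of(genesis_bytes: bytes) -> str:
    return hashlib.sha256(genesis_bytes).hexdigest()

a = chain_id_of(b'{"initial_timestamp": "2015-08-14T00:00:00"}')
b = chain_id_of(b'{"initial_timestamp": "2015-08-15T00:00:00"}')
assert a != b          # any genesis change produces a new chain id
print(len(a))          # 64 hex characters, like the ids quoted above
```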
get_witness delegate.verbaltech
{
"id": "1.6.1530",
"witness_account": "1.2.22307",
"signing_key": "GPH52ms1dYJko2v5vS3rCdVLzQBogjeDRc1CpkaZ4seC4J4H7Uc71",
"next_secret_hash": "5c00bd4aca04a9057c09b20b05f723f2e23deb65",
"previous_secret": "0000000000000000000000000000000000000000",
"vote_id": "1:1530",
"total_votes": 0,
"url": ""
}
You need to launch the witness node with your id 1.6.1530, your public signing key, and the corresponding private key. Init delegates have 0 votes, so you can be voted in with limited funds. Let me know when you're up and running and I can vote you in.
./witness_node --rpc-endpoint "127.0.0.1:8090" --resync-blockchain -d test_net -s 45.55.6.216:1776
./cli_wallet -w test_wallet --chain-id 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6
h_items_loop ] beginning an iteration of fetch items (0 items to fetch) node.cpp:1104
2015-08-18T16:27:33 p2p:message read_loop on_message ] handling message block_message_type 1a3a73d360660de01a4b1f9dfcc5add8d5ee3aa4 size 173 from peer 114.92.254.159:62016 node.cpp:1651
2015-08-18T16:27:33 p2p:message read_loop process_block_during ] received a block from peer 114.92.254.159:62016, passing it to client node.cpp:3087
2015-08-18T16:27:33 p2p:message read_loop process_block_during ] Successfully pushed block 118179 (id:0001cda313b6a140c0f384624e632c95fbed9804) node.cpp:3109
2015-08-18T16:27:33 p2p:message read_loop process_block_during ] client validated the block, advertising it to other peers node.cpp:3133
2015-08-18T16:27:33 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:62016: removing 0 items advertised to peer (0 left), and 0 advertised to us (82 left) peer_connection.cpp:479
2015-08-18T16:27:33 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (12 left), and 0 advertised to us (108 left) peer_connection.cpp:479
2015-08-18T16:27:33 p2p:advertise_inventory_loop advertise_inventory_ ] beginning an iteration of advertise inventory node.cpp:1175
2015-08-18T16:27:33 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: true node.cpp:1188
2015-08-18T16:27:33 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:62016: removing 0 items advertised to peer (0 left), and 0 advertised to us (82 left) peer_connection.cpp:479
2015-08-18T16:27:33 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: false node.cpp:1188
2015-08-18T16:27:33 p2p:advertise_inventory_loop advertise_inventory_ ] inventory_to_advertise: [{"item_type":1001,"item_hash":"1a3a73d360660de01a4b1f9dfcc5add8d5ee3aa4"}] node.cpp:1196
2015-08-18T16:27:33 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_advertised_to_peer.find(item_to_advertise): {"item":{"item_type":1476398304,"item_hash":"847f0000000000000000000078a30558847f0000"},"timestamp":"2016-10-13T22:37:12"} node.cpp:1200
2015-08-18T16:27:33 p2p:advertise_inventory_loop advertise_inventory_ ] advertising 0 new item(s) of 0 type(s) to peer 45.55.6.216:1776 node.cpp:1218
2015-08-18T16:27:33 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (12 left), and 0 advertised to us (108 left) peer_connection.cpp:479
2015-08-18T16:27:33 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (0 items to fetch) node.cpp:1104
2015-08-18T16:27:33 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type e8c9515d4e56d73cbb49ed5d56eedf4d6901b073 size 25 from peer 114.92.254.159:62016 node.cpp:1651
2015-08-18T16:27:33 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:62016: removing 0 items advertised to peer (0 left), and 0 advertised to us (82 left) peer_connection.cpp:479
2015-08-18T16:27:33 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 114.92.254.159:62016 node.cpp:2613
2015-08-18T16:27:33 p2p:message read_loop on_item_ids_inventor ] adding item ac3580aa78a43aa6cab5e0599c05aa65a0818117 from inventory message to our list of items to fetch node.cpp:2647
2015-08-18T16:27:33 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (1 items to fetch) node.cpp:1104
2015-08-18T16:27:33 p2p:fetch_items_loop fetch_items_loop ] requesting item ac3580aa78a43aa6cab5e0599c05aa65a0818117 from peer 114.92.254.159:62016 node.cpp:1123
2015-08-18T16:27:33 p2p:fetch_items_loop send_message ] peer_connection::send_message() enqueueing message of type 5004 for peer 114.92.254.159:62016 peer_connection.cpp:365
2015-08-18T16:27:33 p2p:fetch_items_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-18T16:27:33 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-18T16:27:33 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5004 for peer 114.92.254.159:62016 peer_connection.cpp:291
2015-08-18T16:27:33 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 114.92.254.159:62016 peer_connection.cpp:294
2015-08-18T16:27:33 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-18T16:27:33 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-18T16:27:34 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 273c1c45d465267877955f6a74879444f50acb87 size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-18T16:27:34 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (12 left), and 1 advertised to us (107 left) peer_connection.cpp:479
2015-08-18T16:27:34 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-18T16:27:34 p2p:message read_loop on_item_ids_inventor ] adding item 68653fd811768cb6bb2ef6a63d73bdf9298beb95 from inventory message to our list of items to fetch node.cpp:2647
2015-08-18T16:27:34 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (1 items to fetch) node.cpp:1104
2015-08-18T16:27:34 p2p:fetch_items_loop fetch_items_loop ] requesting item 68653fd811768cb6bb2ef6a63d73bdf9298beb95 from peer 45.55.6.216:1776 node.cpp:1123
2015-08-18T16:27:34 p2p:fetch_items_loop send_message ] peer_connection::send_message() enqueueing message of type 5004 for peer 45.55.6.216:1776 peer_connection.cpp:365
2015-08-18T16:27:34 p2p:fetch_items_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-18T16:27:34 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-18T16:27:34 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5004 for peer 45.55.6.216:1776 peer_connection.cpp:291
2015-08-18T16:27:34 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 45.55.6.216:1776 peer_connection.cpp:294
2015-08-18T16:27:34 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-18T16:27:34 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-18T16:27:34 p2p:message read_loop on_message ] handling message block_message_type 68653fd811768cb6bb2ef6a63d73bdf9298beb95 size 173 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-18T16:27:34 p2p:message read_loop process_block_during ] received a block from peer 45.55.6.216:1776, passing it to client node.cpp:3087
2015-08-18T16:27:34 p2p:message read_loop process_block_during ] Failed to push block 118181 (id:0001cda58068604136998b1998eb09a3b8b3fa56), client rejected block sent by peer node.cpp:3198
2015-08-18T16:27:34 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (0 items to fetch) node.cpp:1104
2015-08-18T16:27:34 p2p:message read_loop on_message ] handling message block_message_type ac3580aa78a43aa6cab5e0599c05aa65a0818117 size 172 from peer 114.92.254.159:62016 node.cpp:1651
2015-08-18T16:27:34 p2p:message read_loop process_block_during ] received a block from peer 114.92.254.159:62016, passing it to client node.cpp:3087
2015-08-18T16:27:34 p2p:message read_loop process_block_during ] Successfully pushed block 118180 (id:0001cda4fff8b59d6242e9cfbb706492a6b250ea) node.cpp:3109
2015-08-18T16:27:34 p2p:message read_loop process_block_during ] client validated the block, advertising it to other peers node.cpp:3133
2015-08-18T16:27:34 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:62016: removing 0 items advertised to peer (0 left), and 0 advertised to us (83 left) peer_connection.cpp:479
2015-08-18T16:27:34 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (12 left), and 0 advertised to us (108 left) peer_connection.cpp:479
2015-08-18T16:27:34 p2p:advertise_inventory_loop advertise_inventory_ ] beginning an iteration of advertise inventory node.cpp:1175
2015-08-18T16:27:34 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: true node.cpp:1188
2015-08-18T16:27:34 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:62016: removing 0 items advertised to peer (0 left), and 0 advertised to us (83 left) peer_connection.cpp:479
2015-08-18T16:27:34 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: false node.cpp:1188
2015-08-18T16:27:34 p2p:advertise_inventory_loop advertise_inventory_ ] inventory_to_advertise: [{"item_type":1001,"item_hash":"ac3580aa78a43aa6cab5e0599c05aa65a0818117"}] node.cpp:1196
2015-08-18T16:27:34 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_advertised_to_peer.find(item_to_advertise): {"item":{"item_type":1476398304,"item_hash":"847f0000000000000000000078a30558847f0000"},"timestamp":"2016-10-13T22:37:12"} node.cpp:1200
2015-08-18T16:27:34 p2p:advertise_inventory_loop advertise_inventory_ ] advertising 0 new item(s) of 0 type(s) to peer 45.55.6.216:1776 node.cpp:1218
2015-08-18T16:27:34 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (12 left), and 0 advertised to us (108 left) peer_connection.cpp:479
2015-08-18T16:27:34 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (0 items to fetch) node.cpp:1104
2015-08-18T16:27:35 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 7961814825b2c7f2b262b8e14426da647ff4b137 size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-18T16:27:35 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (12 left), and 1 advertised to us (107 left) peer_connection.cpp:479
2015-08-18T16:27:35 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-18T16:27:35 p2p:message read_loop on_item_ids_inventor ] adding item f2420b4d7371ddf4aa1cb07d5561a0b65e6f1014 from inventory message to our list of items to fetch node.cpp:2647
2015-08-18T16:27:35 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (1 items to fetch) node.cpp:1104
2015-08-18T16:27:35 p2p:fetch_items_loop fetch_items_loop ] requesting item f2420b4d7371ddf4aa1cb07d5561a0b65e6f1014 from peer 45.55.6.216:1776 node.cpp:1123
2015-08-18T16:27:35 p2p:fetch_items_loop send_message ] peer_connection::send_message() enqueueing message of type 5004 for peer 45.55.6.216:1776 peer_connection.cpp:365
2015-08-18T16:27:35 p2p:fetch_items_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-18T16:27:35 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-18T16:27:35 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5004 for peer 45.55.6.216:1776 peer_connection.cpp:291
2015-08-18T16:27:35 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 45.55.6.216:1776 peer_connection.cpp:294
2015-08-18T16:27:35 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-18T16:27:35 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-18T16:27:35 p2p:message read_loop on_message ] handling message block_message_type f2420b4d7371ddf4aa1cb07d5561a0b65e6f1014 size 172 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-18T16:27:35 p2p:message read_loop process_block_during ] received a block from peer 45.55.6.216:1776, passing it to client node.cpp:3087
2015-08-18T16:27:35 p2p:message read_loop process_block_during ] Failed to push block 118182 (id:0001cda68df8d489ac600fa93f814b7cf7a18a94), client rejected block sent by peer node.cpp:3198
2015-08-18T16:27:35 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (0 items to fetch) node.cpp:1104
2015-08-18T16:27:35 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 273c1c45d465267877955f6a74879444f50acb87 size 25 from peer 114.92.254.159:62016 node.cpp:1651
2015-08-18T16:27:35 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:62016: removing 0 items advertised to peer (0 left), and 0 advertised to us (83 left) peer_connection.cpp:479
2015-08-18T16:27:35 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 114.92.254.159:62016 node.cpp:2613
2015-08-18T16:27:35 p2p:message read_loop on_item_ids_inventor ] adding item 68653fd811768cb6bb2ef6a63d73bdf9298beb95 from inventory message to our list of items to fetch node.cpp:2647
2015-08-18T16:27:35 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (1 items to fetch) node.cpp:1104
2015-08-18T16:27:35 p2p:fetch_items_loop fetch_items_loop ] requesting item 68653fd811768cb6bb2ef6a63d73bdf9298beb95 from peer 114.92.254.159:62016 node.cpp:1123
2015-08-18T16:27:35 p2p:fetch_items_loop send_message ] peer_connection::send_message() enqueueing message of type 5004 for peer 114.92.254.159:62016 peer_connection.cpp:365
2015-08-18T16:27:35 p2p:fetch_items_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-18T16:27:35 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-18T16:27:35 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5004 for peer 114.92.254.159:62016 peer_connection.cpp:291
2015-08-18T16:27:35 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 114.92.254.159:62016 peer_connection.cpp:294
2015-08-18T16:27:35 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-18T16:27:35 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-18T16:27:35 p2p:message read_loop on_message ] handling message block_message_type 68653fd811768cb6bb2ef6a63d73bdf9298beb95 size 173 from peer 114.92.254.159:62016 node.cpp:1651
2015-08-18T16:27:35 p2p:message read_loop process_block_during ] received a block from peer 114.92.254.159:62016, passing it to client node.cpp:3087
2015-08-18T16:27:35 p2p:message read_loop process_block_during ] Successfully pushed block 118181 (id:0001cda58068604136998b1998eb09a3b8b3fa56) node.cpp:3109
2015-08-18T16:27:35 p2p:message read_loop process_block_during ] client validated the block, advertising it to other peers node.cpp:3133
2015-08-18T16:27:35 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:62016: removing 0 items advertised to peer (0 left), and 0 advertised to us (84 left) peer_connection.cpp:479
2015-08-18T16:27:35 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (12 left), and 0 advertised to us (108 left) peer_connection.cpp:479
2015-08-18T16:27:35 p2p:advertise_inventory_loop advertise_inventory_ ] beginning an iteration of advertise inventory node.cpp:1175
2015-08-18T16:27:35 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: true node.cpp:1188
2015-08-18T16:27:35 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:62016: removing 0 items advertised to peer (0 left), and 0 advertised to us (84 left) peer_connection.cpp:479
2015-08-18T16:27:35 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: false node.cpp:1188
2015-08-18T16:27:35 p2p:advertise_inventory_loop advertise_inventory_ ] inventory_to_advertise: [{"item_type":1001,"item_hash":"68653fd811768cb6bb2ef6a63d73bdf9298beb95"}] node.cpp:1196
2015-08-18T16:27:35 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_advertised_to_peer.find(item_to_advertise): {"item":{"item_type":1476398304,"item_hash":"847f0000000000000000000078a30558847f0000"},"timestamp":"2016-10-13T22:37:12"} node.cpp:1200
2015-08-18T16:27:35 p2p:advertise_inventory_loop advertise_inventory_ ] advertising 0 new item(s) of 0 type(s) to peer 45.55.6.216:1776 node.cpp:1218
2015-08-18T16:27:35 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (12 left), and 0 advertised to us (108 left) peer_connection.cpp:479
2015-08-18T16:27:35 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (0 items to fetch) node.cpp:1104
Thanks puppies. I will review those posts.
I just now started a witness node using:
./witness_node --rpc-endpoint "127.0.0.1:8090" --resync-blockchain -d test_net -s 45.55.6.216:1776
./cli_wallet -w test_wallet --chain-id 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6
and the chain-id reported by the cli_wallet in the info api call is "a629fd737e0b4d8becc25e33b90984840723b38cd4430049693059f6e396b89c", not what was provided on the command line. Is that because I'm connecting to your seed?
Again, just trying to be on the same page as everyone testing here, but I'm not sure I am, which is why I am asking all these questions.
Also, is it necessary to do set_password every time you start the wallet? Where is the wallet file stored? It doesn't seem to be in the same folder as cli_wallet, and the "cli wallet cookbook" didn't answer that. Why do I have to set the password every time I start the cli_wallet?
Witness questions
I am surprised to hear delegate.verbaltech is already registered as a witness. When did that happen? I was under the distinct impression from reading xeroc's doc that getting to that stage took a number of steps, like migrating balances, upgrading to lifetime member status etc.
I found the info on how to specify the witness id. There may be a problem with using the signing keys for delegate.verbaltech; I'm sending you a PM about that issue, which may be the last hurdle to jump over.
#!/bin/bash
cd ~/bts2.0/graphene/programs/witness_node;
./witness_node --rpc-endpoint "127.0.0.1:8090" --resync-blockchain --witness-id '"1.6.1530"' --private-key '["MY_SIGNING_KEY","PRIVATE_KEY_OF_MY_SIGNING_KEY"]' -d test_net -s 45.55.6.216:1776
#!/bin/bash
cd ~/bts2.0/graphene/programs/cli_wallet;
./cli_wallet -w test_wallet --chain-id 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6
Try adding --genesis-json aug-14-test-genesis.json, so the final line in your script will be:
./witness_node --rpc-endpoint "127.0.0.1:8090" --resync-blockchain --witness-id '"1.6.1530"' --private-key '["MY_SIGNING_KEY","PRIVATE_KEY_OF_MY_SIGNING_KEY"]' -d test_net -s 45.55.6.216:1776 --genesis-json aug-14-test-genesis.json
Those who are reporting that they are stuck on a fork: I am very interested in hearing more details. If I am reading the logs correctly, in my case it was not a generation issue but a propagation issue. It looks like my node received block 118181 before it got 118180. It tried to push 118181 and failed, then got 118180 and pushed it successfully, but never went back and retried 118181, so when 118182 came along it also failed to push.
If my understanding is correct, it will fork if the block you produced doesn't get included in the final chain. I will try to replicate this behavior in a test.
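The failure mode described above can be reproduced in miniature: a chain database that rejects blocks whose parent is unknown, and never retries them, gets stuck as soon as one block arrives out of order. This is an illustrative sketch (heights scaled down, names invented), not Graphene code:

```python
# Minimal sketch of the out-of-order push failure described above:
# blocks with a missing parent are dropped and never retried, so every
# later block also fails and the node stalls.

class NaiveChain:
    def __init__(self):
        self.blocks = {0: "genesis"}   # height -> block
        self.head = 0

    def push(self, height: int) -> bool:
        if height - 1 not in self.blocks:
            return False               # parent missing: block is discarded
        self.blocks[height] = f"block-{height}"
        self.head = max(self.head, height)
        return True

chain = NaiveChain()
# 118181 arrives before 118180 (scaled down to 2 before 1):
print(chain.push(2))   # False -- parent 1 unknown, block discarded
print(chain.push(1))   # True  -- but 2 is never retried...
print(chain.push(3))   # False -- ...so 3 also fails; the node is stuck
```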
There could be an issue in the code. Maybe the current code just throws an exception in this case. It might be better to temporarily save the new block and request its previous block if it's not the same as the one in the current fork.
Just restarted my node and catching up.
The node is in China; network conditions will be OK for the next 8 hours or so (midnight), and may go bad again after that.
//update: my node should have produced a #72042 block but not #72043.
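The fix suggested above (park an unlinked block and request its missing parent) can be sketched like this. Everything here is illustrative, not the actual Graphene fork_database:

```python
# Sketch of the proposed fix: instead of discarding a block whose parent
# is unknown, keep it in an orphan pool, record the missing parent so it
# can be requested from peers, and re-apply orphans once the gap closes.

class RetryingChain:
    def __init__(self):
        self.blocks = {0: "genesis"}   # height -> block
        self.orphans = set()           # heights waiting on a parent
        self.missing = set()           # parents to request from peers

    def push(self, height: int) -> bool:
        self.missing.discard(height)   # this block is no longer missing
        if height - 1 not in self.blocks:
            self.orphans.add(height)   # park it instead of dropping it
            self.missing.add(height - 1)
            return False
        self.blocks[height] = f"block-{height}"
        # A newly linked block may unblock a parked orphan:
        if height + 1 in self.orphans:
            self.orphans.discard(height + 1)
            self.push(height + 1)
        return True

chain = RetryingChain()
chain.push(2)                  # parent unknown: parked, parent 1 requested
print(sorted(chain.missing))   # [1]
chain.push(1)                  # gap closes; orphan 2 is re-applied
print(2 in chain.blocks)       # True: 2 made it in after all
```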
Does that make sense?
There are no private keys or anything like that in the p2p.log, is there? I just want to double check before I post.
https://www.dropbox.com/s/q4abwrm8c96wxtg/p2p.log.fork.8.18.15?dl=0
The trouble seems to start around 16:27:33
I'm getting a whole lot of this on my node running all of the init witnesses:
599260ms th_a application.cpp:364 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:51 push_block
{"new_block":{"previous":"0001d2f41d0e19149625d60d536830b59d41311e","timestamp":"2015-08-18T16:53:10","witness":"1.6.1439","next_secret_hash":"f94d192447e29642fd594b22a727c5633b23229e","previous_secret":"76f3d6853731d668c0ca1444c69ee2c4105466a8","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"206adacd5a039ab07d1fd430edb4217c87f0ef29cb85ebe9abf597b264cc961a670c0fd4ddff450fc1bd8b0f1123c147e9bc4af9d4d3738fac80178ca543747ef1","transactions":[]}}
th_a db_block.cpp:173 _push_block
599263ms th_a application.cpp:364 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:51 push_block
{"new_block":{"previous":"0001d2f5f40d076e7ed6a39cfd843cf6a239f1c9","timestamp":"2015-08-18T16:53:20","witness":"1.6.1439","next_secret_hash":"8c6ed73e105
dfaad5f6334962c4c0438e0990f2b","previous_secret":"57abab205a18e20ecede92a78aaa06565447d370","transaction_merkle_root":"000000000000000000000000000000000000000
0","extensions":[],"witness_signature":"1f3a5728cc46e0e9d418ace7e1647a28c4a8f346a93d513c23934f87c6e367f5fc778c8d9b8b8a3d790a390d9b47ea9a7ec874b604f1809c619a5f
d9dda3fcc70b","transactions":[]}}
th_a db_block.cpp:173 _push_block
599265ms th_a application.cpp:364 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
itr != _index.get<block_id>().end():
{}
I'm getting a whole lot of this on my node running all of the init witnesses (same assert_exception log as above). All of it seems to be related to lafonas witness. I can pull logs, or if it would help I can just PM bytemaster with the login credentials of the machine.
Slightly off topic, but is there a good resource on the proper protocol for posting these issues directly to github? I could post directly, but I'm such a github noob that I'm not sure I wouldn't be making things worse than just posting here.
Well done puppies. I think we are not that bad as test witnesses :P
Everyone who has participated in one of these test networks and successfully registered a witness, please post your witness ID and BTS account to this thread and I'll send you 1000 brownie pts.
1.6.1435 puppies
This testing has been very helpful. I am actively working on a potential fix for the issues found.
witness ID in OP's chain: "1.6.5156"
Try running a node in an unstable network? With limited bandwidth, heavy networking load, high latency, etc.
+1 You guys have done a great job. If we could find a way to reliably reproduce this with just a few nodes that would be very helpful.
cd ~/graphene/programs/witness_node
wget https://www.dropbox.com/s/zxp2qg0rc9sk1kc/aug-14-test-genesis.json
screen
5. Run the witness:
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-14-test-genesis.json -d test_net_puppies -s 45.55.6.216:1776
Ctrl A Ctrl D
7. Extract your wif keys for user and balances as per xeroc's instructions: https://github.com/cryptonomex/graphene/wiki/Howto-become-an-active-witness-in-BitShares-2.0
cd ~/graphene/programs/cli_wallet
9. Run the cli:
./cli_wallet -w test_wallet_puppies --chain-id 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6
Note:
screen -r
13. Exit your witness with Ctrl C, then relaunch with your witness ID and signing key:
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-14-test-genesis.json -d test_net_puppies -s 45.55.6.216:1776 --witness-id '"1.6.5156"' --private-key '["GPH6JhL..your.signing.key..bc5mWyCvERV3coy","5K..your.secret..a"]'
15. See your witness producing blocks. See the comment on github: https://github.com/cryptonomex/graphene/issues/247#issuecomment-132349244
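Step 13 is the easiest one to get wrong because of the nested quoting around the witness ID and key pair. The sketch below assembles that command from variables and echoes it for review rather than executing it; the witness ID and key strings are the placeholders from the step above and must be replaced with your own values:

```shell
#!/bin/sh
# Sketch: assemble the step-13 witness relaunch command for review.
# WITNESS_ID, PUBKEY and WIF are placeholders -- substitute your own values.
WITNESS_ID='1.6.5156'
PUBKEY='GPH6JhL..your.signing.key..bc5mWyCvERV3coy'
WIF='5K..your.secret..a'
DATADIR=test_net_puppies
SEED=45.55.6.216:1776

# Note the quoting: --witness-id wants '"1.6.x"' and --private-key wants
# a JSON array of [public, private] wrapped in single quotes.
CMD="./witness_node --rpc-endpoint 127.0.0.1:8090 \
--genesis-json aug-14-test-genesis.json -d $DATADIR -s $SEED \
--witness-id '\"$WITNESS_ID\"' --private-key '[\"$PUBKEY\",\"$WIF\"]'"

# Echoed rather than executed so you can review it before launching.
echo "$CMD"
```

Removing the final `echo` (and running the string with `eval`) would actually launch the node; keeping it as a dry run makes the quoting mistakes visible first.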
Do you have the p2p logs for this?
asset_id_type test_asset_id = db.get_index<asset_object>().get_next_id();
asset_create_operation creator;
creator.issuer = account_id_type();
creator.fee = asset();
creator.symbol = "ADVANCED";
creator.common_options.max_supply = 100000000;
creator.precision = 2;
creator.common_options.market_fee_percent = GRAPHENE_MAX_MARKET_FEE_PERCENT/100; /*1%*/
creator.common_options.issuer_permissions = ASSET_ISSUER_PERMISSION_MASK & ~(disable_force_settle|global_settle);
creator.common_options.flags = ASSET_ISSUER_PERMISSION_MASK & ~(disable_force_settle|global_settle|transfer_restricted);
creator.common_options.core_exchange_rate = price({asset(2),asset(1,1)});
creator.common_options.whitelist_authorities = creator.common_options.blacklist_authorities = {account_id_type()};
sudo tc qdisc add dev eth0 root netem delay 1000ms
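The single netem delay rule above can be extended with jitter and packet loss to better emulate the unstable-network conditions described earlier. This is only a sketch: the interface name eth0 is an assumption for your machine, and the rules are built as text and printed rather than applied, so it runs unprivileged (pipe the output to `sudo sh` to actually apply them):

```shell
#!/bin/sh
# Sketch: emulate an unstable network with tc/netem (root required to apply).
# IFACE=eth0 matches the command above but is an assumption for your machine.
IFACE=eth0

# Build the rule set as text first; pipe the printed lines to `sudo sh`
# to apply them for real.
CMDS="tc qdisc add dev $IFACE root netem delay 1000ms 200ms loss 1%
tc qdisc change dev $IFACE root netem delay 500ms
tc qdisc del dev $IFACE root netem"
printf '%s\n' "$CMDS"
```

The `add` line combines a 1000ms delay with 200ms of jitter and 1% loss, `change` softens the rule without removing it, and `del` restores normal networking when the test is done.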
betax awesome write up! Thank you
First RPi tests: the old 256 MB pi runs out of memory and ends up using lots of swap (no surprise there...) I have a new quad core rpi2 with 1 GB ram to try next, I just need a microsd card.
Everyone who has participated in one of these test network and successfully registered a witness, please post your witness ID and BTS account to this thread and I'll send you 1000 brownie pts.
This testing has been very helpful. I am actively working on a potential fix for the issues found.
I think this number is me.. Which chain are you in?
Registered but not sure if it got voted in with the funds I had :) .
1.6.5155
I checked out branch aug-17-testnet, but can't connect to the network:
2015-08-19T00:46:34 p2p:message read_loop on_closing_connectio ] Peer 104.200.28.117:61705 is disconnecting us because: You are on a different chain from me node.cpp:2681
2015-08-19T00:46:34 p2p:message read_loop read_loop ] disconnected 0 exception: unspecified
sorry, but which branch of graphene are you using?
I'm still running the aug 14th test.
I have just built the aug 17th test net and when I try to launch I get a different chain id. I'm not sure how the chain id is set. I thought it was from the genesis, but I am launching with the same genesis and getting a different result.
yeah. I'll keep it going until bm posts a new one.
OMG, I just deleted my aug-17 branch and started building the aug-14 branch (based on my guess). Will you maintain the aug-14 testnet?
I think that I'm in aug-14 testnet. witness id 1.6.1446
yup I see you.
Machine type: n1-standard-1 (1 vCPU, 3.75 GB memory)
CPU platform: Intel Ivy Bridge
Zone: asia-east1-c
2071000ms th_a witness.cpp:242 block_production_loo ] Witness 1.6.1446 production slot has arrived; generating a block now...
2071002ms th_a db_block.cpp:167 _push_block ] Failed to push new block:
3070000 undo_database_exception: undo database exception
The database does not have enough undo history to support a blockchain with so many missed blocks. Please add a checkpoint if you would like to continue applying blocks beyond this point.
{"recently_missed":1431,"max_undo":1000}
th_a db_update.cpp:68 update_global_dynamic_data
{"next_block.block_num()":150301}
th_a db_block.cpp:448 _apply_block
2071003ms th_a witness.cpp:265 block_production_loo ] Got exception while generating block:
3070000 undo_database_exception: undo database exception
The database does not have enough undo history to support a blockchain with so many missed blocks. Please add a checkpoint if you would like to continue applying blocks beyond this point.
{"recently_missed":1431,"max_undo":1000}
th_a db_update.cpp:68 update_global_dynamic_data
{"next_block.block_num()":150301}
th_a db_block.cpp:448 _apply_block
{"new_block":{"previous":"00024b1c3d09ca1f5830e3e5afa94ad35ef5bd8e","timestamp":"2015-08-19T03:34:31","witness":"1.6.1446","next_secret_hash":"2618a50760983d98ab43df78cd9a007d2e7eaed7","previous_secret":"e93b540ed63a628df6ef29be551fd9354dac7bed","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"20767d9f7cf2cf5c83d2540b22943e052a5b0d0bb22af5485462a72d330efe82703a5fe6cef4276f0b5b3f76ea419d87901cefef89c5878ec99add42aa24ab7071","transactions":[]}}
th_a db_block.cpp:173 _push_block
{"witness_id":"1.6.1446"}
th_a db_block.cpp:312 _generate_block
Can you post what commit works so that if anyone else wants to join in the mean time they can?
bf47a1610c4aef7e25592aa42ecd4d4ae83a2b5f works for sure.
{"new_block":{"previous":"000220b6aba8787154761a665409382bbdfd0e8e","timestamp":"2015-08-18T23:31:05","witness":"1.6.1","next_secret_hash":"51d77f1f4ae0f634c379b4c2ebe86281e4e6bbba","previous_secret":"302b516fb1585c52d55e4160217cc87bcdea1839","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f5f10bb3728f3b4e39bc173d01481859299544c25e66d5784e8197909aa42739f262f20c474c2c377e9ed60a32dfeafedcb4137b7079395cb544d6ab4b71bf8db","transactions":[]}}
th_a db_block.cpp:173 _push_block
1866237ms th_a application.cpp:342 handle_block ] Got block #139448 from network
1866238ms th_a application.cpp:364 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:51 push_block
{"new_block":{"previous":"000220b7468dcfdf65df858c2752c938f0bfb1e1","timestamp":"2015-08-18T23:31:06","witness":"1.6.5","next_secret_hash":"51d77f1f4ae0f634c379b4c2ebe86281e4e6bbba","previous_secret":"302b516fb1585c52d55e4160217cc87bcdea1839","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f571ecd28616e8de828c48647a217bfb52bd28a4c4fc4ab9b5c6ba7277d5d12fc30313342729b0abfc1e3ced296f51539e31c240fb57a39cdbbb0be19956bfee7","transactions":[]}}
th_a db_block.cpp:173 _push_block
1867002ms th_a witness.cpp:239 block_production_loo ] slot: 1667 scheduled_witness: 1.6.5155 scheduled_time: 2015-08-18T23:31:07 now: 2015-08-18T23:31:07
1867007ms th_a application.cpp:342 handle_block ] Got block #139448 from network
1867007ms th_a application.cpp:364 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:51 push_block
{"new_block":{"previous":"000220b7468dcfdf65df858c2752c938f0bfb1e1","timestamp":"2015-08-18T23:31:06","witness":"1.6.5","next_secret_hash":"51d77f1f4ae0f634c379b4c2ebe86281e4e6bbba","previous_secret":"302b516fb1585c52d55e4160217cc87bcdea1839","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f571ecd28616e8de828c48647a217bfb52bd28a4c4fc4ab9b5c6ba7277d5d12fc30313342729b0abfc1e3ced296f51539e31c240fb57a39cdbbb0be19956bfee7","transactions":[]}}
th_a db_block.cpp:173 _push_block
witness_node: /home/azureuser/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Aborted (core dumped)
Just woke up to this, witness crashed. (same crash log as above)
Me too. Stuck at 160044.
I have this message (now solved but I think I should report). (same undo_database_exception log as above)
Looks like you're on a fork. Try resync. Don't add parameter '--enable-stale-production'.
The following is what I have done
- Start witness_node and sync blockchain
- Vote for 1.6.1446, close cli_wallet
- Unclean shutdown (Ctrl+C, Ctrl+C)
- Delete object_id(?) and witness_node dir
- Start witness_node
- After syncing, this problem begins to happen
unlocked >>> info
info
{
"head_block_num": 161438,
"head_block_id": "0002769e3c40d48424a73da31340397dcb3ca150",
"head_block_age": "2 seconds old",
"next_maintenance_time": "4 minutes in the future",
"chain_id": "081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6",
"active_witnesses": [
"1.6.0",
"1.6.1",
"1.6.2",
"1.6.3",
"1.6.4",
"1.6.1435",
"1.6.1439",
"1.6.1446",
"1.6.5155",
"1.6.5156"
],
"active_committee_members": [
"1.5.10",
"1.5.0",
"1.5.1",
"1.5.2",
"1.5.3",
"1.5.4",
"1.5.5",
"1.5.6",
"1.5.7",
"1.5.8"
],
"entropy": "64d465fa11e74ebc2148dbe6911f6c6f0d3b79ad"
}
network is still up, but we are running at 70%.
I restarted and caught up. How to see that 70%? (info output above)
I just watch my witness node and count.
3312060ms th_a application.cpp:342 handle_block ] Got block #163304 from network
3313000ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.0 scheduled_time: 2015-08-19T07:55:13 now: 2015-08-19T07:55:13
3313060ms th_a application.cpp:342 handle_block ] Got block #163305 from network
3314000ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.4 scheduled_time: 2015-08-19T07:55:14 now: 2015-08-19T07:55:14
3314061ms th_a application.cpp:342 handle_block ] Got block #163306 from network
3315000ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.5156 scheduled_time: 2015-08-19T07:55:15 now: 2015-08-19T07:55:15
3316000ms th_a witness.cpp:239 block_production_loo ] slot: 2 scheduled_witness: 1.6.1439 scheduled_time: 2015-08-19T07:55:16 now: 2015-08-19T07:55:16
3316267ms th_a application.cpp:342 handle_block ] Got block #163307 from network
3317000ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.1435 scheduled_time: 2015-08-19T07:55:17 now: 2015-08-19T07:55:17
3317001ms th_a witness.cpp:242 block_production_loo ] Witness 1.6.1435 production slot has arrived; generating a block now...
3317002ms th_a witness.cpp:255 block_production_loo ] Generated block #163308 with timestamp 2015-08-19T07:55:17 at time 2015-08-19T07:55:17
3317041ms th_a application.cpp:437 get_item ] Request for item {"item_type":1001,"item_hash":"00027decc0281ab06abb0033a6e8863aa5464674"}
3317041ms th_a application.cpp:445 get_item ] Serving up block #163308
3318000ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.2 scheduled_time: 2015-08-19T07:55:18 now: 2015-08-19T07:55:18
3318057ms th_a application.cpp:342 handle_block ] Got block #163309 from network
3319000ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.1446 scheduled_time: 2015-08-19T07:55:19 now: 2015-08-19T07:55:19
3320000ms th_a witness.cpp:239 block_production_loo ] slot: 2 scheduled_witness: 1.6.5155 scheduled_time: 2015-08-19T07:55:20 now: 2015-08-19T07:55:20
3321000ms th_a witness.cpp:239 block_production_loo ] slot: 3 scheduled_witness: 1.6.0 scheduled_time: 2015-08-19T07:55:21 now: 2015-08-19T07:55:21
and I watch for the white lines in between the orange ones.
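Counting lines by eye can be automated. The sketch below estimates participation from witness_node output, assuming (as the pasted log suggests) that `block_production_loo` prints a "slot: N" line once per second and that N resets to 1 whenever the previous slot was filled, so the share of "slot: 1" lines approximates the fill rate. The embedded five-line log is a trimmed sample of the output above; replace the here-doc with something like `tail -n 1000 witness.log`:

```shell
#!/bin/sh
# Sketch: rough participation estimate from block_production_loo lines.
# The here-doc is sample data; feed your real node log instead.
LOG=$(cat <<'EOF'
3313000ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.0
3314000ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.4
3315000ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.5156
3316000ms th_a witness.cpp:239 block_production_loo ] slot: 2 scheduled_witness: 1.6.1439
3317000ms th_a witness.cpp:239 block_production_loo ] slot: 1 scheduled_witness: 1.6.1435
EOF
)
# A "slot: 1" line means the previous slot was filled; higher numbers
# mean slots are going by with no block, i.e. missed production.
RATE=$(printf '%s\n' "$LOG" | awk '/slot:/ {
    total++
    for (i = 1; i <= NF; i++) if ($i == "slot:" && $(i+1) == 1) filled++
} END { printf "%d", 100 * filled / total }')
echo "approx participation: ${RATE}%"
```

With the embedded sample this prints `approx participation: 80%` (4 of 5 slot lines are "slot: 1"); on a healthy net the number should sit near 100.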
get_witness delegate.ihashfury
{
"id": "1.6.1504",
"witness_account": "1.2.22277",
@mudshark79, you need to upgrade to a lifetime member first:
upgrade_account <accountname> true
Is this because the account is a lifetime member from being in the founder block?
Lifetime member is different from being a founder!
How much is the fee to create a witness, or is creation free and only the voting then needs funding? A little confused about all this right now...
You need funds (worthless CORE in a testnet).
Several other questions arise around running the node which maybe are common ground for the experienced delegates. Is there a place to ask them (don't want to spam this thread) - should I start a new one, or is there someone that can answer via PM?
Questions belong in the forum. Just post them here, or maybe better start a new one.
The firey death of the test net stopped me getting voted in :P (get_witness delegate.ihashfury output above)
Is there an ETA for a new official test net?
2295235ms th_a main.cpp:117 main ] Writing new config file at /home/james/data/github/graphene/programs/witness_node/.graphene/config.ini
2295236ms th_a witness.cpp:70 plugin_initialize ] key_id_to_wif_pair: ["GPH6MRyAjQqweee","5KwrPbwdL6PhXuwoho"]
2295237ms th_a application.cpp:228 operator() ] Initializing database...
2321896ms th_a thread.cpp:95 thread ] name:ntp tid:139671886788352
2321900ms ntp ntp.cpp:77 request_now ] resolving... ["pool.ntp.org",123]
2321902ms th_a thread.cpp:95 thread ] name:p2p tid:139671857415936
2321909ms th_a application.cpp:129 reset_p2p_node ] Configured p2p node to listen on 0.0.0.0:33569
2321910ms th_a witness.cpp:143 plugin_startup ] No witnesses configured! Please add witness IDs and private keys to configuration.
2321910ms th_a main.cpp:165 main ] Started witness node on a chain with 0 blocks.
2321910ms th_a main.cpp:166 main ] Chain ID is a629fd737e0b4d8becc25e33b90984840723b38cd4430049693059f6e396b89c
2321926ms ntp ntp.cpp:81 request_now ] sending request to 96.44.142.5:123
2321971ms ntp ntp.cpp:147 read_loop ] received ntp reply from 96.44.142.5:123
2321971ms ntp ntp.cpp:161 read_loop ] ntp offset: -7788, round_trip_delay 45567
I'm there and producing.
Try deleting your witness data directory (i called mine something like testnet_puppies), and restart, that might help.
You are correct, there are not many people at the moment, consider this as the trial and error setting yourself up before the big testnet.
witness_node --rpc-endpoint "127.0.0.1:8090" --p2p-endpoint "127.0.0.1:61705" --genesis-json aug-14-test-genesis.json -d test_net_puppies -s 45.55.6.216:1776
rm test_net_puppies/ -fr
Chain ID is a629fd737e0b4d8becc25e33b90984840723b38cd4430049693059f6e396b89c
Another way to specify the incoming port is adding the parameter --rpc-endpoint 0.0.0.0:your_port.
Got it resynced and am producing blocks again. One of my remaining questions is:
Does a NAT situation hinder smooth operation? I can set the P2P bind address in the config; is there such a thing as a public address setting? (It's clear that running a node "at home" is not for production use, but this was my first try - I will prepare a VPS for the next version.) As I see nodes being connected on my p2p incoming port I assume it somehow works anyway, but I would like to get a better understanding...
Is there a wallet command for listing nodes in detail? I did netstat to get an overall status...
Chain ID is a629fd737e0b4d8becc25e33b90984840723b38cd4430049693059f6e396b89c
cli_wallet will not load as it's a different chain.
rm -rf object_database*
Is this command useful? @Riverhead
A good suggestion but it did not take me off 9c. It did however widen my scope outside of the data directory for my reset script, so thanks for that :).
Is your wallet launched with this command?
./cli_wallet -w wallet-testnet-puppies.json --chain-id 081401ede64c8fe30b23c91d7ab8750103acb1a39548a866fb562f2edf4627d6
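The reset script mentioned above isn't posted in the thread; a minimal sketch of what such a helper could look like, assuming the data directory is named test_net_puppies (as earlier in the thread) and that stray object_database* state sits in the working directory:

```shell
#!/bin/sh
# Sketch: wipe local testnet state so witness_node resyncs from genesis.
# DATADIR is an assumption; override it for your own setup.
set -e
DATADIR=${DATADIR:-test_net_puppies}

rm -rf "$DATADIR"        # per-testnet blockchain + p2p state
rm -rf object_database*  # stray object DB in the working directory
echo "reset done: $DATADIR removed"
```

Note this only clears local state; if the node still reports the wrong chain ID after a reset, the binary itself was built from a branch with a different genesis state, so no amount of deleting data directories will fix it.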
./witness_node -d .graphene --genesis-json '"aug-14-test-genesis.json"'
2125632ms th_a witness.cpp:70 plugin_initialize ] key_id_to_wif_pair: ["GPH7kNZtp64ZR1R4yC2w9.","5J.Qw8pmqgsK"]
2131814ms th_a thread.cpp:95 thread ] name:ntp tid:140335251613440
2131815ms ntp ntp.cpp:77 request_now ] resolving... ["pool.ntp.org",123]
2131815ms th_a thread.cpp:95 thread ] name:p2p tid:140335228999424
2131860ms ntp ntp.cpp:81 request_now ] sending request to 50.116.36.122:123
2131860ms th_a application.cpp:117 reset_p2p_node ] Adding seed node 45.55.6.216:1776
2131862ms th_a application.cpp:129 reset_p2p_node ] Configured p2p node to listen on 127.0.0.1:8090
2131863ms th_a application.cpp:179 reset_websocket_serv ] Configured websocket rpc to listen on 127.0.0.1:8091
2131863ms th_a witness.cpp:143 plugin_startup ] No witnesses configured! Please add witness IDs and private keys to configuration.
2131863ms th_a main.cpp:165 main ] Started witness node on a chain with 0 blocks.
2131863ms th_a main.cpp:166 main ] Chain ID is a629fd737e0b4d8becc25e33b90984840723b38cd4430049693059f6e396b89c
2131885ms ntp ntp.cpp:147 read_loop ] received ntp reply from 50.116.36.122:123
2131885ms ntp ntp.cpp:161 read_loop ] ntp offset: -2120353, round_trip_delay 25204
2131886ms ntp ntp.cpp:177 read_loop ] ntp_delta_time updated to -2120353
I'm not running the wallet currently - just the witness_node. (witness_node output above)
I was building on another computer yesterday, and had to switch back to a git head from the 14th in order to get it to connect with the correct chain-id. Checking out and building bf47a1610c4aef7e25592aa42ecd4d4ae83a2b5f worked, but I still don't know how the chain-id is derived. https://github.com/cryptonomex/graphene/wiki/chain-locked-tx explains why chain ids are important, but not how they are generated.
git checkout origin/aug-17-testnet
aug-17-testnet will give you chain id a629fd737e0b4d8becc25e33b90984840723b38cd4430049693059f6e396b89c (at least it did for me)
So do I need to checkout aug-17-testnet on git?
git checkout origin/aug-17-testnet
Building now.
git checkout bf47a1610c4aef7e25592aa42ecd4d4ae83a2b5f
is what worked for me.
aug-17-testnet has 100 witnesses and will hopefully be more resistant to whatever caused the death of aug-14-testnet.
I am down to switch whenever everyone else is. Please don't keep this alive on my account. I just wanted to continue testing till we got another official test net.
There's no server node for the August 17th test_net. I can try to start one, if everyone is interested.
So is 9c the correct chain then? I wasn't able to sync - but that could be a network issue on my VM.
Update: Aug-14 is syncing now. However, I should probably get Aug-17 working instead.
user@user-desktop:~/src/graphene/programs/cli_wallet$ ./cli_wallet -w wallet -s 104.200.28.117:61705
Logging RPC to file: logs/rpc/rpc.log
3362460ms th_a main.cpp:111 main ] key_to_wif( committee_private_key ): 5KCBDTcyDqzsqehcb52tW5nU6pXife6V2rX9Yf7c3saYSzbDZ5W
3362460ms th_a main.cpp:115 main ] nathan_pub_key: GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
3362460ms th_a main.cpp:116 main ] key_to_wif( nathan_private_key ): 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
3362564ms th_a main.cpp:163 main ] wdata.ws_server: 104.200.28.117:61705
10 assert_exception: Assert Exception
uri.substr(0,3) == "ws:":
{}
th_a websocket.cpp:585 connect
{"uri":"104.200.28.117:61705"}
th_a websocket.cpp:606 connect
user@user-desktop:~/src/graphene/programs/cli_wallet$ ./cli_wallet -w wallet -s 104.200.28.117:8090
Logging RPC to file: logs/rpc/rpc.log
3372945ms th_a main.cpp:111 main ] key_to_wif( committee_private_key ): 5KCBDTcyDqzsqehcb52tW5nU6pXife6V2rX9Yf7c3saYSzbDZ5W
3372946ms th_a main.cpp:115 main ] nathan_pub_key: GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
3372946ms th_a main.cpp:116 main ] key_to_wif( nathan_private_key ): 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
3373048ms th_a main.cpp:163 main ] wdata.ws_server: 104.200.28.117:8090
10 assert_exception: Assert Exception
uri.substr(0,3) == "ws:":
{}
th_a websocket.cpp:585 connect
{"uri":"104.200.28.117:8090"}
th_a websocket.cpp:606 connect
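The assert `uri.substr(0,3) == "ws:"` is the giveaway: cli_wallet expects a websocket URI for the `-s` argument, not a bare host:port. A minimal sketch of that check (address taken from this thread):

```shell
# The wallet asserts that the server URI starts with "ws:", so pass
# ws://host:port rather than a bare host:port.
uri="ws://104.200.28.117:8090"
case "$uri" in
  ws:*) echo "ok: websocket scheme present" ;;
  *)    echo "error: missing ws:// prefix" ;;
esac
```

So `./cli_wallet -w wallet -s ws://104.200.28.117:8090` should get past that assert (whether the server answers is a separate question).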
I will start working on getting an aug 17th seed node up, unless someone else wants to volunteer
I have pushed an update that will hopefully resolve the out-of-order syncing issues.
Is there a seed node or should I just build from master and make one?
Those of you who have gotten good at these test networks can probably test it out for me and let me know if you still have problems.
No reason to hold back on aug-17 network. Those who know about it, please provide instructions for others to join it.
Note that the aug 17 network does not currently have the fixes I introduced today for the out-of-order block pushing. I will ask Vikram to update his node when he comes in this afternoon.
You need the genesis block for aug17... could someone not on mobile please provide a download link? @bytemaster
Then you can do pretty much what is stated in the OP with the aug17 snapshot.
git checkout 15c99bd65b0c90854d430126582fee49ec645fe6
git submodule update --init --recursive
cmake -DBOOST_ROOT="$BOOST_ROOT" -DCMAKE_BUILD_TYPE=Debug .
make
./witness_node -d test_net --genesis-json /home/user/src/8.19/graphene/programs/witness_node/aug-19-puppies-test-genesis.json
to help testing, I set up an ubuntu daily build of graphene master
https://bitsharestalk.org/index.php/topic,18039.new.htm
maybe will help a couple more people get involved in testing
Yeah. I tried to make a Play ppa that would contain both cli and gui files for a couple of days and couldn't get it to work. I even used your bitshares ppa as a template and couldn't get it right. If you can whip one up so easily there's a million pls in it for you (I think).
You should go to the play forum, there is a bounty waiting for you. :)
Well...
Uhhhhh, probably the first Windows witness up and running :)
Witness id: 1.6.4435, or simply testz.
Please vote this witness in.
PS: Tomorrow I will try to summarize what I did and maybe come up with a short guide on how to build and run a witness under Windows.
+5% voted!
Also, to help with testing I'm pushing Docker containers for witness and cli in an automated fashion. I will post instructions on how to join using docker later tonight.
https://bitsharestalk.org/index.php/topic,17935.0.html
Here we are...
(http://picpaste.com/pics/witnesses-rUdOJvqP.1440015806.PNG)
Awesome, how did you build?
Great, thanks. +5% Building...
Until we get something official, I have a new version up and running with 100 init witnesses. I manually edited the genesis, and everything seems to be working, but I really had no idea what I was doing. https://www.dropbox.com/s/xzlnoyn4tdpdede/aug-19-puppies-test-genesis.json?dl=0
The specific tag I built was 15c99bd65b0c90854d430126582fee49ec645fe6, so if you are having a hard time getting on the same chain-id I would suggest:
git checkout 15c99bd65b0c90854d430126582fee49ec645fe6
git submodule update --init --recursive
cmake -DBOOST_ROOT="$BOOST_ROOT" -DCMAKE_BUILD_TYPE=Debug .
make
I was also having a hard time getting the genesis.json to load. Using the entire path in the command seems to work, so for example:
./witness_node -d test_net --genesis-json /home/user/src/8.19/graphene/programs/witness_node/aug-19-puppies-test-genesis.json
my server is still 45.55.6.216:1776
As always I am running on Ubuntu 14.04. I also have exactly 0 formal computer science training, so if I am doing something wrong please let me know. I am a learn by doing type of guy, and that includes making lots of mistakes.
The chain id is 5508f5f743717fe2c78445364f62a72badd7532974d26f089af2062228b532eb, but it's kind of a pain to get the genesis.json to load and give you that chain-id.
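Since the genesis above was edited by hand, here is a hedged sketch of doing the same edit with a script. The field name `initial_active_witnesses` is purely illustrative (check the actual keys in your genesis JSON before trusting this):

```python
import json

# Illustrative only: bump the initial witness count in a genesis file.
# The key "initial_active_witnesses" is a placeholder; inspect your
# genesis JSON for the real field names before editing.
def set_witness_count(genesis: dict, count: int) -> dict:
    genesis["initial_active_witnesses"] = count
    return genesis

genesis = {"initial_timestamp": "2015-08-19T00:00:00"}
genesis = set_witness_count(genesis, 101)
print(json.dumps(genesis, indent=2))
```

Scripting the edit also makes it reproducible, which matters here because any change to the genesis changes the chain id.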
{
"head_block_num": 11773,
"head_block_id": "00002dfdab35043defebc42a6ae9d5a96767fd97",
"head_block_age": "1 second old",
"next_maintenance_time": "82 seconds in the future",
"chain_id": "5508f5f743717fe2c78445364f62a72badd7532974d26f089af2062228b532eb",
"active_witnesses": [
"1.6.0",
"1.6.1",
"1.6.2",
"1.6.3",
"1.6.4",
"1.6.5",
"1.6.6",
"1.6.1525",
"1.6.4435",
"1.6.5246",
"1.6.5247"
],
"active_committee_members": [
"1.5.0",
"1.5.1",
"1.5.2",
"1.5.3",
"1.5.4",
"1.5.5",
"1.5.6",
"1.5.7",
"1.5.8",
"1.5.9"
],
"entropy": "73f47170212775453a20de79b10d927e9d3b761d"
}
My node is online now. Witness id 1.6.5247. For some reason the info and get_global_properties screens don't show all witnesses. There may have been something I should have changed in the genesis. There are 101 delegate slots, though.
But the number of active witnesses is still 10?
By looking at the screen of witness_node, it's true that only those 11 witnesses are producing blocks.
Awesome, how did you build? I had no idea it was so easy. I have been messing around with the light wallet instructions on the build-Ubuntu page with no real luck.
Using the graphene-ui build instructions in GitHub :)
As I am using Ubuntu on a VPS, I changed server.js to bind to 0.0.0.0 and to use prod.
My witness uses 0.0.0.0 for RPC instead of localhost / 127.0.0.1.
Edit: it's 11, not 10.
Do you have a URL so others could access the GUI/webwallet?
That's funny, because I had 100 going earlier. I could tell because I set the genesis to 101 but set my server witness node to 100, and so 1.6.100 was missing blocks every single rotation.
There is a command in help():
graphene::chain::signed_transaction set_desired_witness_and_committee_member_count(string, uint16_t, uint16_t, bool)
Don't know what it will do, though.
I have played around with that command a little bit. It throws an error if you set the desired number to more than what that account is currently voting for.
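Going by that signature, the call from the cli_wallet console presumably looks something like this (the account name and counts are made-up examples, and I am assuming the trailing bool is the usual "broadcast" flag):

```
set_desired_witness_and_committee_member_count myaccount 101 11 true
```

That would set the voting account's desired witness count to 101 and committee count to 11, subject to the constraint above that you can't desire more than you are actually voting for.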
I've set up the web wallet at 45.55.6.216:8080, same node as the init witnesses, so don't crash it.
my witness node stopped producing blocks as well.
2586746ms th_a application.cpp:492 get_blockchain_synop ] reference_point: 000067642e6bb2e49584d494b243f7aebb8d3d2b number_of_blocks_after_reference_point: 7996 result: ["00002764769c3f0b69e953b5feb889a81247f3b8","00004764b53d527f3af9ef82b5a6f8a98812cfff","0000576494e36a391539acb34b2912a2003a7271","00005f64142465bf62f511966b0064e10e53cad0","00006364751fdb8467125d8603c8847d3b0082c4","00006564f0cfc6e21490e40e368a5f833ab4c9dc","000066640e3d1b4bce1cdc389291a866714cfbbf","000066e44370fae9a1b02625480966b57cc3458a","0000672466da61511d9ef8a386217b1e7235f519","00006744a726781b14726dd82437cd3dff493a50","0000675411988d1632d109b69917632bae5dc68c","0000675c7c19cb50b481a1522b14212368228847","0000676053c543bdb9e0cea06d48328634701b33","000067625f90d27d73ee18d5c3192b123524a87a","00006763c44fd2d25cd3b9966657e194abb6c36c","000067642e6bb2e49584d494b243f7aebb8d3d2b"]
2586995ms th_a application.cpp:492 get_blockchain_synop ] reference_point: 000067642e6bb2e49584d494b243f7aebb8d3d2b number_of_blocks_after_reference_point: 9995 result: ["00002764769c3f0b69e953b5feb889a81247f3b8","00004764b53d527f3af9ef82b5a6f8a98812cfff","0000576494e36a391539acb34b2912a2003a7271","00005f64142465bf62f511966b0064e10e53cad0","00006364751fdb8467125d8603c8847d3b0082c4","00006564f0cfc6e21490e40e368a5f833ab4c9dc","000066640e3d1b4bce1cdc389291a866714cfbbf","000066e44370fae9a1b02625480966b57cc3458a","0000672466da61511d9ef8a386217b1e7235f519","00006744a726781b14726dd82437cd3dff493a50","0000675411988d1632d109b69917632bae5dc68c","0000675c7c19cb50b481a1522b14212368228847","0000676053c543bdb9e0cea06d48328634701b33","000067625f90d27d73ee18d5c3192b123524a87a","00006763c44fd2d25cd3b9966657e194abb6c36c","000067642e6bb2e49584d494b243f7aebb8d3d2b"]
2587004ms th_a witness.cpp:240 block_production_loo ] slot: 13978 scheduled_witness: 1.6.0 scheduled_time: 2015-08-20T05:43:07 now: 2015-08-20T05:43:07
2588007ms th_a witness.cpp:240 block_production_loo ] slot: 13979 scheduled_witness: 1.6.52 scheduled_time: 2015-08-20T05:43:08 now: 2015-08-20T05:43:08
2589008ms th_a witness.cpp:240 block_production_loo ] slot: 13980 scheduled_witness: 1.6.5249 scheduled_time: 2015-08-20T05:43:09 now: 2015-08-20T05:43:09
2590005ms th_a witness.cpp:240 block_production_loo ] slot: 13981 scheduled_witness: 1.6.5248 scheduled_time: 2015-08-20T05:43:10 now: 2015-08-20T05:43:10
2590157ms th_a application.cpp:370 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ):
{}
th_a fork_database.cpp:67 _push_block
{"new_block":{"previous":"000067642e6bb2e49584d494b243f7aebb8d3d2b","timestamp":"2015-08-20T01:50:10","witness":"1.6.5249","next_secret_hash":"f4f468fd3e4fb366dd2d972a915db1b3d5345c71","previous_secret":"2bca37cf125efd56697843c3cba65bc9f7075a20","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"2056ea809a37ebb3f8963eedc965e57d640719b787d13583eab4b0c8215c7f13e470eb42c2ac842951201f86b165eca75180a161dfb8fbb981b490e3bdd467b19f","transactions":[]}}
th_a db_block.cpp:176 _push_block
Probably my spam transactions caused the problem.
Yes, and I can't resync to the network; witness_node crashes. So if the devs look at our test net, they will find something interesting. :)
I was able to delete my files and resync the witness, then I turned block production back on and it seems to be working now.
Here are some examples of how to use docker to run the witness:
Start a witness, all logs and data will be dumped right after execution stops:
docker run -it --rm sile16/graphene-witness:aug19test --genesis-json /aug-19-puppies-test-genesis.json -s 45.55.6.216:1776
(I included the genesis json in this docker image)
Start a witness with persistent data dir:
docker run -it --rm -v <local data dir>:/witness_node_data_dir sile16/graphene-witness:aug19test --genesis-json /aug-19-puppies-test-genesis.json -s 45.55.6.216:1776
To make it easier to expose ports you can also include --net=host:
docker run -it --rm --net=host -v /witness_node/test_net2:/witness_node_data_dir sile16/graphene-witness:aug19test --genesis-json /aug-19-puppies-test-genesis.json -s 45.55.6.216:1776 --rpc-endpoint 127.0.0.1:8090
the docker image is pushed so these commands should work for anyone.
# declare an appender named "default" that writes messages to default.log
[log.file_appender.default]
filename=logs/default/default.log
# filename can be absolute or relative to this config file
# route any messages logged to the default logger to the "stderr" logger
# and the "default" logger we declared above, if they are info level or higher
[logger.default]
level=info
appenders=stderr,default
{
"head_block_num": 40902,
"head_block_id": "00009fc6d4e24db66e13af461b94063165999bfa",
"head_block_age": "4 minutes old",
"next_maintenance_time": "31 seconds ago",
"chain_id": "5508f5f743717fe2c78445364f62a72badd7532974d26f089af2062228b532eb",
"active_witnesses": [
"1.6.0",
"1.6.1",
"1.6.2",
"1.6.1525",
"1.6.1536",
"1.6.4435",
"1.6.5245",
"1.6.5246",
"1.6.5247",
"1.6.5248",
"1.6.5249"
],
"active_committee_members": [
"1.5.0",
"1.5.1",
"1.5.2",
"1.5.3",
"1.5.4",
"1.5.5",
"1.5.6",
"1.5.7",
"1.5.8",
"1.5.9"
],
"entropy": "8927f3f36c4599a6a7af5a3cdfb70a8f57d463ae"
}
2015-08-20T07:11:59 th_a:invoke handle_block handle_block ] Got block #40902 from network application.cpp:343
2015-08-20T07:12:00 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.4435 scheduled_time: 2015-08-20T07:12:00 now: 2015-08-20T07:12:00 witness.cpp:240
2015-08-20T07:12:01 th_a:Witness Block Production block_production_loo ] slot: 2 scheduled_witness: 1.6.5245 scheduled_time: 2015-08-20T07:12:01 now: 2015-08-20T07:12:01 witness.cpp:240
2015-08-20T07:12:02 th_a:Witness Block Production block_production_loo ] slot: 3 scheduled_witness: 1.6.1 scheduled_time: 2015-08-20T07:12:02 now: 2015-08-20T07:12:02 witness.cpp:240
2015-08-20T07:12:03 th_a:Witness Block Production block_production_loo ] slot: 4 scheduled_witness: 1.6.5247 scheduled_time: 2015-08-20T07:12:03 now: 2015-08-20T07:12:03 witness.cpp:240
2015-08-20T07:12:03 th_a:Witness Block Production block_production_loo ] Witness 1.6.5247 production slot has arrived; generating a block now... witness.cpp:243
2015-08-20T07:12:03 th_a:Witness Block Production _push_block ] Failed to push new block:
3070000 undo_database_exception: undo database exception
The database does not have enough undo history to support a blockchain with so many missed blocks. Please add a checkpoint if you would like to continue applying blocks beyond this point.
{"recently_missed":1006,"max_undo":1000}
th_a db_update.cpp:72 update_global_dynamic_data
{"next_block.block_num()":40903}
th_a db_block.cpp:439 _apply_block db_block.cpp:170
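The failure mode is mechanical: the node keeps a bounded undo history (max_undo of 1000 here), and once the count of recently missed blocks exceeds it, no new block can be applied without a checkpoint. A toy model of that check (not Graphene source, just an illustration of the inequality in the exception):

```python
# Toy illustration (not Graphene source): the node refuses to extend the
# chain once the recently-missed count exceeds its undo-history capacity.
MAX_UNDO = 1000  # matches "max_undo":1000 in the exception above

def can_push_block(recently_missed: int, max_undo: int = MAX_UNDO) -> bool:
    return recently_missed <= max_undo

print(can_push_block(999))    # within the undo window: True
print(can_push_block(1006))   # the "recently_missed":1006 case above: False
```

Which is why every witness hits the same wall at block 40903: the gap of missed blocks is already larger than the undo window, regardless of who tries to produce next.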
I'm stuck at 40902 now. Looks like the testnet is dead again? (See the info and witness log output above.)
Happy forking :D
After a fork, if I restart without --resync-blockchain, my node won't sync.
If I restart with --resync-blockchain, it won't produce blocks after it is in sync.
So it's best to remove the data directory before starting.
I'm online again now.
For those eagerly waiting, here is another howto:
https://github.com/cryptonomex/graphene/wiki/Howto-import-an-existing-delegate-as-witness-in-BitShares-2.0
--witness-id '"<witnessid>"'
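The doubled quoting in `--witness-id '"<witnessid>"'` is deliberate: the outer single quotes are consumed by the shell, so the program receives the id with literal double quotes (my understanding is that the value is parsed as a JSON string). A quick way to see what the program actually gets:

```shell
# The shell strips the single quotes; the program sees the double quotes.
# (Witness id 1.6.5247 is just an example taken from this thread.)
# Prints:
#   arg: --witness-id
#   arg: "1.6.5247"
printf 'arg: %s\n' --witness-id '"1.6.5247"'
```

Dropping the inner double quotes is a common way to end up with a witness that silently never produces.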
Good idea. I am going to check how to set checkpoints. If you or anybody else knows how, please post here, thanks!
Yes, same for me; I was late to the party. If you already are a block producer, maybe you can restart your witness and try the checkpointing option just for fun? Maybe you can get it back to life?
The option is obviously already there:
https://github.com/cryptonomex/graphene/blob/9c0c588ed62165886a510775d76c8524c50c09c5/libraries/app/application.cpp
Regards
--checkpoint '[40902,"00009fc6d4e24db66e13af461b94063165999bfa"]' --enable-stale-production
1122000ms th_a witness.cpp:240 block_production_loo ] slot: 11203 scheduled_witness: 1.6.5247 scheduled_time: 2015-08-20T10:18:42 now: 2015-08-20T10:18:42
1122000ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.5247 production slot has arrived; generating a block now...
1122001ms th_a db_block.cpp:170 _push_block ] Failed to push new block:
3070000 undo_database_exception: undo database exception
The database does not have enough undo history to support a blockchain with so many missed blocks. Please add a checkpoint if you would like to continue applying blocks beyond this point.
{"recently_missed":44813,"max_undo":1000}
th_a db_update.cpp:72 update_global_dynamic_data
{"next_block.block_num()":40903}
th_a db_block.cpp:439 _apply_block
1122001ms th_a witness.cpp:266 block_production_loo ] Got exception while generating block:
3070000 undo_database_exception: undo database exception
The database does not have enough undo history to support a blockchain with so many missed blocks. Please add a checkpoint if you would like to continue applying blocks beyond this point.
{"recently_missed":44813,"max_undo":1000}
th_a db_update.cpp:72 update_global_dynamic_data
{"next_block.block_num()":40903}
th_a db_block.cpp:439 _apply_block
{"new_block":{"previous":"00009fc6d4e24db66e13af461b94063165999bfa","timestamp":"2015-08-20T10:18:42","witness":"1.6.5247","next_secret_hash":"a151bcbe08eac49feb4eb86a5a68f332b4e7cf9d","previous_secret":"2fa1e05db52f0c421e335648c21bb3707383c1d6","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"206ea3856d3cf6317f19ab6b5d3af6648409ac092eae9cc8d943d7357e3ac9c753141733131d2312c4644d44125835afce24190d295c21a6a620920ab660ec5f5c","transactions":[]}}
th_a db_block.cpp:176 _push_block
{"witness_id":"1.6.5247"}
th_a db_block.cpp:315 _generate_block
It's as easy as adding this to the witness_node command:
--checkpoint '[40902,"00009fc6d4e24db66e13af461b94063165999bfa"]' --enable-stale-production
But it seems too late to rescue the network; I still get the undo_database_exception error shown above.
...
{"recently_missed":47937,"max_undo":1000}
...
{"recently_missed":47981,"max_undo":1000}
...
{"recently_missed":48685,"max_undo":1000}
Did it get you going again? It seems to be still stuck, or maybe you are on your own fork now? Which is your address to use as a seed node?
new >>> info
info
{
"head_block_num": 40902,
"head_block_id": "00009fc6d4e24db66e13af461b94063165999bfa",
"head_block_age": "4 hours old",
"next_maintenance_time": "4 hours ago",
"chain_id": "5508f5f743717fe2c78445364f62a72badd7532974d26f089af2062228b532eb",
...
}
I'm sure that I'm not on my own fork. Check this: http://45.55.6.216:8080/#/explorer/blocks or try the 'info' command in cli_wallet; my result is the output above.
General question (since I could not find this elsewhere):
Is there a genuine place from where I can download an up-to-date BitShares blockchain? Setting up a new client is easy, but syncing takes a long time and sometimes fails, so the whole process needs to be repeated again and again (my slow little PC, c847/8gb, often has problems here and it takes days to re-sync/re-download the chain). It would be awesome to have an official, always up-to-date download link for the BitShares blockchain.
That said, my question is whether such a download link already exists (and I simply could not find it), or would it be a great addition to the BitShares homepage?
EDIT: Also, 8gb of RAM sometimes seems not to be enough for a re-sync (using v0.9.2)... is that true? Has anyone else had similar experiences?
I don't know if there is a download link for the blockchain.
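If an official snapshot link ever appears, it would be worth publishing a checksum next to it so the download can be verified before importing. A minimal sketch (the filenames here are hypothetical):

```shell
# Hypothetical filenames: verify a downloaded chain snapshot against a
# published checksum before importing it into a node.
printf 'fake chain data' > /tmp/chain-snapshot.bin
sha256sum /tmp/chain-snapshot.bin > /tmp/chain-snapshot.bin.sha256
sha256sum -c /tmp/chain-snapshot.bin.sha256
```

In practice the .sha256 file would come from the official site, not be generated locally as in this sketch.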
I think it should be useful if slot < 1000, but we've missed more than 10000 blocks already.
OK, I see. So starting the witness_node with a checkpoint didn't make a difference then?
1117002ms th_a witness.cpp:240 block_production_loo ] slot: 14798 scheduled_witness: 1.6.1 scheduled_time: 2015-08-20T11:18:37 now: 2015-08-20T11:18:37
1118002ms th_a witness.cpp:240 block_production_loo ] slot: 14799 scheduled_witness: 1.6.5248 scheduled_time: 2015-08-20T11:18:38 now: 2015-08-20T11:18:38
1119002ms th_a witness.cpp:240 block_production_loo ] slot: 14800 scheduled_witness: 1.6.5247 scheduled_time: 2015-08-20T11:18:39 now: 2015-08-20T11:18:39
1119002ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.5247 production slot has arrived; generating a block now...
1119004ms th_a db_block.cpp:170 _push_block ] Failed to push new block:
3070000 undo_database_exception: undo database exception
By looking at the screen of witness_node, it's true that only the 11 witnesses are producing blocks.
My node is online now. Witness id 1.6.5247.
For some reason the info and get_global_properties output don't show all witnesses. There may have been something I should have changed in the genesis. There are 101 delegate slots, though.
But the number of active witnesses is still 10?
{
"head_block_num": 11773,
"head_block_id": "00002dfdab35043defebc42a6ae9d5a96767fd97",
"head_block_age": "1 second old",
"next_maintenance_time": "82 seconds in the future",
"chain_id": "5508f5f743717fe2c78445364f62a72badd7532974d26f089af2062228b532eb",
"active_witnesses": [
"1.6.0",
"1.6.1",
"1.6.2",
"1.6.3",
"1.6.4",
"1.6.5",
"1.6.6",
"1.6.1525",
"1.6.4435",
"1.6.5246",
"1.6.5247"
],
"active_committee_members": [
"1.5.0",
"1.5.1",
"1.5.2",
"1.5.3",
"1.5.4",
"1.5.5",
"1.5.6",
"1.5.7",
"1.5.8",
"1.5.9"
],
"entropy": "73f47170212775453a20de79b10d927e9d3b761d"
}
//Edit: it's 11, not 10
Yeah. You're right.
That's funny, because I had 100 going earlier. I could tell because I set the genesis to 101 but set my server witness node to 100, so 1.6.100 was missing blocks every single rotation.
Actually, there is a trick to restarting block production after so much time has passed.
First, add the following checkpoint:
[HEADNUM+1, "00000000.....00"]
With that checkpoint you will be able to produce the next block. Once you have produced it, other nodes can add a checkpoint with that freshly produced block, and you will be up and running again.
Every time a checkpoint is reached, it resets the required undo history to 0. Adding a checkpoint at HEADNUM would reset it to 0, but the next block you produce will have missed over 1000 blocks and so is immediately beyond reach. Therefore, we need to "checkpoint" the "next block", for which we do not know the ID yet.
So the real question is: why were we missing so many blocks? Who got voted in, and why did you stop producing?
./witness_node ...... --checkpoint '[40903,"00009fc7e518d750f653ba0ca25a3067bd304f30"]'
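A minimal model of the rule described above (the names and the 1000-block window are assumptions for illustration, not Graphene code):

```python
# A checkpoint at or beyond the head resets the required undo history to
# zero; otherwise blocks must link within the normal undo window. This is
# why checkpointing HEADNUM+1 (whose ID is unknown, hence the all-zero
# placeholder) lets the next block through despite the >1000 missed slots.
def required_undo_history(head_num, checkpoint_num=None, window=1000):
    if checkpoint_num is not None and checkpoint_num >= head_num:
        return 0
    return window

assert required_undo_history(40902, checkpoint_num=40903) == 0  # trick applied
assert required_undo_history(40902) == 1000                     # stuck
```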
BM, please check my comment on GitHub: https://github.com/cryptonomex/graphene/issues/247#issuecomment-132349244
Witnesses, please add this so we can go on:
--checkpoint '[40903,"00009fc7e518d750f653ba0ca25a3067bd304f30"]'
//Update: 1000 seconds have passed so quickly. Rescue failed. Have to wait until enough witnesses come online.
619838ms th_a db_block.cpp:170 _push_block ] Failed to push new block:
10 assert_exception: Assert Exception
next_block.id() == itr->second: Block did not match checkpoint
{"checkpoint":[40903,"00009fc7e518d750f653ba0ca25a3067bd304f30"],"block_id":"00009fc74814ff846dcfa61cd7a48913d775d971"}
th_a db_block.cpp:367 apply_block
Segmentation fault (core dumped)
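The error above shows why guessing the next block's ID doesn't work: checkpoints are enforced exactly. A sketch of that check (illustrative, not the actual _push_block code):

```python
# checkpoints maps block number -> required block ID; a block arriving at a
# checkpointed height with any other ID is rejected, as in the log above.
def enforce_checkpoint(checkpoints, block_num, block_id):
    expected = checkpoints.get(block_num)
    if expected is not None and expected != block_id:
        raise AssertionError("Block did not match checkpoint")

cps = {40903: "00009fc7e518d750f653ba0ca25a3067bd304f30"}
# the checkpointed ID passes; any other ID at height 40903 raises
enforce_checkpoint(cps, 40903, "00009fc7e518d750f653ba0ca25a3067bd304f30")
```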
530999ms th_a witness.cpp:240 block_production_loo ] slot: 21412 scheduled_witness: 1.6.5249 scheduled_time: 2015-08-20T13:08:51 now: 2015-08-20T13:08:51
530999ms th_a witness.cpp:211 operator() ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
531999ms th_a witness.cpp:240 block_production_loo ] slot: 21413 scheduled_witness: 1.6.5246 scheduled_time: 2015-08-20T13:08:52 now: 2015-08-20T13:08:52
532999ms th_a witness.cpp:240 block_production_loo ] slot: 21414 scheduled_witness: 1.6.0 scheduled_time: 2015-08-20T13:08:53 now: 2015-08-20T13:08:53
533999ms th_a witness.cpp:240 block_production_loo ] slot: 21415 scheduled_witness: 1.6.1536 scheduled_time: 2015-08-20T13:08:54 now: 2015-08-20T13:08:54
534999ms th_a witness.cpp:240 block_production_loo ] slot: 21416 scheduled_witness: 1.6.5245 scheduled_time: 2015-08-20T13:08:55 now: 2015-08-20T13:08:55
535999ms th_a witness.cpp:240 block_production_loo ] slot: 21417 scheduled_witness: 1.6.1525 scheduled_time: 2015-08-20T13:08:56 now: 2015-08-20T13:08:56
536999ms th_a witness.cpp:240 block_production_loo ] slot: 21418 scheduled_witness: 1.6.1 scheduled_time: 2015-08-20T13:08:57 now: 2015-08-20T13:08:57
537604ms th_a application.cpp:487 get_blockchain_synop ] reference_point: 0000000000000000000000000000000000000000 number_of_blocks_after_reference_point: 0 result: ["00001fc61a621678e92fc941b58b08d13afa2268","00005fc6559ab7b6f2b0035b0accec1ca7bb9110","00007fc615d686c893fe81c7b3790565bd3ccac3","00008fc6ef374fa2e6b8a03a490dfc6f274d1b8c","000097c68746555744ecea0a5753eabbb540cb4b","00009bc6d4d50fefd8cca7e61cf243a1820d8add","00009dc666f08e568fddfe6ef99968363b649c16","00009ec6b4958ad1910cfd3d3d5451634b8c948a","00009f46f5a6a32bbe673fb3990ae69d80d29779","00009f863b8da93c16cd47c9a591d7a9071e92e1","00009fa6414eb1adf2e7a4e0ec81fa58bb2f0802","00009fb612985adcce921f4e1e47326e45b088dd","00009fbe0bbfc979683d479ba4abd6242dd0b7b0","00009fc2c4081a76afbad31337249726a2d06515","00009fc4a66df9d134e85476353bae7af2715909","00009fc50fe983dde2e975f6aed7d69f4f2cd952","00009fc6d4e24db66e13af461b94063165999bfa"]
537999ms th_a witness.cpp:240 block_production_loo ] slot: 21419 scheduled_witness: 1.6.4435 scheduled_time: 2015-08-20T13:08:58 now: 2015-08-20T13:08:58
538115ms th_a application.cpp:487 get_blockchain_synop ] reference_point: 00009cb4e54f640a0cef1aeab7aab0dfc955d333 number_of_blocks_after_reference_point: 0 result: ["00001fc61a621678e92fc941b58b08d13afa2268","00005fc6559ab7b6f2b0035b0accec1ca7bb9110","00007fc615d686c893fe81c7b3790565bd3ccac3","00008fc6ef374fa2e6b8a03a490dfc6f274d1b8c","000097c68746555744ecea0a5753eabbb540cb4b","00009bc6d4d50fefd8cca7e61cf243a1820d8add","00009dc666f08e568fddfe6ef99968363b649c16","00009ec6b4958ad1910cfd3d3d5451634b8c948a","00009f46f5a6a32bbe673fb3990ae69d80d29779","00009f863b8da93c16cd47c9a591d7a9071e92e1","00009fa6414eb1adf2e7a4e0ec81fa58bb2f0802","00009fb612985adcce921f4e1e47326e45b088dd","00009fbe0bbfc979683d479ba4abd6242dd0b7b0","00009fc2c4081a76afbad31337249726a2d06515","00009fc4a66df9d134e85476353bae7af2715909","00009fc50fe983dde2e975f6aed7d69f4f2cd952","00009fc6d4e24db66e13af461b94063165999bfa"]
538276ms th_a application.cpp:438 get_item ] Request for item {"item_type":1001,"item_hash":"00009cb5f3657547f191efd666aa7aaca57b3a8f"}
538276ms th_a application.cpp:446 get_item ] Serving up block #40117
538276ms th_a application.cpp:438 get_item ] Request for item {"item_type":1001,"item_hash":"00009cb5f3657547f191efd666aa7aaca57b3a8f"}
538276ms th_a application.cpp:446 get_item ] Serving up block #40117
538467ms th_a application.cpp:487 get_blockchain_synop ] reference_point: 00009cb4e54f640a0cef1aeab7aab0dfc955d333 number_of_blocks_after_reference_point: 0 result: ["00001fc61a621678e92fc941b58b08d13afa2268","00005fc6559ab7b6f2b0035b0accec1ca7bb9110","00007fc615d686c893fe81c7b3790565bd3ccac3","00008fc6ef374fa2e6b8a03a490dfc6f274d1b8c","000097c68746555744ecea0a5753eabbb540cb4b","00009bc6d4d50fefd8cca7e61cf243a1820d8add","00009dc666f08e568fddfe6ef99968363b649c16","00009ec6b4958ad1910cfd3d3d5451634b8c948a","00009f46f5a6a32bbe673fb3990ae69d80d29779","00009f863b8da93c16cd47c9a591d7a9071e92e1","00009fa6414eb1adf2e7a4e0ec81fa58bb2f0802","00009fb612985adcce921f4e1e47326e45b088dd","00009fbe0bbfc979683d479ba4abd6242dd0b7b0","00009fc2c4081a76afbad31337249726a2d06515","00009fc4a66df9d134e85476353bae7af2715909","00009fc50fe983dde2e975f6aed7d69f4f2cd952","00009fc6d4e24db66e13af461b94063165999bfa"]
So the real question is, why were we missing so many blocks? Who got voted in and why did you stop producing?
Once a witness starts forking, it's hard to switch back.
Failed to push new blocks. What is the exact command?
Yes, it's too late, and I'm re-syncing.
2015-08-20T15:55:57 p2p:message read_loop on_closing_connectio ] Peer 46.101.138.170:59332 is disconnecting us because of an error: You offered us a block that we reject as invalid, exception: {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":
{"level":"error","file":"fork_database.cpp","line":67,"method":"_push_block","hostname":"","thread_name":"th_a","timestamp":"2015-08-20T15:55:56"},"format":"item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":176,"method":"_push_block","hostname":"","thread_name":"th_a","timestamp":"2015-08-20T15:55:56"},"format":"","data":{"new_block":{"previous":"0000000000000000000000000000000000000000","timestamp":"2015-08-20T13:02:51","witness":"1.6.71","next_secret_hash":"86b9b6883182c63a63bbcd76fedb0a36d5e569ca","previous_secret":"0000000000000000000000000000000000000000","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":
[],"witness_signature":"203037bdf94500a31b9d890ab55c46f8010f2b4ce8f1e385da149d5529a48b694866730aa203f63f5d75a5fd915cc3925daaa4d37b258a6c67a9fb12b486c2637f","transactions":[]}}},{"context":{"level":"warn","file":"application.cpp","line":379,"method":"handle_block","hostname":"","thread_name":"th_a","timestamp":"2015-08-20T15:55:56"},"format":"","data":{"blk_msg":{"block":{"previous":"0000000000000000000000000000000000000000","timestamp":"2015-08-20T13:02:51","witness":"1.6.71","next_secret_hash":"86b9b6883182c63a63bbcd76fedb0a36d5e569ca","previous_secret":"0000000000000000000000000000000000000000","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"203037bdf94500a31b9d890ab55c46f8010f2b4ce8f1e385da149d5529a48b694866730aa203f63f5d75a5fd915cc3925daaa4d37b258a6c67a9fb12b486c2637f","transactions":[]},"block_id":"00000001e6f900ccb1fe3acf156074820b2e708e"},"sync_mode":true}}]}
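The failing assertion in that trace, item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ), restated as a small sketch (the window size of 1000 is an assumption for illustration):

```python
# A block links into the fork database only if its number lies within
# max_size blocks of the current head; a block numbered 1 offered against
# a head tens of thousands of blocks later fails this test, as in the log.
def links_into_fork_db(item_num, head_num, max_size=1000):
    return item_num > max(0, head_num - max_size)

assert not links_into_fork_db(1, 21000)      # rejected as invalid
assert links_into_fork_db(20500, 21000)      # within the undo window
```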
For those who are interested, I started another chain. The chain id is still "5508f5f743717fe2c78445364f62a72badd7532974d26f089af2062228b532eb":
./witness_node -s 114.92.254.159:62015 --genesis-json aug-19-puppies-test-genesis.json
commit 8a9120e5179eed2640a8d0b926b7e61980bb42a2
Author: Daniel Larimer <dan@bitshares.org>
Date: Thu Aug 20 08:47:38 2015 -0400
increasing the default minimum number of witnesses to 101 for testing
Genesis file: https://www.dropbox.com/s/xzlnoyn4tdpdede/aug-19-puppies-test-genesis.json?dl=0
OK, finally have some answers, trying to catch up with everyone. Just reviewed several pages of this thread, but can't find the aug-19th genesis download URL. Are we all moving to abit's chain now? I'll add --checkpoint '[40903,"00009fc7e518d750f653ba0ca25a3067bd304f30"]' to the witness startup too.
Still need to get voted in, not sure if 0.9.2 balance migration is necessary at this point tho...
It looks like Dan just pushed a commit that increases the minimum witness count to 101. We can run a network that won't die if 4 witnesses go down. I think we should all build again and start a new network. I am at work, but I can start a seed node on my VPS. If someone else who is going to have more time would like to start a server node instead, let me know.
commit 8a9120e5179eed2640a8d0b926b7e61980bb42a2
Author: Daniel Larimer <dan@bitshares.org>
Date: Thu Aug 20 08:47:38 2015 -0400
increasing the default minimum number of witnesses to 101 for testing
Although it looks like 714261b02a55002f05ea647aa29ab3900b490407 is even newer, so that's what I am planning on building.
git clone https://github.com/cryptonomex/graphene.git
cd graphene
git submodule update --init --recursive
cmake .
make
Yes.
So you're building 714261b02a55002f05ea647aa29ab3900b490407 ?
./witness_node --resync-blockchain -d test_net --checkpoint '[40902,"00009fc6d4e24db66e13af461b94063165999bfa"]' --enable-stale-production
#Endpoint for P2P node to listen on
p2p-endpoint = 0.0.0.0:1776
# P2P nodes to connect to on startup (may specify multiple times)
seed-node = 45.55.6.216:1776
#seed-node = 114.92.254.159:62015
# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.
# checkpoint =
# Endpoint for websocket RPC to listen on
rpc-endpoint = 127.0.0.1:8090
# Endpoint for TLS websocket RPC to listen on
# rpc-tls-endpoint =
# The TLS certificate file for this server
# server-pem =
# Password for this certificate
# server-pem-password =
# File to read Genesis State from
#genesis-json = aug-14-test-genesis.json
genesis-json = aug-19-puppies-test-genesis.json
# JSON file specifying API permissions
# api-access =
# Enable block production, even if the chain is stale.
enable-stale-production = false
# Percent of witnesses (0-99) that must be participating in order to produce blocks
required-participation = false
# Allow block production, even if the last block was produced by the same witness.
allow-consecutive = false
# ID of witness controlled by this node (e.g. "1.6.0", quotes are required, may specify multiple times)
witness-id = "1.6.1530"
# Tuple of [PublicKey, WIF private key] (may specify multiple times)
private-key = ["GPH52ms1dYJko2v5vS3rCdVLzQBogjeDRc1CpkaZ4seC4J4H7Uc71","<private signing key here>"]
# Account ID to track history for (may specify multiple times)
# track-account =
# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
bucket-size = [15,60,300,3600,86400]
# How far back in time to track history for each bucket size, measured in the number of buckets (default: 1000)
history-per-size = 1000
# declare an appender named "stderr" that writes messages to the console
[log.console_appender.stderr]
stream=std_error
# declare an appender named "p2p" that writes messages to p2p.log
[log.file_appender.p2p]
filename=logs/p2p/p2p.log
# filename can be absolute or relative to this config file
# route any messages logged to the default logger to the "stderr" logger we
# declared above, if they are info level or higher
[logger.default]
level=info
appenders=stderr
# route messages sent to the "p2p" logger to the p2p appender declared above
[logger.p2p]
level=debug
appenders=p2p
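As a quick sanity check on the market-history settings in the config above, the coverage of each bucket size is simply bucket size times history-per-size:

```python
# each bucket size keeps history_per_size buckets, so coverage in seconds
# is bucket_size * history_per_size
bucket_sizes = [15, 60, 300, 3600, 86400]
history_per_size = 1000
coverage_seconds = {b: b * history_per_size for b in bucket_sizes}

assert coverage_seconds[15] == 15_000         # ~4.2 hours of 15 s buckets
assert coverage_seconds[86400] == 86_400_000  # ~1000 days of daily buckets
```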
unlocked >>> vote_for_witness delegate.verbaltech delegate.verbaltech true true
vote_for_witness delegate.verbaltech delegate.verbaltech true true
893478ms th_a wallet.cpp:1590 sign_transaction ] Caught exception while broadcasting transaction with id e29704e03ea31a41fa9c28eb43841e2d55fc080f
0 exception: unspecified
3030001 tx_missing_active_auth: missing required active authority
Missing Active Authority 1.2.22397
Okay. 45.55.6.216:1776 is up built on commit 714261b02a55002f05ea647aa29ab3900b490407
Can you post the chain id?
{"remote_chain_id":"b8efd614e51a54d96ad6039c1210774296bf529e1c02ccbf4beb4605a2dc1ab8"}
2810693ms th_a witness.cpp:266 block_production_loo ] Got exception while generating block:
10 assert_exception: Assert Exception
witness_obj.signing_key == block_signing_private_key.get_public_key():
{}
th_a db_block.cpp:266 _generate_block
{"witness_id":"1.6.5245"}
th_a db_block.cpp:315 _generate_block
Bytemaster, is the minimum witness count hard-coded or in the genesis?
I think what he means is: if we vote in more witnesses (maybe including the init witnesses), then the number of active witnesses will automatically increase.
Yeah. I just wasn't sure if there was something I had missed in the genesis, or if the minimum was coded into the client. Looks like we need a new genesis file.
This was answered when bm changed it in the client.
commit 2f0065d593dfd06deda970f9a8df59a930c50bff
Author: Daniel Larimer <dan@bitshares.org>
Date: Thu Aug 20 13:23:25 2015 -0400
genesis file specifies 101 min
Awesome, thanks.
//Edit: ah, found it on the release/tag page.
Let's join the official testnet: https://github.com/cryptonomex/graphene/releases/tag/test1
I'm in. Witness id 1.6.5247.
956998ms th_a witness.cpp:240 block_production_loo ] slot: 71 scheduled_witness: 1.6.44 scheduled_time: 2015-08-20T23:15:57 now: 2015-08-20T23:15:57
957313ms th_a application.cpp:348 handle_block ] Got block #18559 from network
957313ms th_a application.cpp:370 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
second_branch_itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:186 fetch_branch_from
{"first":"0000487f7a7025de308d52dc7b4967bb1ee8e0ac","second":"0000483cecce88706ed008ec8318fab4816954b2"}
th_a fork_database.cpp:217 fetch_branch_from
{"new_block":{"previous":"0000487e8a4ecb97cccf50052ca6ccfe984e5082","timestamp":"2015-08-20T23:15:57","witness":"1.6.73","next_secret_hash":"b0d5a6c59704871873528d2499ba024825b2d0db","previous_secret":"9cd95d648f739607207427ba9f1a3ed4186286be","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"204942e6396e764f268a3db0cb17d009d729315fa3ca88501ce01bd8b2e4b1258b526ce28a440e7b28e42ffdf3a329902677f3a27757b3bd8d47c80d5d634a66f8","transactions":[]}}
th_a db_block.cpp:176 _push_block
957497ms th_a application.cpp:348 handle_block ] Got block #18559 from network
957498ms th_a application.cpp:370 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
second_branch_itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:186 fetch_branch_from
{"first":"0000487f7a7025de308d52dc7b4967bb1ee8e0ac","second":"0000483cecce88706ed008ec8318fab4816954b2"}
th_a fork_database.cpp:217 fetch_branch_from
{"new_block":{"previous":"0000487e8a4ecb97cccf50052ca6ccfe984e5082","timestamp":"2015-08-20T23:15:57","witness":"1.6.73","next_secret_hash":"b0d5a6c59704871873528d2499ba024825b2d0db","previous_secret":"9cd95d648f739607207427ba9f1a3ed4186286be","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"204942e6396e764f268a3db0cb17d009d729315fa3ca88501ce01bd8b2e4b1258b526ce28a440e7b28e42ffdf3a329902677f3a27757b3bd8d47c80d5d634a66f8","transactions":[]}}
th_a db_block.cpp:176 _push_block
957998ms th_a witness.cpp:240 block_production_loo ] slot: 72 scheduled_witness: 1.6.1537 scheduled_time: 2015-08-20T23:15:58 now: 2015-08-20T23:15:58
957998ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.1537 production slot has arrived; generating a block now...
Segmentation fault (core dumped)
https://graphene.bitshares.org is now up to date with the latest test network.
Good!
We are working on a bug with voting by proxy... don't enter text into that field at this time; it will lock up your browser window.
I have been able to import everything from my BTS wallet successfully. Still a lot of work to do on the GUI, but it is coming together at a consistent rate.
Failed to create wallet: TypeError: Invalid value undefined supplied to WalletTcomb/chain_id: String
It seems dead... I'm forking too; will post logs here later. Have you restarted?
Forking ...
You guys don't miss a thing!
2015-08-20T23:22:09 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.4 scheduled_time: 2015-08-20T23:22:09 now: 2015-08-20T23:22:09 witness.cpp:240
2015-08-20T23:22:09 th_a:invoke handle_block handle_block ] Got block #18921 from network application.cpp:348
2015-08-20T23:22:10 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.28 scheduled_time: 2015-08-20T23:22:10 now: 2015-08-20T23:22:10 witness.cpp:240
2015-08-20T23:22:11 th_a:Witness Block Production block_production_loo ] slot: 2 scheduled_witness: 1.6.5247 scheduled_time: 2015-08-20T23:22:11 now: 2015-08-20T23:22:11 witness.cpp:240
2015-08-20T23:22:11 th_a:Witness Block Production block_production_loo ] Witness 1.6.5247 production slot has arrived; generating a block now... witness.cpp:243
2015-08-20T23:22:11 th_a:Witness Block Production block_production_loo ] Generated block #18922 with timestamp 2015-08-20T23:22:11 at time 2015-08-20T23:22:11 witness.cpp:256
2015-08-20T23:22:11 th_a:invoke get_item get_item ] Request for item {"item_type":1001,"item_hash":"000049ea9c48e746def3d171e8bd9a6f7efef9c4"} application.cpp:443
2015-08-20T23:22:11 th_a:invoke get_item get_item ] Serving up block #18922 application.cpp:451
2015-08-20T23:22:11 th_a:invoke get_item get_item ] Request for item {"item_type":1001,"item_hash":"000049ea9c48e746def3d171e8bd9a6f7efef9c4"} application.cpp:443
2015-08-20T23:22:11 th_a:invoke get_item get_item ] Serving up block #18922 application.cpp:451
2015-08-20T23:22:11 th_a:invoke get_item get_item ] Request for item {"item_type":1001,"item_hash":"000049ea9c48e746def3d171e8bd9a6f7efef9c4"} application.cpp:443
2015-08-20T23:22:11 th_a:invoke get_item get_item ] Serving up block #18922 application.cpp:451
2015-08-20T23:22:11 th_a:invoke get_item get_item ] Request for item {"item_type":1001,"item_hash":"000049ea9c48e746def3d171e8bd9a6f7efef9c4"} application.cpp:443
2015-08-20T23:22:11 th_a:invoke get_item get_item ] Serving up block #18922 application.cpp:451
2015-08-20T23:22:11 th_a:invoke handle_block handle_block ] Got block #18922 from network application.cpp:348
2015-08-20T23:22:11 th_a:invoke get_item get_item ] Request for item {"item_type":1001,"item_hash":"000049ea9c48e746def3d171e8bd9a6f7efef9c4"} application.cpp:443
2015-08-20T23:22:11 th_a:invoke get_item get_item ] Serving up block #18922 application.cpp:451
2015-08-20T23:22:12 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.94 scheduled_time: 2015-08-20T23:22:12 now: 2015-08-20T23:22:12 witness.cpp:240
2015-08-20T23:22:12 th_a:invoke handle_block handle_block ] Got block #18923 from network application.cpp:348
2015-08-20T23:22:13 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.89 scheduled_time: 2015-08-20T23:22:13 now: 2015-08-20T23:22:13 witness.cpp:240
2015-08-20T23:22:13 th_a:invoke handle_block handle_block ] Got block #18924 from network application.cpp:348
2015-08-20T23:22:14 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.85 scheduled_time: 2015-08-20T23:22:14 now: 2015-08-20T23:22:14 witness.cpp:240
2015-08-20T23:22:14 th_a:invoke handle_block handle_block ] Got block #18925 from network application.cpp:348
2015-08-20T23:22:15 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.79 scheduled_time: 2015-08-20T23:22:15 now: 2015-08-20T23:22:15 witness.cpp:240
2015-08-20T23:22:15 th_a:invoke handle_transaction handle_transaction ] Got transaction from network application.cpp:383
2015-08-20T23:22:16 th_a:Witness Block Production block_production_loo ] slot: 2 scheduled_witness: 1.6.39 scheduled_time: 2015-08-20T23:22:16 now: 2015-08-20T23:22:16 witness.cpp:240
2015-08-20T23:22:16 th_a:invoke handle_block handle_block ] Got block #18927 from network application.cpp:348
2015-08-20T23:22:16 th_a:invoke handle_block push_block ] Pushing block to fork database that failed to link.
fork_database.cpp:57
2015-08-20T23:22:16 th_a:invoke handle_block handle_block ] Got block #18926 from network application.cpp:348
2015-08-20T23:22:16 th_a:invoke handle_block handle_block ] Error when pushing block:
6 key_not_found_exception: Key Not Found
Block 000049ee6bfb5462c87a7cc631983d36b2d5e40e not contained in block database
{"id":"000049ee6bfb5462c87a7cc631983d36b2d5e40e"}
th_a block_database.cpp:91 remove
{"id":"000049ee6bfb5462c87a7cc631983d36b2d5e40e"}
th_a block_database.cpp:102 remove
{}
th_a db_block.cpp:329 pop_block
{"new_block":{"previous":"000049ed587a0c63625259c90a08814b49dd48ad","timestamp":"2015-08-20T23:22:15","witness":"1.6.79","next_secret_hash":"be915a90267de27c859ea9b5bf2f81dd5a0075e0","previous_secret":"0286d358d2287db7ed6d1db7a3b26677cd5285b3","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"201d3dfc11a454871f2fe8b97e2bed38214e3796c8406561ca0190300a7ae6d67d6c17a904b6fbc7780450fc2fbf953d8be5a4bbd7e5604b6463a1ab00d6c05d12","transactions":[]}}
th_a db_block.cpp:176 _push_block application.cpp:370
2015-08-20T23:22:17 th_a:Witness Block Production block_production_loo ] slot: 3 scheduled_witness: 1.6.5246 scheduled_time: 2015-08-20T23:22:17 now: 2015-08-20T23:22:17 witness.cpp:240
2015-08-20T23:22:18 th_a:Witness Block Production block_production_loo ] slot: 4 scheduled_witness: 1.6.35 scheduled_time: 2015-08-20T23:22:18 now: 2015-08-20T23:22:18 witness.cpp:240
2015-08-20T23:22:18 th_a:invoke handle_block handle_block ] Got block #18928 from network application.cpp:348
2015-08-20T23:22:18 th_a:invoke handle_block push_block ] Pushing block to fork database that failed to link.
fork_database.cpp:57
2015-08-20T23:23:28 th_a:Witness Block Production block_production_loo ] slot: 74 scheduled_witness: 1.6.5247 scheduled_time: 2015-08-20T23:23:28 now: 2015-08-20T23:23:28 witness.cpp:240
2015-08-20T23:23:28 th_a:Witness Block Production block_production_loo ] Witness 1.6.5247 production slot has arrived; generating a block now... witness.cpp:243
2015-08-20T23:23:28 th_a:Witness Block Production push_block ] Pushing block to fork database that failed to link.
fork_database.cpp:57
2015-08-20T23:23:28 th_a:Witness Block Production block_production_loo ] Generated block #18927 with timestamp 2015-08-20T23:23:28 at time 2015-08-20T23:23:28 witness.cpp:256
2015-08-20T23:23:29 th_a:Witness Block Production block_production_loo ] slot: 75 scheduled_witness: 1.6.78 scheduled_time: 2015-08-20T23:23:29 now: 2015-08-20T23:23:29 witness.cpp:240
2015-08-20T23:23:29 th_a:invoke handle_block handle_block ] Got block #18998 from network application.cpp:348
2015-08-20T23:23:29 th_a:invoke handle_block handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
{}
th_a fork_database.cpp:70 _push_block
{"new_block":{"previous":"00004a357deb88db9cd862e9f9915d69d352c526","timestamp":"2015-08-20T23:23:29","witness":"1.6.78","next_secret_hash":"89e22e27a13aef32fd3787b7c368d645dcf660ec","previous_secret":"61d9377a3602fb16f3ba640e83e2a7284a2da511","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"200fe0611f97331d08bcdae3c01a35e3ceeee11d84784cd0529b1bb9e9b448d35a38a7f0058419a51d3c2dbbebdfa2eeb537db1bcf39a377db6ba4ca41f7d26437","transactions":[]}}
th_a db_block.cpp:176 _push_block application.cpp:370
2015-08-20T23:25:23 th_a:Witness Block Production block_production_loo ] slot: 189 scheduled_witness: 1.6.5247 scheduled_time: 2015-08-20T23:25:23 now: 2015-08-20T23:25:23 witness.cpp:240
2015-08-20T23:25:23 th_a:Witness Block Production block_production_loo ] Witness 1.6.5247 production slot has arrived; generating a block now... witness.cpp:243
2015-08-20T23:25:23 th_a:Witness Block Production push_block ] Pushing block to fork database that failed to link. fork_database.cpp:57
2015-08-20T23:25:23 th_a:Witness Block Production block_production_loo ] Generated block #18927 with timestamp 2015-08-20T23:25:23 at time 2015-08-20T23:25:23 witness.cpp:256
2015-08-20T23:25:23 th_a:invoke get_blockchain_synopsis get_blockchain_synop ] reference_point: 0000451829aa62b7545bae4ea469b750003bf41f number_of_blocks_after_reference_point: 0 result: ["000009ef501ed09bea4e32d71077b13bcd3c28a1","000029ef11981d056e1a70afd0fdac952774b2f7","000039ef9f6d0c5d83c52767cd98c710ac7a2d46","000041efa457c0e455f2569bc98b2e097baf8ace","000045ef42cf7a74580c57dd77cf3be887575d14","000047ef9621e0266c0a7a9195a57cbb0aef6550","000048efd72770d20925e6e74a83d7a93f213f0a","0000496fa7fcba2947cc187a5fa9e7e30d8b5f3a","000049afce3550f6f3a26efa67968d465ac16268","000049cfbf02c4d519b393a93430ee4785076736","000049df2dad0b1b7eb473d9eb6d2e6a31cc3080","000049e7c78642267a650db5c41ae256f9112e76","000049eb39306a1360497002faeea118732d6761","000049ed587a0c63625259c90a08814b49dd48ad","0000000000000000000000000000000000000000","000049efc4646a99bad9d78607972719e1a2d79d"] application.cpp:492
2015-08-20T23:25:23 th_a:invoke get_blockchain_synopsis get_blockchain_synop ] reference_point: 000007d0fc03ea61aacb907a0fd791e5384fa22a number_of_blocks_after_reference_point: 0 result: ["000009ef501ed09bea4e32d71077b13bcd3c28a1","000029ef11981d056e1a70afd0fdac952774b2f7","000039ef9f6d0c5d83c52767cd98c710ac7a2d46","000041efa457c0e455f2569bc98b2e097baf8ace","000045ef42cf7a74580c57dd77cf3be887575d14","000047ef9621e0266c0a7a9195a57cbb0aef6550","000048efd72770d20925e6e74a83d7a93f213f0a","0000496fa7fcba2947cc187a5fa9e7e30d8b5f3a","000049afce3550f6f3a26efa67968d465ac16268","000049cfbf02c4d519b393a93430ee4785076736","000049df2dad0b1b7eb473d9eb6d2e6a31cc3080","000049e7c78642267a650db5c41ae256f9112e76","000049eb39306a1360497002faeea118732d6761","000049ed587a0c63625259c90a08814b49dd48ad","0000000000000000000000000000000000000000","000049efc4646a99bad9d78607972719e1a2d79d"] application.cpp:492
2015-08-20T23:25:24 th_a:Witness Block Production block_production_loo ] slot: 0 scheduled_witness: 1.6.0 scheduled_time: 1970-01-01T00:00:00 now: 2015-08-20T23:25:24 witness.cpp:240
2015-08-20T23:25:26 th_a:Witness Block Production block_production_loo ] slot: 0 scheduled_witness: 1.6.0 scheduled_time: 1970-01-01T00:00:00 now: 2015-08-20T23:25:26 witness.cpp:240
2015-08-20T23:25:27 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.0 scheduled_time: 2015-08-20T23:25:27 now: 2015-08-20T23:25:27 witness.cpp:240
2015-08-20T23:25:27 th_a:invoke get_blockchain_synopsis get_blockchain_synop ] reference_point: 000007d0fc03ea61aacb907a0fd791e5384fa22a number_of_blocks_after_reference_point: 0 result: ["000009ef501ed09bea4e32d71077b13bcd3c28a1","000029ef11981d056e1a70afd0fdac952774b2f7","000039ef9f6d0c5d83c52767cd98c710ac7a2d46","000041efa457c0e455f2569bc98b2e097baf8ace","000045ef42cf7a74580c57dd77cf3be887575d14","000047ef9621e0266c0a7a9195a57cbb0aef6550","000048efd72770d20925e6e74a83d7a93f213f0a","0000496fa7fcba2947cc187a5fa9e7e30d8b5f3a","000049afce3550f6f3a26efa67968d465ac16268","000049cfbf02c4d519b393a93430ee4785076736","000049df2dad0b1b7eb473d9eb6d2e6a31cc3080","000049e7c78642267a650db5c41ae256f9112e76","000049eb39306a1360497002faeea118732d6761","000049ed587a0c63625259c90a08814b49dd48ad","0000000000000000000000000000000000000000","000049efc4646a99bad9d78607972719e1a2d79d"] application.cpp:492
...
2015-08-20T23:25:28 th_a:Witness Block Production block_production_loo ] slot: 2 scheduled_witness: 1.6.18 scheduled_time: 2015-08-20T23:25:28 now: 2015-08-20T23:25:28 witness.cpp:240
2015-08-20T23:26:40 th_a:Witness Block Production block_production_loo ] slot: 74 scheduled_witness: 1.6.20 scheduled_time: 2015-08-20T23:26:40 now: 2015-08-20T23:26:40 witness.cpp:240
2015-08-20T23:26:41 th_a:Witness Block Production block_production_loo ] slot: 75 scheduled_witness: 1.6.36 scheduled_time: 2015-08-20T23:26:41 now: 2015-08-20T23:26:41 witness.cpp:240
2015-08-20T23:26:42 th_a:Witness Block Production block_production_loo ] slot: 76 scheduled_witness: 1.6.5247 scheduled_time: 2015-08-20T23:26:42 now: 2015-08-20T23:26:42 witness.cpp:240
2015-08-20T23:26:42 th_a:Witness Block Production block_production_loo ] Witness 1.6.5247 production slot has arrived; generating a block now... witness.cpp:243
2015-08-20T23:26:42 th_a:Witness Block Production block_production_loo ] Got exception while generating block:
10 assert_exception: Assert Exception
_consecutive_production_enabled || db.get_dynamic_global_properties().current_witness != scheduled_witness: Last block was generated by the same witness, this node is probably disconnected from the network so block production has been disabled. Disable this check with --allow-consecutive option.
{}
th_a witness.cpp:248 block_production_loop witness.cpp:266
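The assertion in this trace is a safety check: if the last block on this node's chain was signed by the same witness that is scheduled again, the node is probably alone on a fork, so it stops producing rather than extend a dead chain. A minimal sketch of that rule in Python, with hypothetical names (the real check lives in witness.cpp and is bypassed with --allow-consecutive):

```python
def should_produce(scheduled_witness: str,
                   current_witness: str,
                   allow_consecutive: bool = False) -> bool:
    """Refuse to produce a block if we also signed the previous one,
    since that usually means we are disconnected from the network.
    Mirrors the witness.cpp assert; --allow-consecutive disables it."""
    return allow_consecutive or current_witness != scheduled_witness

# 1.6.5247 just generated block #18927 and is scheduled again: halt.
assert should_produce("1.6.5247", current_witness="1.6.5247") is False
# A different witness produced the last block: safe to proceed.
assert should_produce("1.6.5247", current_witness="1.6.78") is True
```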
$ telnet 104.236.51.238 1776
Trying 104.236.51.238...
telnet: Unable to connect to remote host: Connection refused
info
{
"head_block_num": 21949,
"head_block_id": "000055bd60276952c37246d38eee3cf9780c71d6",
"head_block_age": "0 second old",
"next_maintenance_time": "18 seconds in the future",
"chain_id": "d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083",
"active_witnesses": [
"1.6.0",
"1.6.1",
"1.6.2",
"1.6.3",
"1.6.4",
"1.6.5",
"1.6.6",
"1.6.7",
"1.6.8",
"1.6.9",
"1.6.10",
"1.6.11",
"1.6.12",
"1.6.13",
"1.6.14",
"1.6.15",
"1.6.16",
"1.6.17",
"1.6.18",
"1.6.19",
"1.6.20",
"1.6.21",
"1.6.22",
"1.6.23",
"1.6.24",
"1.6.25",
"1.6.26",
"1.6.27",
"1.6.28",
"1.6.29",
"1.6.30",
"1.6.31",
"1.6.32",
"1.6.33",
"1.6.34",
"1.6.35",
"1.6.36",
"1.6.37",
"1.6.38",
"1.6.39",
"1.6.40",
"1.6.41",
"1.6.42",
"1.6.43",
"1.6.44",
"1.6.45",
"1.6.46",
"1.6.47",
"1.6.48",
"1.6.49",
"1.6.50",
"1.6.51",
"1.6.52",
"1.6.53",
"1.6.54",
"1.6.55",
"1.6.56",
"1.6.57",
"1.6.58",
"1.6.59",
"1.6.60",
"1.6.61",
"1.6.62",
"1.6.63",
"1.6.64",
"1.6.65",
"1.6.66",
"1.6.67",
"1.6.68",
"1.6.69",
"1.6.70",
"1.6.71",
"1.6.72",
"1.6.73",
"1.6.74",
"1.6.75",
"1.6.76",
"1.6.77",
"1.6.78",
"1.6.79",
"1.6.80",
"1.6.81",
"1.6.82",
"1.6.83",
"1.6.84",
"1.6.85",
"1.6.86",
"1.6.87",
"1.6.88",
"1.6.89",
"1.6.90",
"1.6.91",
"1.6.92",
"1.6.93",
"1.6.94",
"1.6.95",
"1.6.96",
"1.6.1537",
"1.6.5246",
"1.6.5247",
"1.6.5248"
],
"active_committee_members": [
"1.5.0",
"1.5.1",
"1.5.2",
"1.5.3",
"1.5.4",
"1.5.5",
"1.5.6",
"1.5.7",
"1.5.8",
"1.5.9",
"1.5.10"
],
"entropy": "a04cd87714d3013b2ed9b14599ae2b2ad7b7e8e1"
}
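Note that the head_block_id above is self-describing: in Graphene, the first four bytes of a block id are the big-endian block number, so "000055bd..." decodes straight to head_block_num 21949. A small helper demonstrating this:

```python
def block_num_from_id(block_id: str) -> int:
    """Recover the block height from the first 8 hex chars of a
    Graphene block id (the remaining bytes come from the block hash)."""
    return int(block_id[:8], 16)

# Values taken from the `info` output above: 0x55bd == 21949.
assert block_num_from_id("000055bd60276952c37246d38eee3cf9780c71d6") == 21949
```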
Sounds good. May I have your p2p IP/port so that I can re-sync?
I'm seeing a few brief pauses occasionally that are most likely missed blocks, but no sign of a serious failure yet. I'll dig into the logs and see what I can find. Is there a command I'm missing to get delegate participation, or recent missed blocks, or list forks?
new >>> get_dynamic_global_properties
{
"id": "2.1.0",
"random": "42f2b51fa6a239642d8394d1806f9f8036b55f4b",
"head_block_number": 22475,
"head_block_id": "000057cbdb4a4e158cb39094bc7c078c86680be6",
"time": "2015-08-21T00:23:49",
"current_witness": "1.6.1",
"next_maintenance_time": "2015-08-21T00:25:00",
"witness_budget": 0,
"accounts_registered_this_interval": 0,
"recently_missed_count": 2,
"dynamic_flags": 0
}
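Besides recently_missed_count above, a crude way to spot missed blocks is to poll head_block_num and compare the block delta against elapsed wall time. A hypothetical helper, assuming this testnet's 1-second block interval:

```python
def missed_slots(samples, block_interval=1):
    """Given (unix_time, head_block_num) samples, estimate how many
    production slots passed without a block between consecutive polls."""
    missed = 0
    for (t0, n0), (t1, n1) in zip(samples, samples[1:]):
        expected = (t1 - t0) // block_interval
        missed += max(0, expected - (n1 - n0))
    return missed

# 10 seconds elapsed but only 8 new blocks arrived -> 2 slots missed.
assert missed_slots([(0, 100), (10, 108)]) == 2
```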
Try connecting to 176.221.43.130:33323, that's where all the blocks I get seem to be coming from, so I'm guessing that's my peer with lowest latency to the rest of the network.
Yes, that's one.
And puppies' node 45.55.6.216:1776
But it's hard for my node to connect to them.
2015-08-21T01:57:22 p2p:message read_loop process_block_during ] Successfully pushed block 27866 (id:00006cda44c0d8b2e2566d15dfe7a2610430a774) node.cpp:3109
2015-08-21T01:57:22 p2p:message read_loop process_block_during ] client validated the block, advertising it to other peers node.cpp:3133
2015-08-21T01:57:22 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 1 items advertised to peer (114 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T01:57:22 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (0 left), and 0 advertised to us (115 left) peer_connection.cpp:479
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] beginning an iteration of advertise inventory node.cpp:1175
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: false node.cpp:1188
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] inventory_to_advertise: [{"item_type":1001,"item_hash":"0a1c44a16235a09d4c84668842c85471b45e40e0"}] node.cpp:1196
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_advertised_to_peer.find(item_to_advertise): {"item":{"item_type":3624437200,"item_hash":"ff7f00002b00000000000000ffffffffff7f0000"},"timestamp":"2023-11-27T10:29:02"} node.cpp:1200
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_peer_advertised_to_us.find(item_to_advertise): {"item":{"item_type":3624438048,"item_hash":"ff7f00000000000000000000c8b007d8ff7f0000"},"timestamp":"1948-10-02T17:47:20"} node.cpp:1202
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] advertising item 0a1c44a16235a09d4c84668842c85471b45e40e0 to peer 176.221.43.130:33323 node.cpp:1212
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] advertising 1 new item(s) of 1 type(s) to peer 176.221.43.130:33323 node.cpp:1218
2015-08-21T01:57:22 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (115 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: false node.cpp:1188
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] inventory_to_advertise: [{"item_type":1001,"item_hash":"0a1c44a16235a09d4c84668842c85471b45e40e0"}] node.cpp:1196
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_advertised_to_peer.find(item_to_advertise): {"item":{"item_type":3623881952,"item_hash":"ff7f0000000000000000000068a305d8ff7f0000"},"timestamp":"1948-09-25T19:23:04"} node.cpp:1200
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] advertising 0 new item(s) of 0 type(s) to peer 45.55.6.216:1776 node.cpp:1218
2015-08-21T01:57:22 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (0 left), and 0 advertised to us (115 left) peer_connection.cpp:479
2015-08-21T01:57:22 p2p:advertise_inventory_loop send_message ] peer_connection::send_message() enqueueing message of type 5001 for peer 176.221.43.130:33323 peer_connection.cpp:365
2015-08-21T01:57:22 p2p:advertise_inventory_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T01:57:22 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (0 items to fetch) node.cpp:1104
2015-08-21T01:57:22 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5001 for peer 176.221.43.130:33323 peer_connection.cpp:291
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 176.221.43.130:33323 peer_connection.cpp:294
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T01:57:22 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T01:57:22 p2p:connect_to_task connect_to ] established outbound connection to 114.92.254.159:62015 peer_connection.cpp:251
2015-08-21T01:57:22 p2p:connect_to_task send_message ] peer_connection::send_message() enqueueing message of type 5006 for peer 114.92.254.159:62015 peer_connection.cpp:365
2015-08-21T01:57:22 p2p:connect_to_task send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T01:57:22 p2p:connect_to_task connect_to_task ] Sent "hello" to peer 114.92.254.159:62015 node.cpp:4093
2015-08-21T01:57:22 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5006 for peer 114.92.254.159:62015 peer_connection.cpp:291
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 114.92.254.159:62015 peer_connection.cpp:294
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T01:57:22 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T01:57:23 p2p:message read_loop on_message ] handling message hello_message_type 1ee7de2b2696aece8234a8c1198d21bbee1d4e29 size 528 from peer 114.92.254.159:62015 node.cpp:1651
2015-08-21T01:57:23 p2p:message read_loop send_message ] peer_connection::send_message() enqueueing message of type 5007 for peer 114.92.254.159:62015 peer_connection.cpp:365
2015-08-21T01:57:23 p2p:message read_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T01:57:23 p2p:message read_loop on_hello_message ] Received a hello_message from peer 114.92.254.159:62015, sending reply to accept connection node.cpp:1967
2015-08-21T01:57:23 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5007 for peer 114.92.254.159:62015 peer_connection.cpp:291
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 114.92.254.159:62015 peer_connection.cpp:294
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T01:57:23 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T01:57:23 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 161f660391f8a231b70a77010704f124db1196af size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T01:57:23 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (0 left), and 1 advertised to us (114 left) peer_connection.cpp:479
2015-08-21T01:57:23 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T01:57:23 p2p:message read_loop on_item_ids_inventor ] adding item 07eb3cbe0052b4a4ee0ed8dec43655b6965f0b1d from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T01:57:23 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (1 items to fetch) node.cpp:1104
2015-08-21T01:57:23 p2p:fetch_items_loop fetch_items_loop ] requesting item 07eb3cbe0052b4a4ee0ed8dec43655b6965f0b1d from peer 45.55.6.216:1776 node.cpp:1123
2015-08-21T01:57:23 p2p:fetch_items_loop send_message ] peer_connection::send_message() enqueueing message of type 5004 for peer 45.55.6.216:1776 peer_connection.cpp:365
2015-08-21T01:57:23 p2p:fetch_items_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T01:57:23 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5004 for peer 45.55.6.216:1776 peer_connection.cpp:291
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 45.55.6.216:1776 peer_connection.cpp:294
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T01:57:23 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T01:57:23 p2p:message read_loop on_message ] handling message block_message_type 07eb3cbe0052b4a4ee0ed8dec43655b6965f0b1d size 172 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T01:57:23 p2p:message read_loop process_block_during ] received a block from peer 45.55.6.216:1776, passing it to client node.cpp:3087
2015-08-21T01:57:23 p2p:message read_loop process_block_during ] Failed to push block 27867 (id:00006cdb2f8937565fe6b272e3b43073be37a0de), client rejected block sent by peer node.cpp:3198
and a little while later:
1671470ms th_a application.cpp:348 handle_block ] Got block #29623 from network
1672475ms th_a application.cpp:348 handle_block ] Got block #29624 from network
1675558ms th_a application.cpp:348 handle_block ] Got block #29627 from network
1675558ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1675722ms th_a application.cpp:348 handle_block ] Got block #29626 from network
1675722ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1676472ms th_a application.cpp:348 handle_block ] Got block #29628 from network
1676472ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1677471ms th_a application.cpp:348 handle_block ] Got block #29629 from network
1677471ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1678473ms th_a application.cpp:348 handle_block ] Got block #29630 from network
My home node keeps losing sync.Code: [Select]2015-08-21T01:57:22 p2p:message read_loop process_block_during ] Successfully pushed block 27866 (id:00006cda44c0d8b2e2566d15dfe7a2610430a774) node.cpp:3109
2015-08-21T01:57:22 p2p:message read_loop process_block_during ] client validated the block, advertising it to other peers node.cpp:3133
2015-08-21T01:57:22 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 1 items advertised to peer (114 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T01:57:22 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (0 left), and 0 advertised to us (115 left) peer_connection.cpp:479
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] beginning an iteration of advertise inventory node.cpp:1175
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: false node.cpp:1188
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] inventory_to_advertise: [{"item_type":1001,"item_hash":"0a1c44a16235a09d4c84668842c85471b45e40e0"}] node.cpp:1196
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_advertised_to_peer.find(item_to_advertise): {"item":{"item_type":3624437200,"item_hash":"ff7f00002b00000000000000ffffffffff7f0000"},"timestamp":"2023-11-27T10:29:02"} node.cpp:1200
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_peer_advertised_to_us.find(item_to_advertise): {"item":{"item_type":3624438048,"item_hash":"ff7f00000000000000000000c8b007d8ff7f0000"},"timestamp":"1948-10-02T17:47:20"} node.cpp:1202
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] advertising item 0a1c44a16235a09d4c84668842c85471b45e40e0 to peer 176.221.43.130:33323 node.cpp:1212
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] advertising 1 new item(s) of 1 type(s) to peer 176.221.43.130:33323 node.cpp:1218
2015-08-21T01:57:22 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (115 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: false node.cpp:1188
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] inventory_to_advertise: [{"item_type":1001,"item_hash":"0a1c44a16235a09d4c84668842c85471b45e40e0"}] node.cpp:1196
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_advertised_to_peer.find(item_to_advertise): {"item":{"item_type":3623881952,"item_hash":"ff7f0000000000000000000068a305d8ff7f0000"},"timestamp":"1948-09-25T19:23:04"} node.cpp:1200
2015-08-21T01:57:22 p2p:advertise_inventory_loop advertise_inventory_ ] advertising 0 new item(s) of 0 type(s) to peer 45.55.6.216:1776 node.cpp:1218
2015-08-21T01:57:22 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (0 left), and 0 advertised to us (115 left) peer_connection.cpp:479
2015-08-21T01:57:22 p2p:advertise_inventory_loop send_message ] peer_connection::send_message() enqueueing message of type 5001 for peer 176.221.43.130:33323 peer_connection.cpp:365
2015-08-21T01:57:22 p2p:advertise_inventory_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T01:57:22 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (0 items to fetch) node.cpp:1104
2015-08-21T01:57:22 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5001 for peer 176.221.43.130:33323 peer_connection.cpp:291
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 176.221.43.130:33323 peer_connection.cpp:294
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T01:57:22 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T01:57:22 p2p:connect_to_task connect_to ] established outbound connection to 114.92.254.159:62015 peer_connection.cpp:251
2015-08-21T01:57:22 p2p:connect_to_task send_message ] peer_connection::send_message() enqueueing message of type 5006 for peer 114.92.254.159:62015 peer_connection.cpp:365
2015-08-21T01:57:22 p2p:connect_to_task send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T01:57:22 p2p:connect_to_task connect_to_task ] Sent "hello" to peer 114.92.254.159:62015 node.cpp:4093
2015-08-21T01:57:22 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5006 for peer 114.92.254.159:62015 peer_connection.cpp:291
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 114.92.254.159:62015 peer_connection.cpp:294
2015-08-21T01:57:22 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T01:57:22 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T01:57:23 p2p:message read_loop on_message ] handling message hello_message_type 1ee7de2b2696aece8234a8c1198d21bbee1d4e29 size 528 from peer 114.92.254.159:62015 node.cpp:1651
2015-08-21T01:57:23 p2p:message read_loop send_message ] peer_connection::send_message() enqueueing message of type 5007 for peer 114.92.254.159:62015 peer_connection.cpp:365
2015-08-21T01:57:23 p2p:message read_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T01:57:23 p2p:message read_loop on_hello_message ] Received a hello_message from peer 114.92.254.159:62015, sending reply to accept connection node.cpp:1967
2015-08-21T01:57:23 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5007 for peer 114.92.254.159:62015 peer_connection.cpp:291
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 114.92.254.159:62015 peer_connection.cpp:294
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T01:57:23 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T01:57:23 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 161f660391f8a231b70a77010704f124db1196af size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T01:57:23 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (0 left), and 1 advertised to us (114 left) peer_connection.cpp:479
2015-08-21T01:57:23 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T01:57:23 p2p:message read_loop on_item_ids_inventor ] adding item 07eb3cbe0052b4a4ee0ed8dec43655b6965f0b1d from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T01:57:23 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (1 items to fetch) node.cpp:1104
2015-08-21T01:57:23 p2p:fetch_items_loop fetch_items_loop ] requesting item 07eb3cbe0052b4a4ee0ed8dec43655b6965f0b1d from peer 45.55.6.216:1776 node.cpp:1123
2015-08-21T01:57:23 p2p:fetch_items_loop send_message ] peer_connection::send_message() enqueueing message of type 5004 for peer 45.55.6.216:1776 peer_connection.cpp:365
2015-08-21T01:57:23 p2p:fetch_items_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T01:57:23 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5004 for peer 45.55.6.216:1776 peer_connection.cpp:291
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 45.55.6.216:1776 peer_connection.cpp:294
2015-08-21T01:57:23 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T01:57:23 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T01:57:23 p2p:message read_loop on_message ] handling message block_message_type 07eb3cbe0052b4a4ee0ed8dec43655b6965f0b1d size 172 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T01:57:23 p2p:message read_loop process_block_during ] received a block from peer 45.55.6.216:1776, passing it to client node.cpp:3087
2015-08-21T01:57:23 p2p:message read_loop process_block_during ] Failed to push block 27867 (id:00006cdb2f8937565fe6b272e3b43073be37a0de), client rejected block sent by peer node.cpp:3198
1671470ms th_a application.cpp:348 handle_block ] Got block #29623 from network
1672475ms th_a application.cpp:348 handle_block ] Got block #29624 from network
1675558ms th_a application.cpp:348 handle_block ] Got block #29627 from network
1675558ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1675722ms th_a application.cpp:348 handle_block ] Got block #29626 from network
1675722ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1676472ms th_a application.cpp:348 handle_block ] Got block #29628 from network
1676472ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1677471ms th_a application.cpp:348 handle_block ] Got block #29629 from network
1677471ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1678473ms th_a application.cpp:348 handle_block ] Got block #29630 from network
get_block 27867
{
"previous": "00006cda44c0d8b2e2566d15dfe7a2610430a774",
"timestamp": "2015-08-21T01:57:23",
"witness": "1.6.71",
"next_secret_hash": "74adc395b0e7aff2e854300a0d0e913fc83da333",
"previous_secret": "2c2d4cb66202a45f75a7fd8876f4886694ee7bf5",
"transaction_merkle_root": "0000000000000000000000000000000000000000",
"extensions": [],
"witness_signature": "1f14fa5a625f2e7f21a2fb80dff311f2da97da5834f7f21718dde7d904c848043e04fa596927d006090356854d0d19bd0e741b0f186c8c54f0070eddd2635e5608",
"transactions": []
}
new >>> get_dynamic_global_properties
get_dynamic_global_properties
{
"id": "2.1.0",
"random": "b8c6bfb33a7c6806f4d1d02784f8020891935417",
"head_block_number": 29773,
"head_block_id": "0000744d5eb7d745a0de9541172ce4898de459c9",
"time": "2015-08-21T02:30:29",
"current_witness": "1.6.6",
"next_maintenance_time": "2015-08-21T02:35:00",
"witness_budget": 96821279,
"accounts_registered_this_interval": 0,
"recently_missed_count": 0,
"dynamic_flags": 0
}
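A handy detail when reading output like the above: in Graphene-based chains the first four bytes of a block id encode the block number in big-endian, so `head_block_id` can be checked against `head_block_number` directly. A minimal sketch (the helper name is my own):

```python
def block_num_from_id(block_id: str) -> int:
    # In Graphene, a block id is a 40-char hex string whose first 4 bytes
    # (8 hex digits) are the block number in big-endian; the remainder is
    # a truncated hash of the block header.
    return int(block_id[:8], 16)

# head_block_id from the get_dynamic_global_properties output above
head_id = "0000744d5eb7d745a0de9541172ce4898de459c9"
print(block_num_from_id(head_id))  # 29773, matching head_block_number
```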
get_block 27868
{
"previous": "00006cdb2f8937565fe6b272e3b43073be37a0de",
"timestamp": "2015-08-21T01:57:24",
"witness": "1.6.78",
"next_secret_hash": "202f2a4a975f81e64b6caa1d46f9423ca1cbdde4",
"previous_secret": "93d75a091fe435d5e92e66c29f2c55ae9449ea56",
"transaction_merkle_root": "0000000000000000000000000000000000000000",
"extensions": [],
"witness_signature": "1f1b27b0e3b05b097eb773e11306513f46b90ca1f2f5c394a54836533d988887471813ddf0c4526687b13d2edbb43b2e2389b75245e4c48d4e3ab78f9015a1287a",
"transactions": []
}
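One quick consistency check on the two `get_block` results above: each block's "previous" field is the id of the block before it, and since the first four bytes of a Graphene block id are the block number, the "previous" pointers can be verified without fetching more blocks. A sketch with values copied from the output above (helper name is my own):

```python
def block_num_from_id(block_id: str) -> int:
    # First 4 bytes of a Graphene block id = block number, big-endian
    return int(block_id[:8], 16)

# "previous" fields from the get_block 27867 and get_block 27868 results above
b27867 = {"previous": "00006cda44c0d8b2e2566d15dfe7a2610430a774"}
b27868 = {"previous": "00006cdb2f8937565fe6b272e3b43073be37a0de"}

# Each "previous" id embeds the number of the block it points to,
# so the blocks do link: 27868 -> 27867 -> 27866.
print(block_num_from_id(b27867["previous"]))  # 27866
print(block_num_from_id(b27868["previous"]))  # 27867
```

So the rejection of block 27867 was not caused by a broken "previous" pointer.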
th_a db_block.cpp:176 _push_block
2697067ms th_a witness.cpp:240 block_production_loo ] slot: 653 scheduled_witness: 1.6.6 scheduled_time: 2015-08-21T02:44:57 now: 2015-08-21T02:44:57
2697483ms th_a application.cpp:348 handle_block ] Got block #30610 from network
2697483ms th_a application.cpp:370 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
{}
th_a fork_database.cpp:70 _push_block
{"new_block":{"previous":"00007791ba405833b20a74e788cfc1c3e5b2909e","timestamp":"2015-08-21T02:44:57","witness":"1.6.0","next_secret_hash":"d0120d9cae36b654773f0285cd27224f5a66882b","previous_secret":"5dc2e1ffee2e6df079a5da200c9f6a0c89f0ab73","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f5a9dfeeb9eadcee4d6171a7f615476ef0f441faf3c4193d8b0febae1fea965ac1ca260a72fff57b5c5de4b6af62bddf3fe22c6a43c6a6f23ba456de8c2711453","transactions":[]}}
th_a db_block.cpp:176 _push_block
2697701ms th_a application.cpp:348 handle_block ] Got block #30610 from network
2697702ms th_a application.cpp:370 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
{}
th_a fork_database.cpp:70 _push_block
{"new_block":{"previous":"00007791ba405833b20a74e788cfc1c3e5b2909e","timestamp":"2015-08-21T02:44:57","witness":"1.6.0","next_secret_hash":"d0120d9cae36b654773f0285cd27224f5a66882b","previous_secret":"5dc2e1ffee2e6df079a5da200c9f6a0c89f0ab73","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f5a9dfeeb9eadcee4d6171a7f615476ef0f441faf3c4193d8b0febae1fea965ac1ca260a72fff57b5c5de4b6af62bddf3fe22c6a43c6a6f23ba456de8c2711453","transactions":[]}}
th_a db_block.cpp:176 _push_block
2698061ms th_a witness.cpp:240 block_production_loo ] slot: 654 scheduled_witness: 1.6.60 scheduled_time: 2015-08-21T02:44:58 now: 2015-08-21T02:44:58
2698483ms th_a application.cpp:348 handle_block ] Got block #30611 from network
2698483ms th_a application.cpp:370 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
{}
th_a fork_database.cpp:70 _push_block
{"new_block":{"previous":"000077927c5de70811b512694832fc9d12f36eb0","timestamp":"2015-08-21T02:44:58","witness":"1.6.93","next_secret_hash":"fc999ce474e5b02294da89b547e132814b8ec344","previous_secret":"48ac013a4ad1d99ad366e527c8fb1fbca696a4ee","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f7630ab0ddb7677a285515e8de19f66c6d649ed1ac7cc486c730e99633455d7c23ea243286ec1ef4100276f8e317f9cccbd01cb1a2556ed397e0c82a4e85a6275","transactions":[]}}
th_a db_block.cpp:176 _push_block
2698703ms th_a application.cpp:348 handle_block ] Got block #30611 from network
2698703ms th_a application.cpp:370 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
{}
th_a fork_database.cpp:70 _push_block
{"new_block":{"previous":"000077927c5de70811b512694832fc9d12f36eb0","timestamp":"2015-08-21T02:44:58","witness":"1.6.93","next_secret_hash":"fc999ce474e5b02294da89b547e132814b8ec344","previous_secret":"48ac013a4ad1d99ad366e527c8fb1fbca696a4ee","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f7630ab0ddb7677a285515e8de19f66c6d649ed1ac7cc486c730e99633455d7c23ea243286ec1ef4100276f8e317f9cccbd01cb1a2556ed397e0c82a4e85a6275","transactions":[]}}
th_a db_block.cpp:176 _push_block
And it keeps happening over and over again on that node, while my VPS keeps chugging along. Is there a way to turn on more logging options? All I currently see is the p2p.log.
Lots and lots of this in my most recent log:
...
# declare an appender named "stderr" that writes messages to the console
[log.console_appender.stderr]
stream=std_error
# declare an appender named "p2p" that writes messages to p2p.log
[log.file_appender.p2p]
filename=logs/p2p/p2p.log
# filename can be absolute or relative to this config file
# route any messages logged to the default logger to the "stderr" logger we
# declared above, if they are info level or higher
[logger.default]
level=info
appenders=stderr
# route messages sent to the "p2p" logger to the p2p appender declared above
[logger.p2p]
level=debug
appenders=p2p
# declare a file appender named "stderr" that writes messages to logs/stderr/stderr.log
[log.file_appender.stderr]
filename=logs/stderr/stderr.log
# declare an appender named "p2p" that writes messages to p2p.log
[log.file_appender.p2p]
filename=logs/p2p/p2p.log
# filename can be absolute or relative to this config file
# route any messages logged to the default logger to the "stderr" logger we
# declared above, if they are info level or higher
[logger.default]
level=info
appenders=stderr
# route messages sent to the "p2p" logger to the p2p appender declared above
[logger.p2p]
level=debug
appenders=p2p
Try connecting to 176.221.43.130:33323, that's where all the blocks I get seem to be coming from, so I'm guessing that's my peer with lowest latency to the rest of the network.
Thanks. I am running with the new logging now.
Interestingly enough, those changes prevent anything from showing up in the terminal.
# declare an appender named "stderr" that writes messages to the console
[log.console_appender.stderr]
stream=std_error
# declare an appender named "p2p" that writes messages to p2p.log
[log.file_appender.p2p]
filename=logs/p2p/p2p.log
# filename can be absolute or relative to this config file
# route any messages logged to the default logger to the "stderr" logger we
# declared above, if they are info level or higher
[logger.default]
level=info
appenders=stderr
# route messages sent to the "p2p" logger to the p2p appender declared above
[logger.p2p]
level=debug
appenders=p2p,stderr
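The config above is standard ini syntax, so it can be sanity-checked with Python's `configparser` to confirm which appenders each logger is wired to; with `appenders=p2p,stderr` on `[logger.p2p]`, p2p messages should go to both the file and the console. A quick sketch using a stripped-down copy of the config:

```python
import configparser

# Comment-free fragment of the logging config shown above
CONFIG = """
[log.console_appender.stderr]
stream=std_error
[log.file_appender.p2p]
filename=logs/p2p/p2p.log
[logger.default]
level=info
appenders=stderr
[logger.p2p]
level=debug
appenders=p2p,stderr
"""

cfg = configparser.ConfigParser()
cfg.read_string(CONFIG)

print(cfg["logger.p2p"]["appenders"])    # p2p,stderr
print(cfg["logger.default"]["level"])    # info
```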
921000ms th_a witness.cpp:240 block_production_loo ] slot: 2 scheduled_witness: 1.6.93 scheduled_time: 2015-08-21T03:15:21 now: 2015-08-21T03:15:21
922000ms th_a witness.cpp:240 block_production_loo ] slot: 3 scheduled_witness: 1.6.1537 scheduled_time: 2015-08-21T03:15:22 now: 2015-08-21T03:15:22
922000ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.1537 production slot has arrived; generating a block now...
922019ms th_a witness.cpp:256 block_production_loo ] Generated block #32355 with timestamp 2015-08-21T03:15:22 at time 2015-08-21T03:15:22
923000ms th_a witness.cpp:240 block_production_loo ] slot: 1 scheduled_witness: 1.6.21 scheduled_time: 2015-08-21T03:15:23 now: 2015-08-21T03:15:23
924000ms th_a witness.cpp:240 block_production_loo ] slot: 2 scheduled_witness: 1.6.37 scheduled_time: 2015-08-21T03:15:24 now: 2015-08-21T03:15:24
925000ms th_a witness.cpp:240 block_production_loo ] slot: 3 scheduled_witness: 1.6.96 scheduled_time: 2015-08-21T03:15:25 now: 2015-08-21T03:15:25
926000ms th_a witness.cpp:240 block_production_loo ] slot: 4 scheduled_witness: 1.6.14 scheduled_time: 2015-08-21T03:15:26 now: 2015-08-21T03:15:26
927000ms th_a witness.cpp:240 block_production_loo ] slot: 5 scheduled_witness: 1.6.60 scheduled_time: 2015-08-21T03:15:27 now: 2015-08-21T03:15:27
928000ms th_a witness.cpp:240 block_production_loo ] slot: 6 scheduled_witness: 1.6.32 scheduled_time: 2015-08-21T03:15:28 now: 2015-08-21T03:15:28
929000ms th_a witness.cpp:240 block_production_loo ] slot: 7 scheduled_witness: 1.6.84 scheduled_time: 2015-08-21T03:15:29 now: 2015-08-21T03:15:29
930000ms th_a witness.cpp:240 block_production_loo ] slot: 8 scheduled_witness: 1.6.26 scheduled_time: 2015-08-21T03:15:30 now: 2015-08-21T03:15:30
931000ms th_a witness.cpp:240 block_production_loo ] slot: 9 scheduled_witness: 1.6.76 scheduled_time: 2015-08-21T03:15:31 now: 2015-08-21T03:15:31
931268ms th_a application.cpp:492 get_blockchain_synop ] reference_point: 0000000000000000000000000000000000000000 number_of_blocks_after_reference_point: 0 result: ["00003e63934b84f27b5a0afde42a7627fda771d4","00005e63bcf5fb27247d2a611a50f0b5fa559755","00006e633c250ba066e1848bf2907518cd81febf","000076635430a274998a8ffe6e76d9895cb36042","00007a63bc5b0d4009fc46e0572bbeed45b838a6","00007c6304c038950fa23836fd3be4c98296850f","00007d633081235cec1462b6954df2dea5aaa55b","00007de36fb03c220c6cdfa4b367f16f122f9da0","00007e230bf4f5b3813f0c28df1b2b73b2d0c8fc","00007e43af64f90d3c03fff6ce2f7cfbc0edfcf2","00007e53cbc14d88598fe80fa4250164f3d87272","00007e5b37b41f58639a14686db50a9d415332fe","00007e5f84a48d00a63bf9efa6a7748add0bad43","00007e6193119832728d72fd742288c7919233bf","00007e624f4b577b2f9bdffde6666cdbbb96345e","00007e6386f99dd87b9ef26cbe7704723ce60e03"]
932000ms th_a witness.cpp:240 block_production_loo ] slot: 10 scheduled_witness: 1.6.55 scheduled_time: 2015-08-21T03:15:32 now: 2015-08-21T03:15:32
932331ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"00007e6386f99dd87b9ef26cbe7704723ce60e03"}
932331ms th_a application.cpp:451 get_item ] Serving up block #32355
932333ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"00007e6386f99dd87b9ef26cbe7704723ce60e03"}
932333ms th_a application.cpp:451 get_item ] Serving up block #32355
932352ms th_a application.cpp:370 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
!_stack.empty():
{}
th_a undo_database.cpp:155 pop_commit
{}
th_a object_database.cpp:102 pop_undo
{}
th_a db_block.cpp:329 pop_block
{"new_block":{"previous":"00007e63713b745a74f3988377f6b520e1fe1361","timestamp":"2015-08-21T03:15:21","witness":"1.6.93","next_secret_hash":"5bb6e0b4cd3a80a374a284ad762aa12fb0ae5645","previous_secret":"5a45616623bb4cfe020e8ca712aa77067b41a8d1","transaction_merkle_root":"501b214c34702ed6be5410f66e464c358c3bc30a","extensions":[],"witness_signature":"205b6985a09cd3b66500a1189fa6deae7133a80a492c7984b0414aef957ec9b53022851a3bc4a48da945f3c699de19b1c9d35b3bb033e6a029939e3837ab70148a","transactions":[{"ref_block_num":32351,"ref_block_prefix":9282692,"expiration":"2015-08-21T03:15:49","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.22309","to":"1.2.63354","amount":{"amount":100000,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["1f0b1a1ad0ab2a3df9f0f4019aa6237cebbf1f982786e5f9aa1ce356b1af2a77c02368be41a9079060b8b74100fbbe631e6290a4d785a88da362a4215eb472fe34"],"operation_results":[[0,{}]]}]}}
th_a db_block.cpp:176 _push_block
1503270ms p2p node.cpp:4404 dump_node_status ] ----------------- PEER STATUS UPDATE --------------------
1503270ms p2p node.cpp:4407 dump_node_status ] number of peers: 1 active, 0, 0 closing. attempting to maintain 20 - 200 peers
1503271ms p2p node.cpp:4412 dump_node_status ] active peer 176.221.43.130:33323 peer_is_in_sync_with_us:true we_are_in_sync_with_peer:true
1503271ms p2p node.cpp:4425 dump_node_status ] --------- MEMORY USAGE ------------
1503271ms p2p node.cpp:4426 dump_node_status ] node._active_sync_requests size: 0
1503271ms p2p node.cpp:4427 dump_node_status ] node._received_sync_items size: 0
1503271ms p2p node.cpp:4428 dump_node_status ] node._new_received_sync_items size: 0
1503271ms p2p node.cpp:4429 dump_node_status ] node._items_to_fetch size: 0
1503271ms p2p node.cpp:4430 dump_node_status ] node._new_inventory size: 0
1503271ms p2p node.cpp:4431 dump_node_status ] node._message_cache size: 2
1503271ms p2p node.cpp:4434 dump_node_status ] peer 176.221.43.130:33323
1503272ms p2p node.cpp:4435 dump_node_status ] peer.ids_of_items_to_get size: 0
1503272ms p2p node.cpp:4436 dump_node_status ] peer.inventory_peer_advertised_to_us size: 117
1503272ms p2p node.cpp:4437 dump_node_status ] peer.inventory_advertised_to_peer size: 0
1503272ms p2p node.cpp:4438 dump_node_status ] peer.items_requested_from_peer size: 0
1503272ms p2p node.cpp:4439 dump_node_status ] peer.sync_items_requested_from_peer size: 0
1503272ms p2p node.cpp:4441 dump_node_status ] --------- END MEMORY USAGE ------------
2015-08-21T03:09:24 th_a:invoke handle_block handle_block ] Got block #32017 from network application.cpp:348
2015-08-21T03:09:24 th_a:Witness Block Production block_production_loo ] slot: 1 scheduled_witness: 1.6.5 scheduled_time: 2015-08-21T03:09:25 now: 2015-08-21T03:09:25 witness.cpp:240
2015-08-21T03:09:25 th_a:Witness Block Production block_production_loo ] slot: 2 scheduled_witness: 1.6.5248 scheduled_time: 2015-08-21T03:09:26 now: 2015-08-21T03:09:26 witness.cpp:240
2015-08-21T03:09:26 th_a:Witness Block Production block_production_loo ] slot: 3 scheduled_witness: 1.6.15 scheduled_time: 2015-08-21T03:09:27 now: 2015-08-21T03:09:27 witness.cpp:240
2015-08-21T03:09:27 th_a:Witness Block Production block_production_loo ] slot: 4 scheduled_witness: 1.6.58 scheduled_time: 2015-08-21T03:09:28 now: 2015-08-21T03:09:28 witness.cpp:240
2015-08-21T03:09:28 th_a:Witness Block Production block_production_loo ] slot: 5 scheduled_witness: 1.6.17 scheduled_time: 2015-08-21T03:09:29 now: 2015-08-21T03:09:29 witness.cpp:240
2015-08-21T03:09:29 th_a:Witness Block Production block_production_loo ] slot: 6 scheduled_witness: 1.6.11 scheduled_time: 2015-08-21T03:09:30 now: 2015-08-21T03:09:30 witness.cpp:240
2015-08-21T03:09:30 th_a:Witness Block Production block_production_loo ] slot: 7 scheduled_witness: 1.6.83 scheduled_time: 2015-08-21T03:09:31 now: 2015-08-21T03:09:31 witness.cpp:240
2015-08-21T03:09:31 th_a:Witness Block Production block_production_loo ] slot: 8 scheduled_witness: 1.6.61 scheduled_time: 2015-08-21T03:09:32 now: 2015-08-21T03:09:32 witness.cpp:240
2015-08-21T03:09:32 th_a:Witness Block Production block_production_loo ] slot: 9 scheduled_witness: 1.6.69 scheduled_time: 2015-08-21T03:09:33 now: 2015-08-21T03:09:33 witness.cpp:240
2015-08-21T03:09:33 th_a:invoke handle_block handle_block ] Got block #32024 from network application.cpp:348
2015-08-21T03:09:33 th_a:invoke handle_block push_block ] Pushing block to fork database that failed to link.
2015-08-21T03:09:23 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (111 left), and 0 advertised to us (6 left) peer_connection.cpp:479
2015-08-21T03:09:23 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T03:09:24 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 9f6bdadc2780239f7460b94b263593cf5b0288a4 size 25 from peer 176.221.43.130:33323 node.cpp:1651
2015-08-21T03:09:24 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (3 left), and 1 advertised to us (113 left) peer_connection.cpp:479
2015-08-21T03:09:24 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 176.221.43.130:33323 node.cpp:2613
2015-08-21T03:09:24 p2p:message read_loop on_item_ids_inventor ] adding item bb51709a4c2a3c64bdf260e28908c05b1f849a7b from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T03:09:24 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (1 items to fetch) node.cpp:1104
2015-08-21T03:09:24 p2p:fetch_items_loop fetch_items_loop ] requesting item bb51709a4c2a3c64bdf260e28908c05b1f849a7b from peer 176.221.43.130:33323 node.cpp:1123
2015-08-21T03:09:24 p2p:fetch_items_loop send_message ] peer_connection::send_message() enqueueing message of type 5004 for peer 176.221.43.130:33323 peer_connection.cpp:365
2015-08-21T03:09:24 p2p:fetch_items_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T03:09:24 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T03:09:24 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5004 for peer 176.221.43.130:33323 peer_connection.cpp:291
2015-08-21T03:09:24 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 176.221.43.130:33323 peer_connection.cpp:294
2015-08-21T03:09:24 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T03:09:24 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T03:09:24 p2p:message read_loop on_message ] handling message block_message_type bb51709a4c2a3c64bdf260e28908c05b1f849a7b size 172 from peer 176.221.43.130:33323 node.cpp:1651
2015-08-21T03:09:24 p2p:message read_loop process_block_during ] received a block from peer 176.221.43.130:33323, passing it to client node.cpp:3087
2015-08-21T03:09:24 p2p:message read_loop process_block_during ] Successfully pushed block 32017 (id:00007d11685f540f83aab26800d7afd880cd154d) node.cpp:3109
2015-08-21T03:09:24 p2p:message read_loop process_block_during ] client validated the block, advertising it to other peers node.cpp:3133
2015-08-21T03:09:24 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (3 left), and 0 advertised to us (114 left) peer_connection.cpp:479
2015-08-21T03:09:24 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 1 items advertised to peer (110 left), and 0 advertised to us (6 left) peer_connection.cpp:479
2015-08-21T03:09:24 p2p:advertise_inventory_loop advertise_inventory_ ] beginning an iteration of advertise inventory node.cpp:1175
2015-08-21T03:09:24 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: false node.cpp:1188
2015-08-21T03:09:24 p2p:advertise_inventory_loop advertise_inventory_ ] inventory_to_advertise: [{"item_type":1001,"item_hash":"bb51709a4c2a3c64bdf260e28908c05b1f849a7b"}] node.cpp:1196
2015-08-21T03:09:24 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_advertised_to_peer.find(item_to_advertise): {"item":{"item_type":3624261264,"item_hash":"ff7f00002b00000000000000ffffffffff7f0000"},"timestamp":"2023-11-27T10:29:02"} node.cpp:1200
2015-08-21T03:09:24 p2p:advertise_inventory_loop advertise_inventory_ ] advertising 0 new item(s) of 0 type(s) to peer 176.221.43.130:33323 node.cpp:1218
2015-08-21T03:09:24 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (3 left), and 0 advertised to us (114 left) peer_connection.cpp:479
2015-08-21T03:09:24 p2p:advertise_inventory_loop advertise_inventory_ ] peer->peer_needs_sync_items_from_us: false node.cpp:1188
2015-08-21T03:09:24 p2p:advertise_inventory_loop advertise_inventory_ ] inventory_to_advertise: [{"item_type":1001,"item_hash":"bb51709a4c2a3c64bdf260e28908c05b1f849a7b"}] node.cpp:1196
2015-08-21T03:09:24 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_advertised_to_peer.find(item_to_advertise): {"item":{"item_type":3623881952,"item_hash":"ff7f0000000000000000000068a305d8ff7f0000"},"timestamp":"1948-09-25T19:23:04"} node.cpp:1200
2015-08-21T03:09:24 p2p:advertise_inventory_loop advertise_inventory_ ] *peer->inventory_peer_advertised_to_us.find(item_to_advertise): {"item":{"item_type":3624248128,"item_hash":"ff7f00002c00000000000000ffffffffcccccccc"},"timestamp":"2023-12-08T00:51:00"} node.cpp:1202
2015-08-21T03:09:24 p2p:advertise_inventory_loop advertise_inventory_ ] advertising item bb51709a4c2a3c64bdf260e28908c05b1f849a7b to peer 45.55.6.216:1776 node.cpp:1212
2015-08-21T03:09:24 p2p:advertise_inventory_loop advertise_inventory_ ] advertising 1 new item(s) of 1 type(s) to peer 45.55.6.216:1776 node.cpp:1218
2015-08-21T03:09:24 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (111 left), and 0 advertised to us (6 left) peer_connection.cpp:479
2015-08-21T03:09:24 p2p:advertise_inventory_loop send_message ] peer_connection::send_message() enqueueing message of type 5001 for peer 45.55.6.216:1776 peer_connection.cpp:365
2015-08-21T03:09:24 p2p:advertise_inventory_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T03:09:24 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (0 items to fetch) node.cpp:1104
2015-08-21T03:09:24 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T03:09:24 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5001 for peer 45.55.6.216:1776 peer_connection.cpp:291
2015-08-21T03:09:24 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 45.55.6.216:1776 peer_connection.cpp:294
2015-08-21T03:09:24 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T03:09:24 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T03:09:24 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 9f6bdadc2780239f7460b94b263593cf5b0288a4 size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T03:09:24 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (111 left), and 0 advertised to us (6 left) peer_connection.cpp:479
2015-08-21T03:09:24 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T03:09:31 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 4db8f599f0ef6e3741c3638775acdaab5dcc63e8 size 25 from peer 176.221.43.130:33323 node.cpp:1651
2015-08-21T03:09:31 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (3 left), and 7 advertised to us (107 left) peer_connection.cpp:479
2015-08-21T03:09:31 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 176.221.43.130:33323 node.cpp:2613
2015-08-21T03:09:31 p2p:message read_loop on_item_ids_inventor ] adding item 1d6ff20ecc62c13c5a2dbb02cfbb0a8fee17c365 from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T03:09:31 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (1 items to fetch) node.cpp:1104
2015-08-21T03:09:31 p2p:fetch_items_loop fetch_items_loop ] requesting item 1d6ff20ecc62c13c5a2dbb02cfbb0a8fee17c365 from peer 176.221.43.130:33323 node.cpp:1123
2015-08-21T03:09:31 p2p:fetch_items_loop send_message ] peer_connection::send_message() enqueueing message of type 5004 for peer 176.221.43.130:33323 peer_connection.cpp:365
2015-08-21T03:09:31 p2p:fetch_items_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T03:09:31 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T03:09:31 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5004 for peer 176.221.43.130:33323 peer_connection.cpp:291
2015-08-21T03:09:31 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 176.221.43.130:33323 peer_connection.cpp:294
2015-08-21T03:09:31 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T03:09:31 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 2dabf66b081dc63e56840276d7d655a5bb1fec58 size 25 from peer 176.221.43.130:33323 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (3 left), and 1 advertised to us (107 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 176.221.43.130:33323 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] adding item 5867c371be07002a8b1b42d3336a63ceb1f7cbef from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (1 items to fetch) node.cpp:1104
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type d32fb20dbd657a2ea730340ef8db9ef3c79bc93c size 25 from peer 176.221.43.130:33323 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (3 left), and 0 advertised to us (108 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 176.221.43.130:33323 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] adding item a94d298df6526af3e66da0322376b6ed2f951825 from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (2 items to fetch) node.cpp:1104
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type c3fec5988ab78731328eca88332078a7b3b15284 size 25 from peer 176.221.43.130:33323 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (3 left), and 0 advertised to us (109 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 176.221.43.130:33323 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] adding item 9f6a0e098495a36f16f34c8ab857c831d1b2e29e from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (3 items to fetch) node.cpp:1104
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type a7534938c031313fd745885500d2864d0f6a359b size 25 from peer 176.221.43.130:33323 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (3 left), and 0 advertised to us (110 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 176.221.43.130:33323 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] adding item 01e5235516c64efd7aab3195677de74e59a4d96c from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (4 items to fetch) node.cpp:1104
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 91cd39de6b1f58353fbdc1ae5f0270bc3fd5865f size 25 from peer 176.221.43.130:33323 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (3 left), and 0 advertised to us (111 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 176.221.43.130:33323 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] adding item 19007e79d3b857e7e4efeb2cd98c7c8977062504 from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (5 items to fetch) node.cpp:1104
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 77292c73fc07a4802219190f28442c875b2ffd85 size 25 from peer 176.221.43.130:33323 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (3 left), and 0 advertised to us (112 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 176.221.43.130:33323 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] adding item d56255b170661695aaf2604e78d023939ee918c7 from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (6 items to fetch) node.cpp:1104
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_not_available_message_type 03f536044511199afe37d02786d0a39fac0a83b6 size 24 from peer 176.221.43.130:33323 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop on_item_not_availabl ] Peer doesn't have the requested item. node.cpp:2581
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (6 items to fetch) node.cpp:1104
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] requesting item 5867c371be07002a8b1b42d3336a63ceb1f7cbef from peer 176.221.43.130:33323 node.cpp:1123
2015-08-21T03:09:32 p2p:fetch_items_loop send_message ] peer_connection::send_message() enqueueing message of type 5004 for peer 176.221.43.130:33323 peer_connection.cpp:365
2015-08-21T03:09:32 p2p:fetch_items_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T03:09:32 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T03:09:32 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5004 for peer 176.221.43.130:33323 peer_connection.cpp:291
2015-08-21T03:09:32 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 176.221.43.130:33323 peer_connection.cpp:294
2015-08-21T03:09:32 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T03:09:32 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type bcd547d726b0f393e6aee051278cb990247141bd size 25 from peer 176.221.43.130:33323 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 176.221.43.130:33323: removing 0 items advertised to peer (3 left), and 0 advertised to us (112 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 176.221.43.130:33323 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] adding item 890e4ce407f50879d21b4ecd700cc4d3e5cc58d6 from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (6 items to fetch) node.cpp:1104
2015-08-21T03:09:32 p2p:p2p_network_connect_loop p2p_network_connect_ ] Starting an iteration of p2p_network_connect_loop(). node.cpp:881
2015-08-21T03:09:32 p2p:p2p_network_connect_loop display_current_conn ] Currently have 2 of [20/200] connections node.cpp:1625
2015-08-21T03:09:32 p2p:p2p_network_connect_loop display_current_conn ] my id is 267adcc6efc0ca128ac187fd47665b93e6eac6ffbcf6a14f1a2eb5c658e3348708 node.cpp:1626
2015-08-21T03:09:32 p2p:p2p_network_connect_loop display_current_conn ] active: 176.221.43.130:33323 with 0a69dc25fca6f9e85284deda98c92a51bc0f659adfa976e017ebf61620ab32d2c1 [outbound] node.cpp:1633
2015-08-21T03:09:32 p2p:p2p_network_connect_loop display_current_conn ] active: 45.55.6.216:1776 with a7cb5e17e25898bb1bef2feb31a341c4c22bc489e700dfbe41f0933b9e145261a1 [outbound] node.cpp:1633
2015-08-21T03:09:32 p2p:p2p_network_connect_loop display_current_conn ] Currently have 2 of [20/200] connections node.cpp:1625
2015-08-21T03:09:32 p2p:p2p_network_connect_loop display_current_conn ] my id is 267adcc6efc0ca128ac187fd47665b93e6eac6ffbcf6a14f1a2eb5c658e3348708 node.cpp:1626
2015-08-21T03:09:32 p2p:p2p_network_connect_loop display_current_conn ] active: 176.221.43.130:33323 with 0a69dc25fca6f9e85284deda98c92a51bc0f659adfa976e017ebf61620ab32d2c1 [outbound] node.cpp:1633
2015-08-21T03:09:32 p2p:p2p_network_connect_loop display_current_conn ] active: 45.55.6.216:1776 with a7cb5e17e25898bb1bef2feb31a341c4c22bc489e700dfbe41f0933b9e145261a1 [outbound] node.cpp:1633
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 4db8f599f0ef6e3741c3638775acdaab5dcc63e8 size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 8 items advertised to peer (103 left), and 0 advertised to us (6 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] adding item 1d6ff20ecc62c13c5a2dbb02cfbb0a8fee17c365 from inventory message to our list of items to fetch node.cpp:2647
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (7 items to fetch) node.cpp:1104
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] requesting item 1d6ff20ecc62c13c5a2dbb02cfbb0a8fee17c365 from peer 45.55.6.216:1776 node.cpp:1123
2015-08-21T03:09:32 p2p:fetch_items_loop send_message ] peer_connection::send_message() enqueueing message of type 5004 for peer 45.55.6.216:1776 peer_connection.cpp:365
2015-08-21T03:09:32 p2p:fetch_items_loop send_queueable_messa ] peer_connection::send_message() is firing up send_queued_message_task peer_connection.cpp:354
2015-08-21T03:09:32 p2p:send_queued_messages_task counter ] entering peer_connection::send_queued_messages_task() peer_connection.cpp:279
2015-08-21T03:09:32 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task() calling message_oriented_connection::send_message() to send message of type 5004 for peer 45.55.6.216:1776 peer_connection.cpp:291
2015-08-21T03:09:32 p2p:send_queued_messages_task send_queued_messages ] peer_connection::send_queued_messages_task()'s call to message_oriented_connection::send_message() completed normally for peer 45.55.6.216:1776 peer_connection.cpp:294
2015-08-21T03:09:32 p2p:send_queued_messages_task send_queued_messages ] leaving peer_connection::send_queued_messages_task() due to queue exhaustion peer_connection.cpp:326
2015-08-21T03:09:32 p2p:send_queued_messages_task ~counter ] leaving peer_connection::send_queued_messages_task() peer_connection.cpp:280
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 2dabf66b081dc63e56840276d7d655a5bb1fec58 size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (103 left), and 0 advertised to us (7 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type d32fb20dbd657a2ea730340ef8db9ef3c79bc93c size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (103 left), and 0 advertised to us (8 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type c3fec5988ab78731328eca88332078a7b3b15284 size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (103 left), and 0 advertised to us (9 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type a7534938c031313fd745885500d2864d0f6a359b size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (103 left), and 0 advertised to us (10 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 91cd39de6b1f58353fbdc1ae5f0270bc3fd5865f size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (103 left), and 0 advertised to us (11 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 77292c73fc07a4802219190f28442c875b2ffd85 size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (103 left), and 0 advertised to us (12 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T03:09:32 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type bcd547d726b0f393e6aee051278cb990247141bd size 25 from peer 45.55.6.216:1776 node.cpp:1651
2015-08-21T03:09:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 45.55.6.216:1776: removing 0 items advertised to peer (103 left), and 0 advertised to us (13 left) peer_connection.cpp:479
2015-08-21T03:09:32 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T03:09:33 p2p:message read_loop on_message ] handling message item_not_available_message_type 03f536044511199afe37d02786d0a39fac0a83b6 size 24 from peer 45.55.6.216:1776 node.cpp:1651
what happened here?
2015-08-21T03:09:24 p2p:message read_loop on_item_ids_inventor ] received inventory of 1 items from peer 45.55.6.216:1776 node.cpp:2613
2015-08-21T03:09:31 p2p:message read_loop on_message ] handling message item_ids_inventory_message_type 4db8f599f0ef6e3741c3638775acdaab5dcc63e8 size 25 from peer 176.221.43.130:33323 node.cpp:1651
./witness_node -d test5 genesis-json /home/james/Downloads/aug-20-test-genesis.json --resync-blockchain --witness-id '"1.6.5250"' --private-key '["GPH7kNZtp64ZR1R4yC2w9bDLFNHkM8L2AFZSj1E", "5Jnwg...JC7Ur"]' -s "104.236.51.238:1776"
3270744ms th_a thread.cpp:95 thread ] name:p2p tid:139985686546176
3270851ms ntp ntp.cpp:81 request_now ] sending request to 198.60.22.240:123
3270851ms th_a application.cpp:117 reset_p2p_node ] Adding seed node 104.236.51.238:1776
3270853ms th_a application.cpp:129 reset_p2p_node ] Configured p2p node to listen on 0.0.0.0:59551
3270854ms th_a witness.cpp:107 plugin_startup ] WARNING: Unable to find witness 1.6.5250. Postponing initialization until syncing finishes.
3270854ms th_a main.cpp:165 main ] Started witness node on a chain with 0 blocks.
3270854ms th_a main.cpp:166 main ] Chain ID is b6ff5d956ca601b3682edbc52c5678819a63dd80f5c0fad00a289d1a09c4662e
3270928ms ntp ntp.cpp:147 read_loop ] received ntp reply from 198.60.22.240:123
3270928ms ntp ntp.cpp:161 read_loop ] ntp offset: -7351300, round_trip_delay 77122
3270929ms ntp ntp.cpp:177 read_loop ] ntp_delta_time updated to -7351300
Did the chain ID change or am I on a fork?
test5 is a brand new data directory.
You're missing the "--" in front of genesis-json:
./witness_node -d test5 --genesis-json /home/james/Downloads/aug-20-test-genesis.json --resync-blockchain --witness-id '"1.6.5250"' --private-key '["GPH7kNZtp64ZR1R4yC2w9bDLFNHkM8L2AFZSj1E", "5Jnwg...JC7Ur"]' -s "104.236.51.238:1776"
i.e.
./witness_node -d test5 --genesis-json ...
Without the "--", the genesis file argument is not picked up, so the node most likely started from a default genesis, which would explain the different chain ID.
General question (since I could not find this elsewhere):
Is there a genuine place from which I can download an up-to-date BitShares blockchain? Setting up a new client is easy, but syncing takes a long time and sometimes fails, so the whole process has to be repeated again and again (my slow little PC, C847/8GB, often struggles here, and it takes days to re-sync/re-download the chain). As such, it would be awesome to have an official, always up-to-date download link for the BitShares blockchain.
That said, my question is whether such a download link already exists (and I simply could not find it), or whether it would be a great addition to the BitShares homepage.
EDIT: Also, 8GB of RAM sometimes seems not to be enough for a re-sync (using v0.9.2)... is that true? Has anyone else had similar experiences?
I don't know if there is a download link for the blockchain.
8GB of RAM is more than enough. If you're using Windows or Linux, try running 'bitshares_client' rather than the GUI for the initial sync. Once it's in sync you can press Ctrl+D to close it, then open the GUI.
Wish you good luck.
get_witness everydaycrypto
{
"id": "1.6.5251",
"witness_account": "1.2.27645",
"vote_id": "1:5468",
"url": "https://everydaycrypto.com"
....
get_dynamic_global_properties
{
"id": "2.1.0",
"random": "4fc3d0459cf6eb73a00a0a9f9128b83a46a7dfc3",
"head_block_number": 42483,
...
cd
mkdir test1
cd test1
wget https://github.com/cryptonomex/graphene/releases/download/test1/aug-20-test-genesis.json
cd ..
sudo docker run --net=host -it -d -v ~/test1:/test1 sile16/graphene-witness:test1 -d test1 --genesis-json /test1/aug-20-test-genesis.json -s "176.221.43.1:33323" --rpc-endpoint "0.0.0.0:8090"
sudo docker run --net=host -it --rm sile16/graphene-cli:test1 --chain-id d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083
### In the Graphene CLI
>>> set_password <pass>
>>> unlock <pass>
>>> import_key everydaycrypto <owner wif key>
>>> import_balance everydaycrypto [<balance wif key>] true
>>> upgrade_account everydaycrypto true
>>> create_witness everydaycrypto "https://everydaycrypto.com" true
>>> vote_for_witness everydaycrypto everydaycrypto true true
>>> dump_private_keys
# Kill the witness the nice way via Ctrl+C; 'docker kill <container ID>' would work too, but it blows away the Graphene DB
# get container IDs
docker ps
docker attach aefe50505f89
^C
# just docker kill the cli container
docker kill 3a8cbffabc5d
nano test1/config.ini #set witness-id and private-keys items
sudo docker run --net=host -it -d -v ~/test1:/test1 sile16/graphene-witness:test1 -d test1 --genesis-json /test1/aug-20-test-genesis.json -s "104.236.51.238:1776" --rpc-endpoint "0.0.0.0:8090" -s "45.55.6.216:1776"
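For reference, the config.ini entries being edited in the nano step look roughly like this (a sketch; the exact format is assumed from the graphene testnet builds of the time, the witness id is the one returned by get_witness above, and the WIF key is truncated exactly as in the posts in this thread):

```ini
; test1/config.ini (hypothetical excerpt)
witness-id = "1.6.5251"
private-key = ["GPH7kNZtp64ZR1R4yC2w9bDLFNHkM8L2AFZSj1E", "5Jnwg...JC7Ur"]
```

The public-key/WIF pair is whatever dump_private_keys printed for the witness's signing key.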
I found some problems related to spam transactions.
1) In a normal node without block production, when spam transactions occurred (in my case 100 txs in a few seconds), block sync stopped for about 3 minutes.
2) In a witness node with block production, spam transactions crashed the witness node. In this case, I had to resync my witness node.
3) Only part of the spam transactions were accepted: I sent a hundred transactions but only 9 out of 100 are shown.
2015-08-21T08:22:23 p2p:send_queued_messages_task send_queued_messages ] Error sending message: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":37,"method":"operator()","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:22:23"},"format":"${message} ","data":{"message":"Bad file descriptor"}},{"context":{"level":"warn","file":"stcp_socket.cpp","line":153,"method":"writesome","hostname":"","thread_name":"p2p","timestamp":"2015-08-21T08:22:23"},"format":"","data":{"len":16}},{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":264,"method":"send_message","hostname":"","thread_name":"p2p","timestamp":"2015-08-21T08:22:23"},"format":"unable to send message","data":{}}]}. Closing connection. peer_connection.cpp:303
2015-08-21T08:26:38 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.156.226.183:58884: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:38 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:38"},"format":"${message} ","data":{"message":"Operation canceled"}}]} peer_connection.cpp:158
2015-08-21T08:26:42 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.236.255.53:36548: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:42 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:42"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:158
2015-08-21T08:26:42 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.236.51.238:1776: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:42 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:42"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:158
2015-08-21T08:26:42 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.131.205.149:44815: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:42 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:42"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:158
Hi, is the chain still alive? I'm stuck at about an hour ago and unable to connect right now.
Just got back "Online". 10 AM in the morning. Seems most of you are in the western hemisphere....
However, I also don't see the original seed node "104.236.51.238:1776" anymore (but I think it already left when I went to bed). My node on port 33323 doesn't seem to have major issues (at least not like the last times) and has now been running ~10h nonstop. Here are some errors grepped from my p2p log right now:
2015-08-21T08:22:23 p2p:send_queued_messages_task send_queued_messages ] Error sending message: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":37,"method":"operator()","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:22:23"},"format":"${message} ","data":{"message":"Bad file descriptor"}},{"context":{"level":"warn","file":"stcp_socket.cpp","line":153,"method":"writesome","hostname":"","thread_name":"p2p","timestamp":"2015-08-21T08:22:23"},"format":"","data":{"len":16}},{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":264,"method":"send_message","hostname":"","thread_name":"p2p","timestamp":"2015-08-21T08:22:23"},"format":"unable to send message","data":{}}]}. Closing connection. peer_connection.cpp:303
and then:
2015-08-21T08:26:38 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.156.226.183:58884: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:38 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:38"},"format":"${message} ","data":{"message":"Operation canceled"}}]} peer_connection.cpp:158
2015-08-21T08:26:42 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.236.255.53:36548: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:42 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:42"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:158
2015-08-21T08:26:42 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.236.51.238:1776: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:42 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:42"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:158
2015-08-21T08:26:42 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.131.205.149:44815: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:42 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:42"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:158
So it seems it is complaining about some peer nodes not being online anymore...
To be more precise, these nodes seem to go online and offline. I can see them in netstat, but they appear to have changed their port because they were restarted without a fixed p2p port in the config? Maybe it's a good idea to fix that port, or not? Is the networking layer supposed to sort this out, or do the ports change on every restart on purpose?
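If you do want a fixed p2p port across restarts, the witness node can be told to listen on one. A sketch, assuming the option name as it existed in the graphene witness_node of the time (it is the same endpoint the startup log reports as "Configured p2p node to listen on ..."):

```ini
; test1/config.ini (hypothetical excerpt): pin the p2p listen endpoint
; instead of letting the node pick a random port on each restart
p2p-endpoint = 0.0.0.0:33323
```

With a pinned port, peers that cached your address can reconnect after a restart instead of probing a dead port.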
My node crashed while generating a block. >:(
No interesting info found in logs.
Will launch in gdb this time.
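Before (or alongside) gdb, it can help to pull just the error lines out of the p2p log from before the silent exit. A minimal sketch; the file path is made up, and the sample lines mimic the log format quoted in this thread:

```shell
#!/bin/sh
# Build a small sample log in the format seen above (normally you would
# point grep at your node's real p2p log file instead).
cat > /tmp/p2p_sample.log <<'EOF'
2015-08-21T03:09:32 p2p:fetch_items_loop fetch_items_loop ] beginning an iteration of fetch items (6 items to fetch) node.cpp:1104
2015-08-21T08:22:23 p2p:send_queued_messages_task send_queued_messages ] Error sending message: {"code":0,"name":"exception"}. Closing connection. peer_connection.cpp:303
2015-08-21T08:26:38 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.156.226.183:58884: 0 exception: unspecified
EOF
# Keep only error/fatal lines, newest last; routine fetch-loop chatter drops out.
grep -E 'Error|fatal' /tmp/p2p_sample.log | tail -n 20
```

On the sample above this prints the two error lines and suppresses the fetch_items_loop noise.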
Yes it's alive nonstop. Which is your node? I see some coming and going in netstat; maybe yours is somehow blacklisted or so...
Mine is 114.92.254.159:62015 or other ports.
Try 176.221.43.130:33323 as seed... if not already doing so. Will have a look if I see a reference to your IP somewhere.
Did you have
-DCMAKE_BUILD_TYPE=Debug
while building? Mine comments in quite a lot of detail on everything in the console....
Yes, I think I have:
cmake -DBOOST_ROOT="/app/boost_1_57_0.bin" -DCMAKE_BUILD_TYPE=Debug .
But the 'crash' last time looks like a clean exit. No info at all.
Already set it as seed. Maybe it is just because of a slow/unstable connection? The node is in China.
I have two nodes running, both with a fixed p2p port: one is 62015 and the other is 60002.
I see things like this in reference to your IP:
2015-08-21T08:46:31 p2p:message read_loop forward_firewall_che ] forwarding firewall check for node 114.92.254.159:63542 to peer 176.9.234.167:58896 node.cpp:3301
2015-08-21T08:46:32 p2p:message read_loop on_check_firewall_re ] Peer 176.9.234.167:58896 reports firewall check status unable_to_connect for 114.92.254.159:63542 node.cpp:3394
2015-08-21T08:46:32 p2p:message read_loop send_message ] peer_connection::send_message() enqueueing message of type 5015 for peer 114.92.254.159:63542 peer_connection.cpp:365
2015-08-21T08:46:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:32 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:32 p2p:p2p_network_connect_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:32 p2p:p2p_network_connect_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:32 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:33 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:34 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:34 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:36 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:36 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:37 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:37 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:38 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:38 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:39 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:39 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:40 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:40 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:41 p2p:message read_loop on_connection_closed ] Remote peer 114.92.254.159:63542 closed their connection to us node.cpp:2724
2015-08-21T08:46:41 p2p:message read_loop schedule_peer_for_de ] scheduling peer for deletion: 114.92.254.159:63542 (this will not block)
Those lines with the firewall check sound like there is some potential for being blacklisted. On the other hand, it clearly states you are not blocked.
Do you know whether it is on purpose that the port changes every time one fires up the node? Have you already tried moving to a fixed port?
My node crashed while generating a block. >:(
No interesting info found in logs.
Will launch in gdb this time.
2015-08-21T09:07:54 p2p:p2p_network_connect_loop display_current_conn ] my id is 2a9dd11186ee39434fc36139417a1a83ab671f79cc9a3d112f6c49f8302ccf02a1 node.cpp:1626
2015-08-21T09:07:54 p2p:p2p_network_connect_loop display_current_conn ] active: 176.221.43.130:33323 with 0a69dc25fca6f9e85284deda98c92a51bc0f659adfa976e017ebf61620ab32d2c1 [outbound] node.cpp:1633
...
2015-08-21T09:07:57 p2p:message read_loop on_message ] handling message current_time_reply_message_type 6d669ff6f384895b812201eb0187f850f0bd8224 size 24 from peer 176.221.43.130:33323 node.cpp:1651
...
2015-08-21T09:08:00 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Disconnecting peer 176.221.43.130:33323 because they didn't respond to my request for sync item ids after 0000000000000000000000000000000000000000 node.cpp:1317
2015-08-21T09:08:04 p2p:p2p_network_connect_loop display_current_conn ] handshaking: 176.221.43.130:33323 with 000000000000000000000000000000000000000000000000000000000000000000 [unknown] node.cpp:1640
2015-08-21T09:08:04 p2p:connect_to_task connect_to ] fatal: error connecting to peer 176.221.43.130:33323: 0 exception: unspecified
Cannot assign requested address
{"message":"Cannot assign requested address"}
asio asio.cpp:59 error_handler peer_connection.cpp:255
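The "Cannot assign requested address" asio error above is the OS-level EADDRNOTAVAIL errno. It typically shows up when a socket operation references an address that isn't configured on the local machine (or when outgoing ephemeral ports are exhausted). A minimal Python reproduction of the errno, using a TEST-NET address that won't be local:

```python
import errno
import socket

# Binding to an address that is not configured on any local interface
# raises the same OS error the asio error_handler reports above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    s.bind(("203.0.113.1", 0))  # TEST-NET-3 address, never local
except OSError as e:
    print(e.errno == errno.EADDRNOTAVAIL)  # True on Linux
finally:
    s.close()
```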
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
{}
th_a fork_database.cpp:70 _push_block
{"new_block":{"previous":"0000dc531bf16468f148ff973ac29a377c02b82a","timestamp":"2015-08-21T10:19:59","witness":"1.6.91","next_secret_hash":"10afef88b577df92e7c64e01c6813c57deece7f9","previous_secret":"1313e555ed2c3b697736fcb7daf888d53a347ec7","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f78b7eac30a0c237f3cc16ab454f6e51cce267538df0c462a76f7ce59204f55572da90894b55ff90d488b0341c4a885ac043f855d4405e9f243174c5ed0ad6041","transactions":[]}}
th_a db_block.cpp:176 _push_block
Anybody having problems with transactions?
It has taken me lots of attempts to import my balances, and I cannot vote for myself. I did, but the votes remain at 0.
get_dynamic_global_properties
{
"id": "2.1.0",
"random": "e96759cb979ef4620404d4a116c64d4e6b9bd0e8",
"head_block_number": 43793,
"head_block_id": "0000ab110f83c3641467c59ae7daec0c26e9537d",
"time": "2015-08-21T06:35:29",
"current_witness": "1.6.72",
"next_maintenance_time": "2015-08-21T06:40:00",
"witness_budget": 95817007,
"accounts_registered_this_interval": 0,
"recently_missed_count": 0,
"dynamic_flags": 0
}
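The get_dynamic_global_properties output is the quickest way to tell whether a node is still on the live chain: if the `time` field lags wall-clock time by much more than one block interval, the node is stuck or forked. A small illustrative check (sample values taken from the output above; the helper name is mine):

```python
import json
from datetime import datetime, timezone

def sync_lag_seconds(props_json, now):
    """Seconds between wall-clock `now` and the node's head block time.
    A node in sync should lag by roughly one block interval at most."""
    props = json.loads(props_json)
    head_time = datetime.strptime(
        props["time"], "%Y-%m-%dT%H:%M:%S"
    ).replace(tzinfo=timezone.utc)
    return (now - head_time).total_seconds()

# Values from the console output above:
sample = '{"head_block_number": 43793, "time": "2015-08-21T06:35:29"}'
now = datetime(2015, 8, 21, 6, 35, 35, tzinfo=timezone.utc)
print(sync_lag_seconds(sample, now))  # -> 6.0
```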
2344668ms p2p node.cpp:4404 dump_node_status ] ----------------- PEER STATUS UPDATE --------------------
2344668ms p2p node.cpp:4407 dump_node_status ] number of peers: 1 active, 0, 0 closing. attempting to maintain 20 - 200 peers
2344668ms p2p node.cpp:4412 dump_node_status ] active peer 45.55.6.216:1776 peer_is_in_sync_with_us:true we_are_in_sync_with_peer:true
2344668ms p2p node.cpp:4425 dump_node_status ] --------- MEMORY USAGE ------------
2344668ms p2p node.cpp:4426 dump_node_status ] node._active_sync_requests size: 0
2344669ms p2p node.cpp:4427 dump_node_status ] node._received_sync_items size: 3588
2344669ms p2p node.cpp:4428 dump_node_status ] node._new_received_sync_items size: 0
2344669ms p2p node.cpp:4429 dump_node_status ] node._items_to_fetch size: 0
2344669ms p2p node.cpp:4430 dump_node_status ] node._new_inventory size: 0
2344669ms p2p node.cpp:4431 dump_node_status ] node._message_cache size: 2
2344669ms p2p node.cpp:4434 dump_node_status ] peer 45.55.6.216:1776
2344669ms p2p node.cpp:4435 dump_node_status ] peer.ids_of_items_to_get size: 0
2344669ms p2p node.cpp:4436 dump_node_status ] peer.inventory_peer_advertised_to_us size: 5
2344670ms p2p node.cpp:4437 dump_node_status ] peer.inventory_advertised_to_peer size: 104
2344670ms p2p node.cpp:4438 dump_node_status ] peer.items_requested_from_peer size: 0
2344670ms p2p node.cpp:4439 dump_node_status ] peer.sync_items_requested_from_peer size: 0
2344670ms p2p node.cpp:4441 dump_node_status ] --------- END MEMORY USAGE ------------
2345990ms p2p node.cpp:1340 terminate_inactive_c ] Sending a keepalive message to peer 45.55.6.216:1776 who hasn't sent us any messages in the last 37 seconds
Meanwhile you're in, it seems.
Quote: Anybody having problems with transactions? It has taken me lots of attempts to import my balances, and I cannot vote for myself. I did, but the votes remain at 0.
you can vote me in to test ;) @ betaxtrade
Restarted my node and seem to be back on chain:
get_dynamic_global_properties
{
"id": "2.1.0",
"random": "35ece2ef7aac03b97de7c75e54830f57d2c8100c",
"head_block_number": 64591,
"head_block_id": "0000fc4f697a5655e44786cfea52ab4c6025ea86",
"time": "2015-08-21T12:46:15",
"current_witness": "1.6.85",
"next_maintenance_time": "2015-08-21T12:50:00",
"witness_budget": 51808059,
"accounts_registered_this_interval": 0,
"recently_missed_count": 0,
"dynamic_flags": 0
}
Seems like the main issue I'm having is that I'm not getting enough connections. If every node is only getting one or two connections, and running lower performance servers, it could be forking from latency issues. How many connections are other people getting?
2015-08-21T13:00:01 p2p:message read_loop on_connection_closed ] Remote peer 50.116.4.37:1676 closed their connection to us
2015-08-21T13:00:01 p2p:message read_loop display_current_conn ] Currently have 16 of [20/200] connections
2015-08-21T13:00:01 p2p:message read_loop display_current_conn ] my id is a7cb5e17e25898bb1bef2feb31a341c4c22bc489e700dfbe41
2015-08-21T13:00:01 p2p:message read_loop display_current_conn ] active: 66.41.46.104:59770 with 2ca89a93fe6239371f1498
2015-08-21T13:00:01 p2p:message read_loop display_current_conn ] handshaking: 104.236.82.250:2009 with 000000000000000000000
2015-08-21T13:00:01 p2p:message read_loop display_current_conn ] handshaking: 46.101.12.138:1776 with 0000000000000000000000
2015-08-21T13:00:01 p2p:message read_loop display_current_conn ] handshaking: 104.223.111.102:1776 with 00000000000000000000
2015-08-21T13:00:01 p2p:message read_loop display_current_conn ] handshaking: 213.52.129.25:1776 with 0000000000000000000000
2015-08-21T13:00:01 p2p:message read_loop display_current_conn ] handshaking: 188.166.63.136:1776 with 000000000000000000000
2015-08-21T13:00:01 p2p:message read_loop display_current_conn ] handshaking: 46.226.109.66:1779 with 0000000000000000000000
2015-08-21T13:00:01 p2p:message read_loop display_current_conn ] handshaking: 188.166.63.136:36395 with 00000000000000000000
2015-08-21T13:00:01 p2p:message read_loop display_current_conn ] handshaking: 188.166.31.251:2881 with 000000000000000000000
Finally got the witness registered and the witness_node syncing reliably.
get_witness roadscape
{
"id": "1.6.5249",
"witness_account": "1.2.67334",
But haven't yet produced a block:
807002ms th_a witness.cpp:240 block_production_loo ] slot: 1 scheduled_witness: 1.6.5249 scheduled_time: 2015-08-21T16:13:27 now: 2015-08-21T16:13:27
807002ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.5249 production slot has arrived; generating a block now...
807002ms th_a witness.cpp:266 block_production_loo ] Got exception while generating block:
10 assert_exception: Assert Exception
witness_obj.signing_key == block_signing_private_key.get_public_key():
{}
th_a db_block.cpp:266 _generate_block
{"witness_id":"1.6.5249"}
th_a db_block.cpp:315 _generate_block
Will continue to fiddle with it..
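The `witness_obj.signing_key == block_signing_private_key.get_public_key()` assert above means the block-signing private key the node holds doesn't correspond to the signing key registered on chain for witness 1.6.5249, so it refuses to produce. A sketch of that same precondition in Python, with keys as opaque strings (the key values are made up for illustration):

```python
def can_produce(chain_signing_key: str, local_signing_pubkey: str) -> bool:
    """Mirrors the check in db_block.cpp _generate_block: the public key
    derived from the locally configured signing private key must equal
    the signing_key stored in the on-chain witness object."""
    return chain_signing_key == local_signing_pubkey

# Matching keys -> the witness can sign its slot:
print(can_produce("BTS5registered", "BTS5registered"))  # -> True
# Mismatch, e.g. the wallet no longer holds the registered key:
print(can_produce("BTS5registered", "BTS7localonly"))   # -> False
```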
Quote: puppies, I have just "flooded" you some 1 CORE transactions. On my second round I sent 8,421. :o :o :o :o at 1 CORE intervals. This was done brute-forced Hyper Olympics style https://www.youtube.com/watch?v=8va4YGGA3wE, so I am not convinced I did submit that many in 1 minute. Anyway, my witness has not died; can you check whether you got something?
Sorry betax, I'm at work and don't have puppies' keys on any vps.
Btw: How do you read this:
1522956ms th_a witness.cpp:240 block_production_loo ] slot: 1 scheduled_witness: 1.6.1537 scheduled_time: 2015-08-21T16:25:21 now: 2015-08-21T16:25:21
1523956ms th_a witness.cpp:240 block_production_loo ] slot: 2 scheduled_witness: 1.6.56 scheduled_time: 2015-08-21T16:25:22 now: 2015-08-21T16:25:22
1524185ms th_a application.cpp:348 handle_block ] Got block #76718 from network
1524234ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"00012bae7445dfa925a756d82e6f68d9be6e20be"}
1524234ms th_a application.cpp:451 get_item ] Serving up block #76718
1524386ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"00012bae7445dfa925a756d82e6f68d9be6e20be"}
1524386ms th_a application.cpp:451 get_item ] Serving up block #76718
Did Witness 1.6.1537 produce his block? Why is the init-witness directly "queueing up" or getting another slot then?
Somehow I think I lost the private signing key, because dump_private_keys only shows one set and it belongs to the account itself. I guess I will have to register a new witness.. unless there's a way to generate a new signing key pair?
Quote: Btw: How do you read this:
1522956ms th_a witness.cpp:240 block_production_loo ] slot: 1 scheduled_witness: 1.6.1537 scheduled_time: 2015-08-21T16:25:21 now: 2015-08-21T16:25:21
1523956ms th_a witness.cpp:240 block_production_loo ] slot: 2 scheduled_witness: 1.6.56 scheduled_time: 2015-08-21T16:25:22 now: 2015-08-21T16:25:22
1524185ms th_a application.cpp:348 handle_block ] Got block #76718 from network
1524234ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"00012bae7445dfa925a756d82e6f68d9be6e20be"}
1524234ms th_a application.cpp:451 get_item ] Serving up block #76718
1524386ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"00012bae7445dfa925a756d82e6f68d9be6e20be"}
1524386ms th_a application.cpp:451 get_item ] Serving up block #76718
Did Witness 1.6.1537 produce his block? Why is the init-witness directly "queueing up" or getting another slot then?
I read it as 1.6.1537 missed the slot, and 1.6.56 filled in.
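That reading is consistent with the timestamps: the `slot` counter in the log counts block intervals since the last produced block, so slot 1 at 16:25:21 followed directly by slot 2 at 16:25:22 (with no block in between) means the slot-1 witness produced nothing and the slot-2 witness took over. An illustrative sketch of that counting, not the actual witness.cpp scheduling code, assuming the 1-second block interval the timestamps suggest:

```python
def slot_number(last_block_time: int, now: int, block_interval: int = 1) -> int:
    """Which production slot `now` falls into, counted in block intervals
    from the last produced block. Slot 1 is the next scheduled witness;
    seeing slot 2 without a block in between means slot 1 was missed."""
    return (now - last_block_time) // block_interval

last_block = 0                     # time of the last produced block (illustrative)
print(slot_number(last_block, 1))  # -> 1  (1.6.1537's slot, 16:25:21)
print(slot_number(last_block, 2))  # -> 2  (1.6.56 fills in, 16:25:22)
```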
What's your chain_id?
edit: I'm on d011....5083 which appears to be the legit one
Yes, that's the one I'm on. Got it running nonstop since yesterday. My node also seems to be used as a seed node throughout the day. Possibly the pattern doesn't repeat for all non-init witnesses, but it does for most. Is there some wallet method to see block statistics for witnesses?
Quote: puppies, I have just "flooded" you some 1 CORE transactions.
On my second round I sent 8,421. :o :o :o :o at 1 CORE intervals. This was done brute-forced Hyper Olympics style https://www.youtube.com/watch?v=8va4YGGA3wE, so I am not convinced I did submit that many in 1 minute.
Anyway, my witness has not died; can you check whether you got something?
How did you do this? Can you send me some to delegate-clayop?
Left it running overnight and found this:
2121482ms th_a application.cpp:348 handle_block ] Got block #43785 from network
2122598ms th_a application.cpp:348 handle_block ] Got block #43786 from network
2123581ms th_a application.cpp:348 handle_block ] Got block #43787 from network
2124485ms th_a application.cpp:348 handle_block ] Got block #43788 from network
2125530ms th_a application.cpp:348 handle_block ] Got block #43789 from network
2126480ms th_a application.cpp:348 handle_block ] Got block #43790 from network
2131760ms th_a application.cpp:348 handle_block ] Got block #43791 from network
2132949ms th_a application.cpp:348 handle_block ] Got block #43795 from network
2132949ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2133339ms th_a application.cpp:348 handle_block ] Got block #43796 from network
2133340ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2133543ms th_a application.cpp:348 handle_block ] Got block #43797 from network
2133543ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2134473ms th_a application.cpp:348 handle_block ] Got block #43798 from network
2134474ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2135482ms th_a application.cpp:348 handle_block ] Got block #43799 from network
2135482ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2136479ms th_a application.cpp:348 handle_block ] Got block #43800 from network
2136479ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2137482ms th_a application.cpp:348 handle_block ] Got block #43801 from network
2137482ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
[......snip......]
2160479ms th_a application.cpp:348 handle_block ] Got block #43822 from network
2160479ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2161493ms th_a application.cpp:348 handle_block ] Got block #43823 from network
2161494ms th_a application.cpp:370 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
[..snip..]
2162497ms th_a application.cpp:348 handle_block ] Got block #43824 from network
2162498ms th_a application.cpp:370 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
[..etc..]
Ben had this problem too. Will look into this.
Just found some minutes to look at it again (partly for recreational purposes), and now I saw 1.6.1537 producing his block. So I guess it's really that simple: most witnesses just aren't producing blocks yet, or anymore. Even though I see a lot of connected nodes at the moment, and one would expect them to be manned and producing?
puppies, I have just "flooded" you with some 1 CORE transactions.

Sorry betax, I'm at work and don't have puppies' keys on any vps.
On my second round I sent 8,421. :o :o :o :o at 1 CORE intervals. This was done brute-force Hyper Olympics style https://www.youtube.com/watch?v=8va4YGGA3wE, so I am not convinced I actually submitted that many in 1 minute.
Anyway my witness has not died, can you check you got something?
Don't worry!
unlocked >>> get_account_history puppies 10
get_account_history puppies 10
2015-08-21T03:05:27 Update Account 'puppies' (Fee: 20.14453 CORE)
2015-08-21T02:36:35 Update Account 'puppies' (Fee: 20.14062 CORE)
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
2015-08-21T02:03:08 balance_claim_operation puppies fee: 0 CORE
unlocked >>>
# Network Protocol 2
Building a low-latency network requires P2P nodes that have low-latency
connections and a protocol designed to minimize latency. For the purpose
of this document we will assume that two nodes are located on opposite
sides of the globe with a ping time of 250ms.
## Announce, Request, Send Protocol
Under the prior network architecture, transactions and blocks were broadcast
in a manner similar to the Bitcoin protocol: inventory messages notify peers of
transactions and blocks, then peers fetch the transaction or block from one
peer. After validating the item a node will broadcast an inventory message to
its peers.
Under this model it takes 0.75 seconds (three one-way trips) for a peer to communicate
a transaction or block to another peer, even if the item's size were zero and there were no processing overhead.
This level of performance is unacceptable for a network attempting to produce one block
every second.
This prior protocol also sent every transaction twice: once on initial broadcast, and again as
part of a block.
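The 0.75 second figure follows directly from counting one-way trips. A quick sketch of the arithmetic, using the 250 ms one-way delay assumed in this document:

```python
# Per-hop latency arithmetic for the two relay strategies, ignoring
# transfer and validation time.
ONE_WAY_MS = 250  # one-way delay between two peers on opposite sides of the globe

# Announce/request/send: inventory notice, fetch request, then the item
# itself -- three one-way trips before the peer holds the data.
announce_request_send_ms = 3 * ONE_WAY_MS   # 750 ms, as stated above

# Push: the validated item is forwarded directly -- one one-way trip.
push_ms = 1 * ONE_WAY_MS                    # 250 ms

print(announce_request_send_ms, push_ms)
```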
## Push Protocol
To minimize latency, each node needs to immediately broadcast the data it receives
to its peers after validating it. Given that the average transaction size is less than
100 bytes, it is almost as efficient to send the transaction itself as it is to send
the notice (assuming a 20 byte transaction id).
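The bandwidth tradeoff can be made concrete with the figures in the text: a ~100-byte average transaction and a 20-byte transaction id. The 20-byte fetch-request size below is my own assumption for illustration.

```python
TX_BYTES = 100   # average transaction size, from the text
ID_BYTES = 20    # transaction id size, from the text

# Inventory model: notice (id) + fetch request (assumed id-sized) +
# transaction body, and the body travels again later inside a block.
inventory_bytes = ID_BYTES + ID_BYTES + TX_BYTES + TX_BYTES

# Push model: the transaction body is sent once, immediately.
push_bytes = TX_BYTES

print(inventory_bytes, push_bytes)
```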
Each node implements the following protocol:
    onReceiveTransaction( from_peer, transaction )
        if( isKnown( transaction.id() ) )
            return
        markKnown( transaction.id() )
        if( !validate( transaction ) )
            return
        for( peer : peers )
            if( peer != from_peer )
                send( peer, transaction )

    onReceiveBlock( from_peer, block_summary )
        if( isKnown( block_summary ) )
            return
        full_block = reconstructFullBlock( from_peer, block_summary )
        if( !full_block ) disconnect from_peer
        markKnown( block_summary )
        if( !pushBlock( full_block ) ) disconnect from_peer
        for( peer : peers )
            if( peer != from_peer )
                send( peer, block_summary )

    onConnect( new_peer, new_peer_head_block_num )
        if( peers.size() >= max_peers )
            send( new_peer, peers )
            disconnect( new_peer )
            return
        while( new_peer_head_block_num < our_head_block_num )
            sendFullBlock( new_peer, ++new_peer_head_block_num )
        new_peer.synced = true
        for( peer : peers )
            send( peer, new_peer )

    onReceivePeers( from_peer, peers )
        addToPotentialPeers( peers )

    onUpdateConnectionsTimer
        if( peers.size() < desired_peers )
            connect( random_potential_peer )

    onFullBlock( from_peer, full_block )
        if( !pushBlock( full_block ) ) disconnect from_peer

    onStartup
        init_potential_peers from config
        start onUpdateConnectionsTimer
Under the new protocol, transactions only get sent once (vs. twice under the current protocol), so bandwidth should be lower, and block latencies will be lower because we do not send transaction data with the blocks as we do today.
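The transaction-relay part of the protocol above can be sketched as a small runnable model. The `Node` class, the `validate` callback, and the three-node setup are stand-ins invented for illustration; the real implementation lives in the graphene p2p code.

```python
# Minimal sketch of the push-protocol transaction relay described above:
# mark-known before validation, drop duplicates, never relay invalid
# transactions, and forward to every peer except the sender.
class Node:
    def __init__(self, name):
        self.name = name
        self.known = set()    # transaction ids already seen
        self.peers = []       # connected Node objects

    def on_receive_transaction(self, from_peer, tx_id, tx, validate):
        if tx_id in self.known:        # isKnown -> drop duplicate
            return
        self.known.add(tx_id)          # markKnown
        if not validate(tx):           # invalid -> do not relay
            return
        for peer in self.peers:        # push to all peers but the sender
            if peer is not from_peer:
                peer.on_receive_transaction(self, tx_id, tx, validate)

# Three fully connected nodes: a transaction injected at `a` floods to all.
a, b, c = Node("a"), Node("b"), Node("c")
a.peers, b.peers, c.peers = [b, c], [a, c], [a, b]
a.on_receive_transaction(None, "tx1", {"amount": 1}, lambda tx: True)
print(sorted(n.name for n in (a, b, c) if "tx1" in n.known))  # ['a', 'b', 'c']
```

Because each node marks the id as known before relaying, the flood terminates even though the peer graph contains cycles.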
1475279ms th_a application.cpp:348 handle_block ] Got block #113022 from network
1475280ms th_a application.cpp:370 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
second_branch_itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:186 fetch_branch_from
{"first":"0001b97efde7f8735edf4f0e9a0bf8861bedc272","second":"0001b8e8da3980ecd944f089517017859760d770"}
th_a fork_database.cpp:217 fetch_branch_from
{"new_block":{"previous":"0001b97dcd02515b154d14bee9b4c5a15b05e03c","timestamp":"2015-08-22T03:24:35","witness":"1.6.64","next_secret_hash":"baf2320a0c7779529c41e52ee54e661140034669","previous_secret":"49b271b871ea785fae64cba2f1e10f7bcd218cc2","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"201195222d96d0b21529707eb7ae0d3aa7f653da76bbc1495a72e5b0d0c710ac6e66a4cb06502d1649e1b702d8c951c0e0705ad0c89a6a9c1ed0ba52dfb2b62324","transactions":[]}}
th_a db_block.cpp:176 _push_block
[....snip....]
1475783ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"0001ae6ac8e3a8a26cc2aecb308c1150892e7fe4"}
1475783ms th_a application.cpp:451 get_item ] Serving up block #110186
1475784ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"0001ae6bc410e0e184a6d0a024a612b05ce4c01b"}
1475784ms th_a application.cpp:451 get_item ] Serving up block #110187
1475784ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"0001ae6c60d1aec7d5c24446dbf842a4b601033d"}
1475784ms th_a application.cpp:451 get_item ] Serving up block #110188
1475784ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"0001ae6d84528ff70008714ca1ed7977310535ac"}
1475785ms th_a application.cpp:451 get_item ] Serving up block #110189
1475785ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"0001ae6e323f7799e02346474a64e890fa79ab11"}
1475785ms th_a application.cpp:451 get_item ] Serving up block #110190
1475785ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"0001ae6fbc4e341c9d0387ab939ed1d7f99f7497"}
1475785ms th_a application.cpp:451 get_item ] Serving up block #110191
1476037ms th_a witness.cpp:240 block_production_loo ] slot: 166 scheduled_witness: 1.6.1530 scheduled_time: 2015-08-22T03:24:36 now: 2015-08-22T03:24:36
1476037ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.1530 production slot has arrived; generating a block now...
Segmentation fault (core dumped)
1800055ms th_a application.cpp:348 handle_block ] Got block #116582 from network
1800055ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1800171ms th_a application.cpp:370 handle_block ] Error when pushing block:
13 NSt8ios_base7failureE: basic_ios::clear
basic_ios::clear:
{"new_block":{"previous":"0001c7654456dc221956bf0b1a9f20e686865f97","timestamp":"2015-08-22T04:30:00","witness":"1.6.29","next_secret_hash":"2df6b7104fa19d2c938a9636b5bc6b7eb20b7900","previous_secret":"c55c74f80a07f39dc6996976894c1cf8a507ec78","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"20427e5c13761a8923e780808a1b85d32376cc556fb60ef7e828a84a1ae22beefd6c03c2e271ac718573237ffc53a4b97572373ee5c7ce8b553c66a73592ee8e6c","transactions":[]},"what":"basic_ios::clear"}
th_a db_block.cpp:176 _push_block
1800336ms th_a application.cpp:492 get_blockchain_synop ] reference_point: 00018b78f3052a0ce0fbac70df02f31634305dc1 number_of_blocks_after_reference_point: 0 result: ["0000c756375599f45ec17cbbc52a18f31b25abf6","00014756adffcc733dca7791fedaf42573352c9c","000187564a3607bd8feaf45ede04a2a4e44ad83e","0001a7561760fe72f9de51d3a5e3bc4415d3463e","0001b7560bfefc10d57f0a18f4b415bc986e7a16","0001bf56be84f86a5b4bdca2008ffba59c5ce198","0001c35601b57967bf33f286f1b44b7d4646bca0","0001c5569d398257fcd046cfc6c86fc2765fd3e3","0001c656ec0cee80adbf59f139cf19bcd3d598a6","0001c6d69e121f19e81a9d7faace9188ec247291","0001c716a2707e4331683f820f70e9e51abbe1b0","0001c73676aeaf757eb2de7cf6a5b829372e3c9a","0001c746a973f6e9c1871038af253bd6e848ff6d","0001c74ef58f2c455a58a2813fcc566148d9f85c","0001c7521b40eff5ff0e580031c8c3011400103b","0001c7544b407028c1d6456f6cf6648fa9701af6","0001c75594757f5330406d8d237547d13fc4ea32","0001c756f0ba255380208bca3bf6a7e7cd44690f"]
1800500ms th_a application.cpp:492 get_blockchain_synop ] reference_point: 0001ada79445f13682e34a31fa122cbcbbfba68f number_of_blocks_after_reference_point: 0 result: ["0000c756375599f45ec17cbbc52a18f31b25abf6","00014756adffcc733dca7791fedaf42573352c9c","000187564a3607bd8feaf45ede04a2a4e44ad83e","0001a7561760fe72f9de51d3a5e3bc4415d3463e","0001b7560bfefc10d57f0a18f4b415bc986e7a16","0001bf56be84f86a5b4bdca2008ffba59c5ce198","0001c35601b57967bf33f286f1b44b7d4646bca0","0001c5569d398257fcd046cfc6c86fc2765fd3e3","0001c656ec0cee80adbf59f139cf19bcd3d598a6","0001c6d69e121f19e81a9d7faace9188ec247291","0001c716a2707e4331683f820f70e9e51abbe1b0","0001c73676aeaf757eb2de7cf6a5b829372e3c9a","0001c746a973f6e9c1871038af253bd6e848ff6d","0001c74ef58f2c455a58a2813fcc566148d9f85c","0001c7521b40eff5ff0e580031c8c3011400103b","0001c7544b407028c1d6456f6cf6648fa9701af6","0001c75594757f5330406d8d237547d13fc4ea32","0001c756f0ba255380208bca3bf6a7e7cd44690f"]
1800688ms th_a application.cpp:492 get_blockchain_synop ] reference_point: 00018b78f3052a0ce0fbac70df02f31634305dc1 number_of_blocks_after_reference_point: 0 result: ["0000c756375599f45ec17cbbc52a18f31b25abf6","00014756adffcc733dca7791fedaf42573352c9c","000187564a3607bd8feaf45ede04a2a4e44ad83e","0001a7561760fe72f9de51d3a5e3bc4415d3463e","0001b7560bfefc10d57f0a18f4b415bc986e7a16","0001bf56be84f86a5b4bdca2008ffba59c5ce198","0001c35601b57967bf33f286f1b44b7d4646bca0","0001c5569d398257fcd046cfc6c86fc2765fd3e3","0001c656ec0cee80adbf59f139cf19bcd3d598a6","0001c6d69e121f19e81a9d7faace9188ec247291","0001c716a2707e4331683f820f70e9e51abbe1b0","0001c73676aeaf757eb2de7cf6a5b829372e3c9a","0001c746a973f6e9c1871038af253bd6e848ff6d","0001c74ef58f2c455a58a2813fcc566148d9f85c","0001c7521b40eff5ff0e580031c8c3011400103b","0001c7544b407028c1d6456f6cf6648fa9701af6","0001c75594757f5330406d8d237547d13fc4ea32","0001c756f0ba255380208bca3bf6a7e7cd44690f"]
1801000ms th_a witness.cpp:240 block_production_loo ] slot: 18 scheduled_witness: 1.6.1530 scheduled_time: 2015-08-22T04:30:01 now: 2015-08-22T04:30:01
1801000ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.1530 production slot has arrived; generating a block now...
1801003ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1801191ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1801332ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1801448ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1801557ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1801665ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1801772ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1801880ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1801994ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1802108ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1802217ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1802331ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1802440ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1802549ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1802657ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1802766ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
Segmentation fault (core dumped)
get_witness bitshares-argentina
{
"id": "1.6.707",
"witness_account": "1.2.8572",
3514165ms th_a thread.cpp:95 thread ] name:ntp tid:140288926934784
3514166ms ntp ntp.cpp:77 request_now ] resolving... ["pool.ntp.org",123]
3514166ms th_a thread.cpp:95 thread ] name:p2p tid:140288904390400
3514173ms th_a application.cpp:117 reset_p2p_node ] Adding seed node 104.236.51.238:1776
3514173ms ntp ntp.cpp:81 request_now ] sending request to 62.75.202.83:123
3514175ms th_a application.cpp:129 reset_p2p_node ] Configured p2p node to listen on 0.0.0.0:56445
3514177ms th_a application.cpp:179 reset_websocket_serv ] Configured websocket rpc to listen on 127.0.0.1:8090
3514177ms th_a witness.cpp:139 plugin_startup ] Launching block production for 1 witnesses.
3514178ms th_a main.cpp:165 main ] Started witness node on a chain with 127829 blocks.
3514178ms th_a main.cpp:166 main ] Chain ID is d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083
3514184ms ntp ntp.cpp:147 read_loop ] received ntp reply from 62.75.202.83:123
3514184ms ntp ntp.cpp:161 read_loop ] ntp offset: 294, round_trip_delay 10707
3514184ms ntp ntp.cpp:177 read_loop ] ntp_delta_time updated to 294
3514673ms th_a application.cpp:492 get_blockchain_synop ] reference_point: 0000000000000000000000000000000000000000 number_of_blocks_after_reference_point: 0 result: ["0000f3559d98bf93dd1f7901ac2ccbd452e611bc","00017355c1af25485603b9cc7c699def8cd995dc","0001b355ff6383082c0e5f20bcdaa6048ac2ae8a","0001d3559442c82f0570f0b36981b56eff5c43c3","0001e3554a0f694c82fb1b6c14c1327172bc4b56","0001eb55257a6d3c563ecd7519141499c2360ca6","0001ef559d44b8744c47f764bb0e49dc41f557f2","0001f155035c6c94533e706ed3bc728d948f7e50","0001f255ecca702f4c77dd8ea6177c8f96ff5a3d","0001f2d544532e44118fa11695bfd6f6282c5b3b","0001f315d65810292c0807101bfb8dd62ba798a0","0001f33554daf2c617ea4b4aa99471b0b8b5f37b","0001f345a165261885cbce3c6069b2818b5a3474","0001f34d1df9d508573c9d8bb41f2a90928343f6","0001f351735c35b128914ab19a5623a461058b11","0001f3534999a19244ca2b6d0c06d0c3001bca7d","0001f3545becc61c1fae72171c8e7ee7ab2497bd","0001f3558fde448eb58f504cf29751608f5a8ebe"]
3515000ms th_a witness.cpp:240 block_production_loo ] slot: 70 scheduled_witness: 1.6.14 scheduled_time: 2015-08-22T07:58:35 now: 2015-08-22T07:58:35
3515182ms th_a application.cpp:492 get_blockchain_synop ] reference_point: 0001f3950d68963e625bb00635af319e5c92d65c number_of_blocks_after_reference_point: 0 result: ["0000f395cb92dbb98048570fe4f9d15bec86a063","000173959fe061b2101f1eb98db0c2ff76214f2d","0001b3956efdabb08e5a922c44e8ebd246d984e7","0001d39542424e5722239c5ae5acf96c52fb3a2f","0001e395e2e07858fb2a023fc3b88a48c02ae186","0001eb955d23d037dc15a3a6f1400f78b231dd8b","0001ef95740613cd17502a0eb8c4b8cc8a3a8e76","0001f1958ad16b018fb0d59aca314c5b35861402","0001f2957f0a4a8fe88626f98e85f3a148708a60","0001f315d65810292c0807101bfb8dd62ba798a0","0001f3558fde448eb58f504cf29751608f5a8ebe","0001f375b59b9d9d5e1592f50b6080663566d36c","0001f3850c9e99994e9ecaff2d3e1ff722ed03ef","0001f38d4344efed8da646f862994826c0e990fb","0001f3910665edad95cf6f8286a5b98e74f1373c","0001f393c5d1a0e37b7141f3b434c818ca87d6b2","0001f394560a95f0090ae089cff6c8e46c4425a3","0001f3950d68963e625bb00635af319e5c92d65c"]
3515349ms th_a application.cpp:492 get_blockchain_synop ] reference_point: 0001f396c3191b6a65a7da41a79cfc75c71f57d8 number_of_blocks_after_reference_point: 0 result: ["0000f3961b99f42a7ec4454156616b6933183f36","00017396077234751f0884ce6dd1554066dc2cc1","0001b3969bc73c9878e6ac262860bc0ab7c94ad1","0001d39639b9edb3aafb7e436412a37569033ec3","0001e39664d0531007c9b55dc57b22533c3e7932","0001eb9689d05637742379e327446384064d9c14","0001ef96bf242122f4470fa5c6c3185f5b067da0","0001f196880be06943bc51626fd66ff87c8653d9","0001f296e81daa34ce339d804ac55056ef3ac93d","0001f3163223c4eb858f0e0db96f6fe99130755a","0001f3563cdb919fa7544386e428fe4ee2cdf5bd","0001f3760c24b81bd20d2833b440d6f49d9615f6","0001f3867c0411c9a6ee6af1ae77d9a3b3d16a88","0001f38e037953bb72c82f0babde0361c8192fb1","0001f3928baff42b1ed9df6c96821d0906f837a6","0001f394560a95f0090ae089cff6c8e46c4425a3","0001f3950d68963e625bb00635af319e5c92d65c","0001f396c3191b6a65a7da41a79cfc75c71f57d8"]
3516000ms th_a witness.cpp:240 block_production_loo ] slot: 1 scheduled_witness: 1.6.21 scheduled_time: 2015-08-22T07:5
info
{
"head_block_num": 129066,
"head_block_id": "0001f82ae17a57d9c8250d00f14448704b603ae0",
"head_block_age": "3 seconds old",
"next_maintenance_time": "5 minutes in the future",
"chain_id": "d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083",
get_dynamic_global_properties
{
"id": "2.1.0",
"random": "701a35b3f5901fa6b42dd4901c65583786ec4c5c",
"head_block_number": 129117,
"head_block_id": "0001f85d6294ba131378697035b571bd26dd8aa6",
"time": "2015-08-22T08:20:59",
"current_witness": "1.6.28",
"next_maintenance_time": "2015-08-22T08:25:00",
"witness_budget": 70776456,
"accounts_registered_this_interval": 0,
"recently_missed_count": 0,
"dynamic_flags": 0
}
Run out of disk space :(
p2p.log too large.
get_witness ihashfury
{
"id": "1.6.2561",
"witness_account": "1.2.38482",
Guys, what am I doing wrong when trying to import_balance?

You need to also import the active_key.

0 exception: unspecified
3030001 tx_missing_active_auth: missing required active authority
You should be voted in, IHashFury. Also, here is what I have found to be easier for importing balances. If you know when the snapshot was taken, you can pull up the transaction history for your account and look for a transaction which would have added BTS to your account right before the snapshot. Then you can look up that transaction on the blockchain using the ID given in the transaction history (blockchain_get_transaction). From there you can take the active key listed in the JSON output and dump the private key for it. That should make it easier to find a key that held a balance at the snapshot. Hope that helps.
./witness_node -s "104.200.28.117:61705"
p2p stcp_socket.cpp:111 readsome message_oriented_connection.cpp:199
2015-08-22T15:40:59 p2p:message read_loop on_connection_closed ] Remote peer 176.221.43.130:33323 closed their connection to us node.cpp:2724
2015-08-22T15:40:59 p2p:message read_loop display_current_conn ] Currently have 0 of [20/200] connections node.cpp:1625
2015-08-22T15:40:59 p2p:message read_loop display_current_conn ] my id is 79f5f5a4305f4496e2cb1e3f2dd48b4031d1b535bf391c91d7279545a87ed10b06 node.cpp:1626
2015-08-22T15:40:59 p2p:message read_loop trigger_p2p_network_ ] Triggering connect loop now node.cpp:974
2015-08-22T15:40:59 p2p:message read_loop schedule_peer_for_de ] scheduling peer for deletion: 176.221.43.130:33323 (this will not block) node.cpp:1534
2015-08-22T15:40:59 p2p:message read_loop schedule_peer_for_de ] asyncing delayed_peer_deletion_task to delete 1 peers node.cpp:1539
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task delayed_peer_deletio ] beginning an iteration of delayed_peer_deletion_task with 1 in queue node.cpp:1498
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task destroy ] calling close_connection() peer_connection.cpp:122
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task destroy ] close_connection completed normally peer_connection.cpp:124
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task destroy ] canceling _send_queued_messages task peer_connection.cpp:137
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task destroy ] cancel_and_wait completed normally peer_connection.cpp:139
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task destroy ] canceling accept_or_connect_task peer_connection.cpp:152
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task destroy ] accept_or_connect_task completed normally peer_connection.cpp:154
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task destroy_connection ] in destroy_connection() for message_oriented_connection.cpp:280
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":201,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"2015-08-22T15:40:59"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nBad file descriptor \n {\"message\":\"Bad file descriptor\"}\n asio asio.cpp:37 operator()\n\n {\"len\":16}\n p2p stcp_socket.cpp:111 readsome"}}]} message_oriented_connection.cpp:293
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task destroy_connection ] in destroy_connection() for message_oriented_connection.cpp:280
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":201,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"2015-08-22T15:40:59"},"format":"disconnected: ${e}","data":{"e":"0 exception: unspecified\nBad file descriptor \n {\"message\":\"Bad file descriptor\"}\n asio asio.cpp:37 operator()\n\n {\"len\":16}\n p2p stcp_socket.cpp:111 readsome"}}]} message_oriented_connection.cpp:293
2015-08-22T15:40:59 p2p:delayed_peer_deletion_task delayed_peer_deletio ] leaving delayed_peer_deletion_task node.cpp:1501
2015-08-22T15:41:08 p2p:p2p_network_connect_loop p2p_network_connect_ ] Starting an iteration of p2p_network_connect_loop(). node.cpp:881
2015-08-22T15:41:08 p2p:p2p_network_connect_loop display_current_conn ] Currently have 0 of [20/200] connections node.cpp:1625
2015-08-22T15:41:08 p2p:p2p_network_connect_loop display_current_conn ] my id is 79f5f5a4305f4496e2cb1e3f2dd48b4031d1b535bf391c91d7279545a87ed10b06 node.cpp:1626
2015-08-22T15:41:08 p2p:p2p_network_connect_loop display_current_conn ] Currently have 0 of [20/200] connections node.cpp:1625
2015-08-22T15:41:08 p2p:p2p_network_connect_loop display_current_conn ] my id is 79f5f5a4305f4496e2cb1e3f2dd48b4031d1b535bf391c91d7279545a87ed10b06
You can get the info on the current testnet here afaik: https://github.com/cryptonomex/graphene/releases/tag/test1
The original conversation is a few pages back here: https://bitsharestalk.org/index.php/topic,17962.330.html
Run out of disk space :(
p2p.log too large.

On my side it's 4~5 GB/h for one node. Haven't had time to tweak the settings. Just mounted a larger partition, will try.
Same here, p2p.log grows like 1Gb/h
have you tried setting log level from debug to info? I'll try it now.
107M Aug 22 16:30 p2p.log
103M Aug 22 06:59 p2p.log.20150822T100000
183M Aug 22 07:59 p2p.log.20150822T110000
116M Aug 22 08:59 p2p.log.20150822T120000
101M Aug 22 09:59 p2p.log.20150822T130000
182M Aug 22 10:59 p2p.log.20150822T140000
137M Aug 22 11:59 p2p.log.20150822T150000
243M Aug 22 13:59 p2p.log.20150822T170000
390M Aug 22 14:59 p2p.log.20150822T180000
293M Aug 22 15:59 p2p.log.20150822T190000
107M Aug 22 16:30 p2p.log.20150822T200000
You can get the info on the current testnet here afaik: https://github.com/cryptonomex/graphene/releases/tag/test1
Thanks.
git checkout test1

You can get the info on the current testnet here afaik: https://github.com/cryptonomex/graphene/releases/tag/test1
Thanks.
I got a few more things figured out, making another attempt to join in the fun.
Is the above where most are testing now? Where do you need an extra set of dev eyes to focus?
Are the download links on that github page the best way to get the code to compile, or is that another method to download a tagged commit from github similar to "git clone https://github.com/cryptonomex/graphene.git" only using a tag reference?
if [ ! -d "graphene" ]; then
echo_msg "building bitshares graphene toolkit.."
git clone https://github.com/cryptonomex/graphene.git
cd graphene
git submodule update --init --recursive
cmake .
make
cd ~/bts2.0
fi
#if [ ! -d "graphene" ]; then
echo_msg "building bitshares graphene toolkit.."
# git clone https://github.com/cryptonomex/graphene.git
cd graphene
git checkout test1
git submodule update --init --recursive
cmake .
make
cd ~/bts2.0
#fi
I haven't used the vagrant script myself. git checkout test1 is not a replacement for git clone; it is used after git clone and before git submodule update.
Dog gone it puppies, not enough info. You tried to answer 1 question, but neglected to say if test1 is what I should be trying to run.
Remember, I'm not a git expert. I am using a slight variation of the Vagrant script to build, which is in part:

if [ ! -d "graphene" ]; then
echo_msg "building bitshares graphene toolkit.."
git clone https://github.com/cryptonomex/graphene.git
cd graphene
git submodule update --init --recursive
cmake .
make
cd ~/bts2.0
fi
Changed it to:

#if [ ! -d "graphene" ]; then
echo_msg "building bitshares graphene toolkit.."
# git clone https://github.com/cryptonomex/graphene.git
cd graphene
git checkout test1
git submodule update --init --recursive
cmake .
make
cd ~/bts2.0
#fi
which produces this error: pathspec 'test1' did not match any file(s) known to git.
Please help...
#if [ ! -d "graphene" ]; then
echo_msg "building bitshares graphene toolkit.."
git clone https://github.com/cryptonomex/graphene.git
cd graphene
git checkout test1
git submodule update --init --recursive
cmake .
make
cd ~/bts2.0
#fi
git clone https://github.com/cryptonomex/graphene.git
cd graphene
git checkout test1
git submodule update --init --recursive
cmake .
make
10 assert_exception: Assert Exception
operations.size() > 0: A transaction must have at least one operation
{"trx":{"ref_block_num":0,"ref_block_prefix":0,"expiration":"1970-01-01T00:00:00","operations":[],"extensions":[]}}
th_a transaction.cpp:51 validate
{"name_or_id":"the-ae"}
th_a wallet.cpp:2762 import_balance
if [ ! -d "graphene" ]; then
echo_msg "building bitshares graphene toolkit test1..."
git clone https://github.com/cryptonomex/graphene.git
cd graphene
git checkout test1
git submodule update --init --recursive
cmake .
make
cd ~/bts2.0/aug20
fi
Get this error:

Pretty sure that means that key has no balance.

10 assert_exception: Assert Exception
operations.size() > 0: A transaction must have at least one operation
{"trx":{"ref_block_num":0,"ref_block_prefix":0,"expiration":"1970-01-01T00:00:00","operations":[],"extensions":[]}}
th_a transaction.cpp:51 validate
{"name_or_id":"the-ae"}
th_a wallet.cpp:2762 import_balance
when I try to import balance...
For building you can also try my docker build box:
Example for a target build directory of ~/graphene:

docker run -v ~:/build sile16/graphene-build test1
The source code for the build box can be inspected at:
https://github.com/sile16/bts2/tree/master/Docker/graphene-build
Also, can someone help me vote in my delegate everydaycrypto? I don't have enough votes, thanks!
get_witness delegate.verbaltech
{
"id": "1.6.1621",
"witness_account": "1.2.22408",
"signing_key": "GPHxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"next_secret_hash": "ghfghfghfghfhfghfghfghfghf",
"previous_secret": "0000000000000000000000000000000000000000",
"vote_id": "1:1621",
"total_votes": 0,
"url": ""
}
{"new_block":{"previous":"0002a82166d023fa7d649b09e5d81098fb7bf60b","timestamp":"2015-08-22T22:21:18","witness":"1.6.27","next_secret_hash":"22230db1a0563e301b74c59ef7e1ff1612bff7b7","previous_secret":"9eabe79eb0d03d3c8630c0886c58462516913db4","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f1109216757ffd893766ca55fd2e8b6b870ce120eb8ac0b8d2b5fe308bb33b5441a3d8e49d7de51cad7163d71febab7eb119bf6cc1c0ca6a2bf9dfd29e85f879a","transactions":[]}}
th_a db_block.cpp:176 _push_block
1279000ms th_a witness.cpp:240 block_production_loo ] slot: 66 scheduled_witness: 1.6.1526 scheduled_time: 2015-08-22T22:21:19 now: 2015-08-22T22:21:19
1279000ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.1526 production slot has arrived; generating a block now...
Program received signal SIGSEGV, Segmentation fault.
0x0000000002851792 in SHA256_Update.part.0 ()
(gdb) bt
second_branch_itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:186 fetch_branch_from
{"first":"0002a822214f6038949b2585653866af1da863a3","second":"0002a7e8b9613cd334a36aeccdb1d32460d8fa53"}
th_a fork_database.cpp:217 fetch_branch_from
{"new_block":{"previous":"0002a82166d023fa7d649b09e5d81098fb7bf60b","timestamp":"2015-08-22T22:21:18","witness":"1.6.27","next_secret_hash":"22230db1a0563e301b74c59ef7e1ff1612bff7b7","previous_secret":"9eabe79eb0d03d3c8630c0886c58462516913db4","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f1109216757ffd893766ca55fd2e8b6b870ce120eb8ac0b8d2b5fe308bb33b5441a3d8e49d7de51cad7163d71febab7eb119bf6cc1c0ca6a2bf9dfd29e85f879a","transactions":[]}} th_a db_block.cpp:176 _push_block 1279000ms th_a witness.cpp:240 block_production_loo ] slot: 66 scheduled_witness: 1.6.1526 scheduled_time: 2015-08-22T22:21:19 now: 2015-08-22T22:21:19
1279000ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.1526 production slot has arrived; generating a block now...
Program received signal SIGSEGV, Segmentation fault. 0x0000000002851792 in SHA256_Update.part.0 ()
(gdb) bt
#0  0x0000000002851792 in SHA256_Update.part.0 ()
#1  0x0000000002851b29 in SHA224_Update ()
#2  0x0000000002574131 in fc::sha224::encoder::write (this=0x7ffff585a220,
d=0x7ffff585a077 "\366\v", dlen=1)
at /home/user/src/graphene/libraries/fc/src/crypto/sha224.cpp:43
#3 0x00000000023c4c99 in fc::raw::pack<fc::sha224::encoder> (s=..., v=...)
at /home/user/src/graphene/libraries/fc/include/fc/io/raw.hpp:141
#4 0x00000000023c4f34 in fc::raw::detail::pack_object_visitor<fc::sha224::encoder, graphene::db::object_id<(unsigned char)1, (unsigned char)6, graphene::chain::witness_object> >::operator()<fc::unsigned_int, graphene::db::object_id<(unsigned char)1, (unsigned char)6, graphene::chain::witness_object>, &graphene::db::object_id<(unsigned char)1, (unsigned char)6, graphene::chain::witness_object>::instance> (this=0x7ffff585a0e0, name=0x29d9a9e "instance")
at /home/user/src/graphene/libraries/fc/include/fc/io/raw.hpp:250
#5 0x00000000023c4eba in fc::reflector<graphene::db::object_id<(unsigned char)1, (unsigned char)6, graphene::chain::witness_object> >::visit<fc::raw::detail::pack_object_visitor<fc::sha224::encoder, graphene::db::object_id<(unsigned char)1, (unsigned char)6, graphene::chain::witness_object> > > (
visitor=...)
at /home/user/src/graphene/libraries/db/include/graphene/db/object_id.hpp:143
#6 0x00000000023c4dbc in fc::raw::detail::if_enum<fc::false_type>::pack<fc::sha224::encoder, graphene::db::object_id<(unsigned char)1, (unsigned char)6, graphene::chain::witness_object> > (s=..., v=...)
at /home/user/src/graphene/libraries/fc/include/fc/io/raw.hpp:296
#7 0x00000000023c4c38 in fc::raw::detail::if_reflected<fc::true_type>::pack<fc::sha224::encoder, graphene::db::object_id<(unsigned char)1, (unsigned char)6, graphene::chain::witness_object> > (s=..., v=...)
at /home/user/src/graphene/libraries/fc/include/fc/io/raw.hpp:332
#8 0x00000000023c4adc in fc::raw::pack<fc::sha224::encoder, graphene::db::object_id<(unsigned char)1, (unsigned char)6, graphene::chain::witness_object> > (s=..., v=...)
at /home/user/src/graphene/libraries/fc/include/fc/io/raw.hpp:478
#9 0x00000000023c481b in fc::raw::detail::pack_object_visitor<fc::sha224::encoder, graphene::chain::signed_block_header>::operator()<graphene::db::object_id<(unsigned char)1, (unsigned char)6, graphene::chain::witness_object>, graphene::chain::block_header, &graphene::chain::block_header::witness> (
this=0x7ffff585a1b0, name=0x29d9a40 "witness")
at /home/user/src/graphene/libraries/fc/include/fc/io/raw.hpp:250
#10 0x00000000023c45a7 in fc::reflector<graphene::chain::block_header>::visit<fc::raw::detail::pack_object_visitor<fc::sha224::encoder, graphene::chain::signed_block_header> > (v=...)
at /home/user/src/graphene/libraries/chain/include/graphene/chain/protocol/block.hpp:56
#11 0x00000000023c42e1 in fc::reflector<graphene::chain::signed_block_header>::visit<fc::raw::detail::pack_object_visitor<fc::sha224::encoder, graphene::chain::signed_block_header> > (v=...)
at /home/user/src/graphene/libraries/chain/include/graphene/chain/protocol/block.hpp:58
#12 0x00000000023c40bc in fc::raw::detail::if_enum<fc::false_type>::pack<fc::sha224::encoder, graphene::chain::signed_block_header> (s=..., v=...)
at /home/user/src/graphene/libraries/fc/include/fc/io/raw.hpp:296
#13 0x00000000023c3e06 in fc::raw::detail::if_reflected<fc::true_type>::pack<fc::sha224::encoder, graphene::chain::signed_block_header> (s=..., v=...)
at /home/user/src/graphene/libraries/fc/include/fc/io/raw.hpp:332
#14 0x00000000023c3a62 in fc::raw::pack<fc::sha224::encoder, graphene::chain::signed_block_header> (s=..., v=...)
at /home/user/src/graphene/libraries/fc/include/fc/io/raw.hpp:478
#15 0x00000000023c37c4 in fc::sha224::hash<graphene::chain::signed_block_header> (t=...)
at /home/user/src/graphene/libraries/fc/include/fc/crypto/sha224.hpp:29
#16 0x00000000023c2f97 in graphene::chain::signed_block_header::id (
this=0x7ffff585a360)
at /home/user/src/graphene/libraries/chain/protocol/block.cpp:36
#17 0x000000000238b0ac in graphene::chain::fork_item::fork_item (
this=0x86a3858, d=...)
at /home/user/src/graphene/libraries/chain/include/graphene/chain/fork_database.hpp:35
#18 0x0000000002392226 in __gnu_cxx::new_allocator<graphene::chain::fork_item>::construct<graphene::chain::fork_item<graphene::chain::signed_block const&> > (this=0x7ffff585a54f, __p=0x86a3858)
at /usr/include/c++/4.8/ext/new_allocator.h:120
#19 0x0000000002392045 in std::allocator_traits<std::allocator<graphene::chain::fork_item> >::_S_construct<graphene::chain::fork_item<graphene::chain::signed_block const&> >(std::allocator<graphene::chain::fork_item>&, std::allocator_traits<std::allocator<graphene::chain::fork_item> >::__construct_helper*, (graphene::chain::fork_item<graphene::chain::signed_block const&>&&)...) (
__a=..., __p=0x86a3858) at /usr/include/c++/4.8/bits/alloc_traits.h:254
#20 0x0000000002391f9d in std::allocator_traits<std::allocator<graphene::chain::fork_item> >::construct<graphene::chain::fork_item<graphene::chain::signed_block const&> >(std::allocator<graphene::chain::fork_item>&, graphene::chain::fork_item<graphene::chain::signed_block const&>*, (graphene::chain::fork_item<graphene::chain::signed_block const&>&&)...) (__a=..., __p=0x86a3858)
at /usr/include/c++/4.8/bits/alloc_traits.h:393
#21 0x0000000002391bce in std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2>::_Sp_counted_ptr_inplace<graphene::chain::signed_block const&> (
this=0x86a3840, __a=...)
at /usr/include/c++/4.8/bits/shared_ptr_base.h:399
#22 0x0000000002390f37 in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2> >::construct<std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2><std::allocator<graphene::chain::fork_item> const, graphene::chain::signed_block const&> > (this=0x7ffff585a627, __p=0x86a3840)
at /usr/include/c++/4.8/ext/new_allocator.h:120
#23 0x00000000023902dd in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2> > >::_S_construct<std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2><std::allocator<graphene::chain::fork_item> const, graphene::chain::signed_block const&> >(std::allocator<std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2> >&, std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2> > >::__construct_helper*, (std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2><std::allocator<graphene::chain::fork_item> const, graphene::chain::signed_block const&>&&)...) (
__a=..., __p=0x86a3840) at /usr/include/c++/4.8/bits/alloc_traits.h:254
#24 0x000000000238f9ae in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2> > >::construct<std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2><std::allocator<graphene::chain::fork_item> const, graphene::chain::signed_block const&> >(std::allocator<std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2> >&, std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2><std::allocator<graphene::chain::fork_item> const, graphene::chain::signed_block const&>*, (std::_Sp_counted_ptr_inplace<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, (__gnu_cxx::_Lock_policy)2><std::allocator<graphene::chain::fork_item> const, graphene::chain::signed_block const&>&&)...) (__a=..., __p=0x86a3840)
at /usr/include/c++/4.8/bits/alloc_traits.h:393
#25 0x000000000238f29f in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, graphene::chain::signed_block const&> (this=0x7ffff585a808, __a=...)
at /usr/include/c++/4.8/bits/shared_ptr_base.h:502
#26 0x000000000238e832 in std::__shared_ptr<graphene::chain::fork_item, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<graphene::chain::fork_item>, graphene::chain::signed_block const&> (this=0x7ffff585a800, __tag=...,
__a=...) at /usr/include/c++/4.8/bits/shared_ptr_base.h:957
#27 0x000000000238d76a in std::shared_ptr<graphene::chain::fork_item>::shared_ptr<std::allocator<graphene::chain::fork_item>, graphene::chain::signed_block const&> (this=0x7ffff585a800, __tag=..., __a=...)
at /usr/include/c++/4.8/bits/shared_ptr.h:316
#28 0x000000000238c683 in std::allocate_shared<graphene::chain::fork_item, std::allocator<graphene::chain::fork_item>, graphene::chain::signed_block const&> (__a=...) at /usr/include/c++/4.8/bits/shared_ptr.h:598
#29 0x000000000238b9e3 in std::make_shared<graphene::chain::fork_item, graphene::chain::signed_block const&> ()
at /usr/include/c++/4.8/bits/shared_ptr.h:614
#30 0x0000000002387bd9 in graphene::chain::fork_database::push_block (
this=0x39b0200, b=...)
at /home/user/src/graphene/libraries/chain/fork_database.cpp:51
#31 0x00000000021d15ca in graphene::chain::database::_push_block (
this=0x39b0018, new_block=...)
at /home/user/src/graphene/libraries/chain/db_block.cpp:103
#32 0x00000000021d14e9 in graphene::chain::database::__lambda15::operator()
(__closure=0x7ffff585aba0)
at /home/user/src/graphene/libraries/chain/db_block.cpp:90
#33 0x00000000021f304f in graphene::chain::database::with_skip_flags<graphene::chain::database::push_block(const graphene::chain::signed_block&, uint32_t)::__lambda15>(uint32_t, graphene::chain::database::__lambda15) (
this=0x39b0018, skip_flags=0, callback=...)
at /home/user/src/graphene/libraries/chain/include/graphene/chain/database.hpp:285
#34 0x00000000021d1548 in graphene::chain::database::push_block (
this=0x39b0018, new_block=..., skip=0)
at /home/user/src/graphene/libraries/chain/db_block.cpp:91
#35 0x00000000021d498f in graphene::chain::database::_generate_block (
this=0x39b0018, when=..., witness_id=..., block_signing_private_key=...)
at /home/user/src/graphene/libraries/chain/db_block.cpp:299
#36 0x00000000021d4a7e in graphene::chain::database::_generate_block (
this=0x39b0018, when=..., witness_id=..., block_signing_private_key=...)
at /home/user/src/graphene/libraries/chain/db_block.cpp:312
#37 0x00000000021d4a7e in graphene::chain::database::_generate_block (
this=0x39b0018, when=..., witness_id=..., block_signing_private_key=...)
at /home/user/src/graphene/libraries/chain/db_block.cpp:312
#38 0x00000000021d4a7e in graphene::chain::database::_generate_block (
this=0x39b0018, when=..., witness_id=..., block_signing_private_key=...)
at /home/user/src/graphene/libraries/chain/db_block.cpp:312
#39 0x00000000021d4a7e in graphene::chain::database::_generate_block (
this=0x39b0018, when=..., witness_id=..., block_signing_private_key=...)
at /home/user/src/graphene/libraries/chain/db_block.cpp:312
#40 0x00000000021d4a7e in graphene::chain::database::_generate_block (
this=0x39b0018, when=..., witness_id=..., block_signing_private_key=...)
at /home/user/src/graphene/libraries/chain/db_block.cpp:312
#41 0x00000000021d4a7e in graphene::chain::database::_generate_block (
this=0x39b0018, when=..., witness_id=..., block_signing_private_key=...)
at /home/user/src/graphene/libraries/chain/db_block.cpp:312
#42 0x00000000021d4a7e in graphene::chain::database::_generate_block (
this=0x39b0018, when=..., witness_id=..., block_signing_private_key=...)
at /home/user/src/graphene/libraries/chain/db_block.cpp:312
#43 0x00000000021d4a7e in graphene::chain::database::_generate_block (
this=0x39b0018, when=..., witness_id=..., block_signing_private_key=...)
at /home/user/src/graphene/libraries/chain/db_block.cpp:312
#44 0x00000000021d4a7e in graphene::chain::database::_generate_block (
Don't know if this has already been found and fixed, but my witness node died with the log and backtrace above. I'll get it back up in a few minutes and vote for those needing it.
get_witness delegate.verbaltech
{
"id": "1.6.1621",
"witness_account": "1.2.22408",
"signing_key": "GPHxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
"next_secret_hash": "556678686970979675674876785---jhjbvth,
"previous_secret": "0000000000000000000000000000000000000000",
"vote_id": "1:1621",
"total_votes": 0, <---------------- will this change?
"url": ""
}
info
{
"head_block_num": 0,
"head_block_id": "0000000000000000000000000000000000000000",
"head_block_age": "5 days old",
"next_maintenance_time": "45 years ago", <----------------------------- Does this mean there will not be another one to apply the votes I cast????????
"chain_id": "d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083",
Dang, shit keeps changing!
See https://github.com/cryptonomex/graphene/releases
I had to restart the witness after I saw the witness-id for delegate.verbaltech changed. Is that because of the different genesis block? I'm using the aug-20th genesis block now, with chain id d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083 and seed node 104.200.28.117:61705.
After that I had to go through the rigmarole of importing the balance all over again. So, if the above params are correct, whenever that maintenance interval passes I should be voted in.
What's the API call to check if I'm producing blocks?
Will the total_votes go up in this API call after the maintenance interval passes? I just looked for dele-puppy and it still shows total_votes as 0 (same get_witness output as above).
./witness_node -s "104.236.51.238:1776" --genesis-json aug-20-test-genesis.json
./cli_wallet --chain-id d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083
I'm going to leave my witness node running until tomorrow, but I'm tired of waiting for puppies to vote me in or for the maintenance interval to pass, which should do the same.
You've been voted in. 8)
I'll have to try again Monday.
Good luck with your efforts, puppies!
Thanks for the info abit. Did you vote me in or did you just observe it was so?
You will stay voted in until we start a new chain, at which point you will need to import your keys again and get voted in again.
If you haven't noticed by now, I'm full of questions. Being voted in will last until the next test or code revision, right? At which time one must run through the process of balance import and getting voted in again?
I just tried to check the node to see if I could find evidence it was producing blocks or was voted in but could not figure out how.
I did a forum search for "voted in" and "producing blocks", and of all the messages that had those phrases NOT ONE gave an API call or method for checking those things. Nice communication there, folks. Real nice :(
I understand you are frustrated with the information being spread out throughout this thread. I have been responding to this thread mostly from work, typing on a cellphone, and have not had the time nor the wherewithal to collect everything into a single post.
sudo apt-get update
Install gcc-4.9 etc.:
sudo apt-get install gcc-4.9 g++-4.9 cmake make libbz2-dev libdb++-dev libdb-dev libssl-dev openssl libreadline-dev autoconf libtool git
If you cannot install gcc-4.9, you will need to add this repository beforehand and try again:
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
BOOST_ROOT=$HOME/opt/boost_1_57_0
sudo apt-get update
sudo apt-get install autotools-dev build-essential g++ libbz2-dev libicu-dev python-dev
wget -c 'http://sourceforge.net/projects/boost/files/boost/1.57.0/boost_1_57_0.tar.bz2/download' -O boost_1_57_0.tar.bz2
[ $( sha256sum boost_1_57_0.tar.bz2 | cut -d ' ' -f 1 ) == "910c8c022a33ccec7f088bd65d4f14b466588dda94ba2124e78b8c57db264967" ] || ( echo 'Corrupt download' ; exit 1 )
tar xjf boost_1_57_0.tar.bz2
cd boost_1_57_0/
./bootstrap.sh "--prefix=$BOOST_ROOT"
./b2 install
BOOST_ROOT=$HOME/opt/boost_1_57_0
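The inline checksum test above works, but it exits a subshell rather than the calling script. Here is a small helper of my own (the function name is hypothetical) that does the same verification and returns a clear status:

```shell
# Verify a file against an expected SHA-256 digest; returns non-zero on
# mismatch so a build script can abort cleanly.
verify_sha256() {
  file="$1"
  expected="$2"
  actual=$(sha256sum "$file" | cut -d ' ' -f 1)
  if [ "$actual" = "$expected" ]; then
    echo "ok: $file"
  else
    echo "Corrupt download: $file (got $actual, wanted $expected)" >&2
    return 1
  fi
}

# Usage:
# verify_sha256 boost_1_57_0.tar.bz2 910c8c022a33ccec7f088bd65d4f14b466588dda94ba2124e78b8c57db264967 || exit 1
```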
Check out and build:
cd ~
git clone https://github.com/cryptonomex/graphene.git
cd graphene
git checkout test1
git submodule update --init --recursive
cmake -DBOOST_ROOT="$BOOST_ROOT" -DCMAKE_BUILD_TYPE=Debug .
make
cd ~/graphene/programs/witness_node
wget https://github.com/cryptonomex/graphene/releases/download/test1/aug-20-test-genesis.json
screen
5. Run the witness. Current nodes for test 1 (replace for other tests):
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-20-test-genesis.json -d test_net_1 -s "104.236.51.238:1776" -s "176.221.43.130:33323" -s "45.55.6.216:1776" -s "114.92.254.159:62015"
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json ~/graphene/programs/witness-node/aug-20-test-genesis.json -d test_net_1 -s "104.236.51.238:1776" -s "176.221.43.130:33323" -s "45.55.6.216:1776" -s "114.92.254.159:62015"
Ctrl A Ctrl D
7. Extract your wif keys for user and balances as per xeroc's instructions: https://github.com/cryptonomex/graphene/wiki/Howto-become-an-active-witness-in-BitShares-2.0
cd ~/graphene/programs/cli_wallet
9. Run the cli. Current chain id for test 1:
./cli_wallet -w test_wallet_puppies --chain-id d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083
Note: screen -r (to reattach)
13. Exit your witness with Ctrl-C.
14. Restart with parameters to start block producing (block producing needs your witness id and private keys):
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-20-test-genesis.json -d test_net_1 -s "104.236.51.238:1776" -s "176.221.43.130:33323" -s "45.55.6.216:1776" -s "114.92.254.159:62015" --witness-id '"1.6.5156"' --private-key '["GPH6JhL..your.signing.key..bc5mWyCvERV3coy","5K..your.secret..a"]'
15. See your witness producing blocks. You can Ctrl-A Ctrl-D to detach from screen.
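Since nodes on this testnet break now and then, a simple supervisor loop can keep the witness up between crashes. This is a sketch of my own, not part of the tutorial; the function name and the 1-second delay are arbitrary:

```shell
# Re-run a command whenever it exits non-zero; stop once it exits cleanly.
# Pass the real witness_node invocation as the arguments.
run_until_stopped() {
  until "$@"; do
    echo "process exited with status $?; restarting in 1s..." >&2
    sleep 1
  done
}

# Example (inside screen, with your own ids/keys):
# run_until_stopped ./witness_node --rpc-endpoint "127.0.0.1:8090" \
#   --genesis-json aug-20-test-genesis.json -d test_net_1 \
#   -s "104.236.51.238:1776" --witness-id '"1.6.5156"' \
#   --private-key '["GPH...","5K..."]'
```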
ocked >>> get_witness spartako
get_witness spartako
{
"id": "1.6.4231",
"witness_account": "1.2.72727",
"signing_key": "GPH5mgup8evDqMnT86L7scVebRYDC2fwAWmygPEUL43LjstQegYCC",
"next_secret_hash": "7933c3cad5ce7cf05646dd36a4da6882fbd74255",
"previous_secret": "311664bc1ac2a3accdc31761f36835c9c985b1e2",
"pay_vb": "1.13.185",
"vote_id": "1:4231",
"total_votes": "64999598829",
"url": ""
}
That's odd spartako, I see the same get_witness output as above.
But I don't see you in info. I do see double representations of 1.6.1526 1530 1537 and 1595 so maybe it's a display issue.
In regards to checking witness stats: I don't think there is currently a way to see them directly. You can use the get_witness command to check that the next_secret_hash and previous_secret are changing, watch your witness node's output for your witness number, or use the GUI and go to the block explorer page.
Also great job betax, and thanks for the nodes abit.
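To watch for your witness number without staring at the console, you can grep the node's output for the production message quoted elsewhere in this thread. A sketch under the assumption that the log line format is exactly the one shown in the logs above ("Witness <id> production slot has arrived"); the function name is mine:

```shell
# Count how many times a given witness hit its production slot, based on
# the console log format quoted in this thread (an assumption, not an API).
count_production_slots() {
  witness_id="$1"
  logfile="$2"
  grep -c "Witness $witness_id production slot has arrived" "$logfile"
}

# Usage: count_production_slots 1.6.1624 witness.log
```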
If --resync-blockchain is specified, does that prevent the witness from signing blocks? I couldn't get my witness to start signing, but then tried removing that and it seemed to work. Also, if it has to resync from scratch (like an empty folder) then it also doesn't seem to sign blocks. Not 100% sure, but it seems I have to: get the witness all synced up, then shut down gracefully with a ^C (which doesn't always work btw, it sometimes says the DB is corrupt and restarts from scratch), then start it again, at which point it then starts to sign blocks.
I've seen that issue, but it's not 100%. For example dele-puppy is currently producing blocks and was launched with a --resync-blockchain. I seem to have noticed it more on my home networked boxes.
Are existing 0.9.2 delegates automatically imported as a witness or is it required to upgrade and create witness object? I'm hoping to skip sorting through the hundreds of balance id's for a proper balance that also existed on the 20th.
Thank you betax, your guide was very helpful. +5%
All existing delegates are imported as witnesses. When you get your node set up let us know and we will vote you in.
Seems I don't have my owner key imported into my VPS. I won't be able to vote till after work. I'm sure someone else will vote you in.
I believe I'm up now. ID: 1.6.1624
0 exception: unspecified
3030001 tx_missing_active_auth: missing required active authority
Missing Active Authority 1.2.63353
You may need to import the ACTIVE_KEY too!
Has anyone found a way to reduce the log size? Right now I have a loop running to delete the old logs, but I imagine there must be a setting somewhere. I have tried
cmake .
instead of
cmake -DCMAKE_BUILD_TYPE=Debug
but that didn't help. Also, if it is helpful to anyone, the loop is
while sleep 1800; do rm p2p.log.*; done
and I just have it running in the p2p log folder to delete the archived logs every half hour or so.
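A gentler alternative to deleting on a timer (my own sketch; the 60-minute cutoff and the example path are arbitrary) is to remove only rotated p2p logs older than a given age, so the live p2p.log is never touched:

```shell
# Delete rotated p2p logs (p2p.log.<timestamp>) older than 60 minutes,
# leaving the active p2p.log alone.
clean_old_p2p_logs() {
  logdir="$1"
  find "$logdir" -maxdepth 1 -name 'p2p.log.*' -mmin +60 -delete
}

# Usage, e.g. from cron instead of the sleep loop (path is hypothetical):
# clean_old_p2p_logs ~/graphene/programs/witness_node/test_net_1/logs/p2p
```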
Ran out of disk space :(
p2p.log too large.
On my side it's 4~5 GB/h for one node. Haven't had time to tweak the settings. Just mounted a larger partition, will try.
Same here, p2p.log grows like 1 GB/h.
Have you tried setting the log level from debug to info?
I'll try it now.
Setting the log level from debug to info for [logger.p2p] reduced its size by about an order of magnitude:
107M Aug 22 16:30 p2p.log
103M Aug 22 06:59 p2p.log.20150822T100000
183M Aug 22 07:59 p2p.log.20150822T110000
116M Aug 22 08:59 p2p.log.20150822T120000
101M Aug 22 09:59 p2p.log.20150822T130000
182M Aug 22 10:59 p2p.log.20150822T140000
137M Aug 22 11:59 p2p.log.20150822T150000
243M Aug 22 13:59 p2p.log.20150822T170000
390M Aug 22 14:59 p2p.log.20150822T180000
293M Aug 22 15:59 p2p.log.20150822T190000
107M Aug 22 16:30 p2p.log.20150822T200000
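For reference, the change amounts to something like this in the witness_node logging configuration (a sketch: the section and appender names follow the [logger.p2p] convention mentioned above, but the exact file layout may differ between builds):

```ini
# Hypothetical fragment of the witness_node config
[logger.p2p]
level=info      # was: debug
appenders=p2p
```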
1249999ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.4231 production slot has arrived; generating a block now...
1250000ms th_a witness.cpp:266 block_production_loo ] Got exception while generating block:
10 assert_exception: Assert Exception
_consecutive_production_enabled || db.get_dynamic_global_properties().current_witness != scheduled_witness: Last block was generated by the same witness, this node is probably disconnected from the network so block production has been disabled. Disable this check with --allow-consecutive option.
{}
th_a witness.cpp:248 block_production_loop
14. Restart with parameters to start block production (block production needs your witness id and private keys). Current node: abit's for test 1.
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-14-test-genesis.json -d test_net_1 -s 114.92.254.159:62015 --witness-id '"1.6.5156"' --private-key '["GPH6JhL..your.signing.key..bc5mWyCvERV3coy","5K..your.secret..a"]'
15. See your witness producing blocks; you can press Ctrl-A Ctrl-D to detach from the screen session.
Thanks for sharing, but don't use my node (at least, not only my node); it breaks now and then. And the genesis file in step 14 should be aug-20-test-genesis.json.
Here is a list of nodes I'm using (it still breaks sometimes):
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-20-test-genesis.json -d test_net_1 -s "104.236.51.238:1776" -s "176.221.43.130:33323" -s "45.55.6.216:1776" -s "114.92.254.159:62015"
@betax: could you put your tutorial into the GitHub wiki, please? If not, may I do it for you?
I have found this error in my witness. Is it related to the p2p problem?
1249999ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.4231 production slot has arrived; generating a block now...
1250000ms th_a witness.cpp:266 block_production_loo ] Got exception while generating block:
10 assert_exception: Assert Exception
_consecutive_production_enabled || db.get_dynamic_global_properties().current_witness != scheduled_witness: Last block was generated by the same witness, this node is probably disconnected from the network so block production has been disabled. Disable this check with --allow-consecutive option.
{}
th_a witness.cpp:248 block_production_loop
Looks like you're on a fork. Try a restart; if it happens again, restart with --resync-blockchain.
It seems that these seed nodes are all dead: -s "104.236.51.238:1776" -s "176.221.43.130:33323" -s "45.55.6.216:1776" -s "114.92.254.159:62015". If anyone knows a good node, please post it here. Thanks.
./witness_node --p2p-endpoint "0.0.0.0:A_FIXED_PORT" --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-20-test-genesis.json -d test_net_1 -s "104.236.51.238:1776" -s "176.221.43.130:33323" -s "45.55.6.216:1776" -s "114.92.254.159:62015"
45.55.6.216:1776 is still up:
unlocked >>> info
info
{
"head_block_num": 245425,
"head_block_id": "0003beb1a638ed50d318117d014424d44c2b2360",
"head_block_age": "0 second old",
"next_maintenance_time": "5 minutes in the future",
"chain_id": "d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083",
"active_witnesses":
2015-08-23T21:27:10 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Forcibly disconnecting from handshaking peer 45.55.6.216:1776 due to inactivity of at least 5 seconds node.cpp:1270
2015-08-23T21:27:10 p2p:connect_to_task connect_to ] fatal: error connecting to peer 45.55.6.216:1776: 0 exception: unspecified
grep inbound p2p.log|grep active
grep outbound p2p.log|grep active
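To turn those greps into a quick connection count, you can filter on the periodic display_current_conn summary lines instead (a sketch; the exact log wording may vary between builds):

```shell
# Count how many peers the node reported as active in the
# display_current_conn summary lines of a p2p log file.
count_active_peers() {
    grep 'display_current_conn' "$1" | grep -c 'active:'
}
```

Usage: count_active_peers p2p.log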
-s "104.236.51.238:1776" -s "176.221.43.130:33323" -s "45.55.6.216:1776" -s "114.92.254.159:62015" -s 104.156.226.183:60715 -s 104.156.226.183:40479 -s 104.236.255.53:52995 -s 176.9.234.167:34858 -s 176.9.234.167:57727 -s 178.62.88.151:59148 -s 178.62.88.151:41574 -s 188.226.252.109:58843 -s 45.115.36.171:57281 -s 45.55.6.216:37308
I'm at work and only have access to my VPS through my phone; the corporate VPN blocks port 22. This is what I've got:
2015-08-23T21:00:07 p2p:p2p_network_connect_loop display_current_conn ] Currently have 8 of [20/200] connections
2015-08-23T21:00:07 p2p:p2p_network_connect_loop display_current_conn ] my id is 9ba089ffa726097f30b96f85510d79f6e4c2a53af47
2015-08-23T21:00:07 p2p:p2p_network_connect_loop display_current_conn ] active: 178.62.88.151:59148 with 00bdfa3f396cf7
2015-08-23T21:00:07 p2p:p2p_network_connect_loop display_current_conn ] active: 178.62.88.151:41574 with 5d33f74a83e257
2015-08-23T21:00:07 p2p:p2p_network_connect_loop display_current_conn ] active: 176.9.234.167:34858 with 86a353604829f9
2015-08-23T21:00:07 p2p:p2p_network_connect_loop display_current_conn ] active: 104.156.226.183:60715 with 24f8ce11c3bd
2015-08-23T21:00:07 p2p:p2p_network_connect_loop display_current_conn ] active: 45.115.36.171:57281 with 256a70048ea155
2015-08-23T21:00:07 p2p:p2p_network_connect_loop display_current_conn ] active: 188.226.252.109:58843 with 7ea834eafbcd
2015-08-23T21:00:07 p2p:p2p_network_connect_loop display_current_conn ] active: 216.252.204.69:65026 with e8f88c156e162
2015-08-23T21:00:07 p2p:p2p_network_connect_loop display_current_conn ] active: 128.68.89.226:55445 with 8b9a48d3943899
2015-08-23T21:00:07 p2p:p2p_network_connect_loop connect_to_endpoint ] node_impl::connect_to_endpoint(178.62.91.161:2009)
2015-08-23T21:00:07 p2p:p2p_network_connect_loop connect_to_endpoint ] node_impl::connect_to_endpoint(46.226.109.66:1778)
2015-08-23T21:00:07 p2p:p2p_network_connect_loop connect_to_endpoint ] node_impl::connect_to_endpoint(5.189.131.201:2009)
2015-08-23T21:00:07 p2p:p2p_network_connect_loop connect_to_endpoint ] node_impl::connect_to_endpoint(46.226.15.26:40706)
2015-08-23T21:00:07 p2p:p2p_network_connect_loop connect_to_endpoint ] node_impl::connect_to_endpoint(46.226.12.230:2009)
2015-08-23T21:00:07 p2p:p2p_network_connect_loop connect_to_endpoint ] node_impl::connect_to_endpoint(185.25.22.21:2776)
2015-08-23T21:00:07 p2p:p2p_network_connect_loop connect_to_endpoint ] node_impl::connect_to_endpoint(69.172.229.183:2883)
2015-08-23T21:00:07 p2p:p2p_network_connect_loop connect_to_endpoint ] node_impl::connect_to_endpoint(66.172.11.223:1700)
OK, I've checked them. You have 8 connections; the first 6 are in my seed node list and the last 2 are incoming.
And apparently JuiceSSH saves the output all on one line. Scroll to the right, or quote it if that's easier.
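If the phone client really did flatten the records onto one '$'-separated line, splitting them back into one record per line is a one-liner (a sketch; it assumes '$' appears only as the record separator):

```shell
# Split a '$'-joined log capture back into one record per line.
split_records() {
    tr '$' '\n'
}

# usage: split_records < oneline.log
```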
That's pretty funny. The latter 'connect_to_endpoint' ones seem to be 'bitshares_client' but not 'graphene':
p2p.log.20150822T220000:2015-08-22T22:04:04 p2p:message read_loop disconnect_from_peer ] Disconnecting from 178.62.91.161:2009 for You are on a different chain from me node.cpp:4493
2015-08-24T00:49:39 p2p:message read_loop on_hello_message ] Received hello message from peer on a different chain: {"user_agent":"bitshares_client","core_protocol_version":106,"inbound_address":"104.255.221.100","inbound_port":1776,"outbound_port":1776,"node_public_key":"02d094583cd49e6c1c86d0bad51064c179885989aeedef46494719bc64261aca7c","signed_shared_secret":"20511eb5127270ca43e2324352c0feee15093f0c6aa0e0cf6dd7ca2b921cdef5664ccb314e6bc9b369dcf78d5532090604f4458d76471eef26175aa979e994eab9","chain_id":"bbf8cbb90532eb555f66602d3bf071609552f852cf9156d4253a33479b70a5e1","user_data":{"bitshares_git_revision_sha":"6bc091e807ba3904ee6793e58d7de3b4309ff4b2","bitshares_git_revision_unix_timestamp":1437477775,"fc_git_revision_sha":"fd4fc4f0cb21fc7b631ee2be827f6aea85e040d6","fc_git_revision_unix_timestamp":1425136084,"platform":"linux","bitness":64,"node_id":"1206c3bec35c9347110ac6ee161c493095f69655cb854dbf6b91b41bf8cc495eab","last_known_block_hash":"c9680afa36aa6061c23919d71a3a221576ecf722","last_known_block_number":1251689,"last_known_block_time":"2015-08-24T00:49:30"}} node.cpp:1835
2015-08-24T00:49:39 p2p:message read_loop disconnect_from_peer ] Disconnecting from 104.255.221.100:1776 for You are on a different chain from me node.cpp:4493
2015-08-24T00:49:39 p2p:message read_loop on_connection_reject ] Received a rejection from 104.255.221.100:1776 in response to my "hello", reason: "You're on a different chain than I am. I'm on bbf8cbb90532eb555f66602d3bf071609552f852cf9156d4253a33479b70a5e1 and you're on d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083" node.cpp:2017
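Those rejections can be triaged by tallying which chain_ids peers announce in their hello messages; anything other than your own chain id is a BTS 0.9.x or stale test-net node. A sketch over the log format shown above:

```shell
# Tally the chain_ids seen in hello messages recorded in a p2p log file.
tally_chain_ids() {
    grep -o '"chain_id":"[0-9a-f]*"' "$1" | sort | uniq -c
}
```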
Any idea why my node would not be handshaking if it only had 8 peers?
Don't know. Maybe a bug.
3508896ms th_a application.cpp:265 startup ] Detected unclean shutdown. Replaying blockchain...
It takes too much time to replay the blockchain. How do I "cleanly" shut down the witness node?
Question bump.
I use this strategy:
When I am synced, I shut down with Ctrl-C (it should shut down cleanly) and, if it shut down without errors, I copy the blockchain folder as a backup.
Whenever my blockchain is corrupted, I remove the blockchain folder, replace it with the backup, and restart the witness.
Finally, I back up the blockchain folder every day.
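spartako's routine can be sketched as two small shell functions (the directory names are assumptions; substitute the data directory you pass to witness_node with -d):

```shell
DATA_DIR="$HOME/test_net"          # witness_node -d directory (assumption)
BACKUP_DIR="$HOME/blockchain.bak"  # snapshot location (assumption)

# After a clean Ctrl-C shutdown with no errors, snapshot the blockchain folder.
backup_chain() {
    rm -rf "$BACKUP_DIR"
    cp -a "$DATA_DIR/blockchain" "$BACKUP_DIR"
}

# If the database is corrupted, restore the snapshot instead of replaying.
restore_chain() {
    rm -rf "$DATA_DIR/blockchain"
    cp -a "$BACKUP_DIR" "$DATA_DIR/blockchain"
}
```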
I have added a new Tips section to the wiki:
https://github.com/cryptonomex/graphene/wiki/How-to-setup-your-witness-for-test-net-(Ubuntu-14.04)
So if you have any tips that aren't general setup, you can add them there. I have just added spartako's from the previous post.
I have also added credits to list everyone :)
In the interest of not slipping the release date we are going to fall back to 3 or 5 second blocks using the current P2P code. Then after we update the P2P code we can increase the block rate to 2 and ultimately 1 second block times. Ben is in the process of preparing instructions for committee members on how to change the block interval and we plan to dynamically update the test network to prove that we can do this on a live network.
Sounds interesting.
Thanks, it works. +5%
I have another problem: when I restart witness_node, it usually takes a long time to connect to the network, and sometimes it won't connect at all.
77061ms th_a witness.cpp:243 block_production_loo ] Witness 1.6.1621 production slot has arrived; generating a block now...
./witness: line 5: 23013 Segmentation fault ./witness_node --resync-blockchain -d test_net --enable-stale-production
deletech@Jessie:~/bts2.0/aug20$ ./witness
1247707ms th_a main.cpp:112 main ] Error parsing logging config from config file /home/deletech/bts2.0/aug20/graphene/programs/witness_node/test_net/config.ini, using default config
1247707ms th_a witness.cpp:70 plugin_initialize ] key_id_to_wif_pair: ["GPH........................","5.................................."]
1247707ms th_a application.cpp:265 startup ] Detected unclean shutdown. Replaying blockchain...
1247707ms th_a application.cpp:228 operator() ] Initializing database...
1268250ms th_a db_management.cpp:67 wipe ] Wiping database
1268295ms th_a object_database.cpp:82 wipe ] Wiping object_database.
1283059ms th_a market_history_plugin.cpp:77 operator() ] processing {"fee":{"amount":0,"asset_id":"1.3.527"},"order_id":"1.7.3","account_id":"1.2.2994","pays":{"amount":10000000,"asset_id":"1.3.0"},"receives":{"amount":4000,"asset_id":"1.3.527"}}
1283059ms th_a market_history_plugin.cpp:123 operator() ] creating bucket {"id":"5.1.0","key":{"base":"1.3.0","quote":"1.3.527","seconds":15,"open":"2015-08-20T18:28:00"},"high_base":10000000,"high_quote":4000,"low_base":10000000,"low_quote":4000,"open_base":10000000,"open_quote":4000,"close_base":10000000,"close_quote":4000,"base_volume":10000000,"quote_volume":4000}
1283059ms th_a market_history_plugin.cpp:127 operator() ] before updating bucket {"id":"5.1.0","key":{"base":"1.3.0","quote":"1.3.527","seconds":15,"open":"2015-08-20T18:28:00"},"high_base":10000000,"high_quote":4000,"low_base":10000000,"low_quote":4000,"open_base":10000000,"open_quote":4000,"close_base":10000000,"close_quote":4000,"base_volume":10000000,"quote_volume":4000}
1283059ms th_a market_history_plugin.cpp:144 operator() ] after bucket bucket {"id":"5.1.0","key":{"base":"1.3.0","
1621853ms th_a main.cpp:112 main ] Error parsing logging config from config file /home/deletech/bts2.0/aug20/graphene/programs/witness_node/test_net/config.ini, using default config
1458062ms th_a witness.cpp:240 block_production_loo ] slot: 593448 scheduled_witness: 1.6.3 scheduled_time: 2015-08-24T17:24:18 now: 2015-08-24T17:24:18
1459048ms th_a witness.cpp:240 block_production_loo ] slot: 593449 scheduled_witness: 1.6.83 scheduled_time: 2015-08-24T17:24:19 now: 2015-08-24T17:24:19
1460043ms th_a witness.cpp:240 block_production_loo ] slot: 593450 scheduled_witness: 1.6.73 scheduled_time: 2015-08-24T17:24:20 now: 2015-08-24T17:24:20
1461044ms th_a witness.cpp:240 block_production_loo ] slot: 593451 scheduled_witness: 1.6.23 scheduled_time: 2015-08-24T17:24:21 now: 2015-08-24T17:24:21
1462044ms th_a witness.cpp:240 block_production_loo ] slot: 593452 scheduled_witness: 1.6.4 scheduled_time: 2015-08-24T17:24:22 now: 2015-08-24T17:24:22
1463052ms th_a witness.cpp:240 block_production_loo ] slot: 593453 scheduled_witness: 1.6.8 scheduled_time: 2015-08-24T17:24:23 now: 2015-08-24T17:24:23
1464049ms th_a witness.cpp:240 block_production_loo ] slot: 593454 scheduled_witness: 1.6.25 scheduled_time: 2015-08-24T17:24:24 now: 2015-08-24T17:24:24
1465055ms th_a witness.cpp:240 block_production_loo ] slot: 593455 scheduled_witness: 1.6.82 scheduled_time: 2015-08-24T17:24:25 now: 2015-08-24T17:24:25
1466072ms th_a witness.cpp:240 block_production_loo ] slot: 593456 scheduled_witness: 1.6.91 scheduled_time: 2015-08-24T17:24:26 now: 2015-08-24T17:24:26
1467044ms th_a witness.cpp:240 block_production_loo ] slot: 593457 scheduled_witness: 1.6.9 scheduled_time: 2015-08-24T17:24:27 now: 2015-08-24T17:24:27
1468045ms th_a witness.cpp:240 block_production_loo ] slot: 593458 scheduled_witness: 1.6.100 scheduled_time: 2015-08-24T17:24:28 now: 2015-08-24T17:24:28
1469042ms th_a witness.cpp:240 block_production_loo ] slot: 593459 scheduled_witness: 1.6.51 scheduled_time: 2015-08-24T17:24:29 now: 2015-08-24T17:24:29
Chain ID is d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083
--resync-blockchain -s "176.221.43.130:33323" -s "45.55.6.216:1776" -s "114.92.254.159:62015"
and deleting the blockchain and p2p folders.
iHashFury, how are you determining that there are no votes for witnesses in place? Look at my last couple of posts; I noticed I had a non-zero "total_votes" earlier, for example. I'm using
get_global_properties
in the cli_wallet and watching the output of witness_node in tmux (a screen alternative).
get_witness delegate.verbaltech
{
"id": "1.6.1621",
"witness_account": "1.2.22408",
"signing_key": "GPH52ms1dYJko2v5vS3rCdVLzQBogjeDRc1CpkaZ4seC4J4H7Uc71",
"next_secret_hash": "5c..........................................65",
"previous_secret": "0000000000000000000000000000000000000000",
"vote_id": "1:1621",
"total_votes": "4389187579",
"url": ""
}
info also works in the cli_wallet.
You're out of sync. Try adding more seed nodes and resyncing.
info
info
{
"head_block_num": 0,
"head_block_id": "0000000000000000000000000000000000000000",
"head_block_age": "7 days old",
"next_maintenance_time": "45 years ago",
"chain_id": "d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083",
"active_witnesses": [
"1.6.0",
"1.6.1",
"1.6.2",
"1.6.3",
"1.6.4",
"1.6.5",
"1.6.6",
"1.6.7",
"1.6.8",
"1.6.9",
"1.6.10",
"1.6.11",
"1.6.12",
"1.6.13",
"1.6.14",
"1.6.15",
"1.6.16",
"1.6.17",
"1.6.18",
"1.6.19",
"1.6.20",
"1.6.21",
"1.6.22",
"1.6.23",
"1.6.24",
"1.6.25",
"1.6.26",
"1.6.27",
"1.6.28",
"1.6.29",
"1.6.30",
"1.6.31",
"1.6.32",
"1.6.33",
"1.6.34",
"1.6.35",
"1.6.36",
"1.6.37",
"1.6.38",
"1.6.39",
"1.6.40",
"1.6.41",
"1.6.42",
"1.6.43",
"1.6.44",
"1.6.45",
"1.6.46",
"1.6.47",
"1.6.48",
"1.6.49",
"1.6.50",
"1.6.51",
"1.6.52",
"1.6.53",
"1.6.54",
"1.6.55",
"1.6.56",
"1.6.57",
"1.6.58",
"1.6.59",
"1.6.60",
"1.6.61",
"1.6.62",
"1.6.63",
"1.6.64",
"1.6.65",
"1.6.66",
"1.6.67",
"1.6.68",
"1.6.69",
"1.6.70",
"1.6.71",
"1.6.72",
"1.6.73",
"1.6.74",
"1.6.75",
"1.6.76",
"1.6.77",
"1.6.78",
"1.6.79",
"1.6.80",
"1.6.81",
"1.6.82",
"1.6.83",
"1.6.84",
"1.6.85",
"1.6.86",
"1.6.87",
"1.6.88",
"1.6.89",
"1.6.90",
"1.6.91",
"1.6.92",
"1.6.93",
"1.6.94",
"1.6.95",
"1.6.96",
"1.6.97",
"1.6.98",
"1.6.99",
"1.6.100"
],
"active_committee_members": [],
"entropy": "0000000000000000000000000000000000000000"
}
I'm going to restart after clearing both the blockchain & p2p folders. My command-line invocation is
./witness_node --resync-blockchain -d test_net --enable-stale-production
with all other parameters in config.ini. After restarting, the blockchain age reported by info started at 4 days and is now down to 57 hours. The witness output is much different than before I wiped the blockchain & p2p folders. It looks like I may finally be on the verge of producing blocks! I now see my total_votes is no longer zero.
Don't use the '--enable-stale-production' parameter; it's for the init node.
Is there a checkpoint I can use to resync faster, should I need to restart?
Try starting with more seed nodes:
-s "104.236.51.238:1776" -s "176.221.43.130:33323" -s "45.55.6.216:1776" -s "114.92.254.159:62015" -s 104.156.226.183:60715 -s 104.156.226.183:40479 -s 104.236.255.53:52995 -s 176.9.234.167:34858 -s 176.9.234.167:57727 -s 178.62.88.151:59148 -s 178.62.88.151:41574 -s 188.226.252.109:58843 -s 45.115.36.171:57281 -s 45.55.6.216:37308
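Before pasting seed nodes onto the command line, it can save a restart cycle to check which of them still accept TCP connections at all. A sketch; it relies on bash's /dev/tcp redirection and the coreutils timeout command, so it won't work in a plain POSIX sh:

```shell
# Report whether a HOST:PORT endpoint accepts a TCP connection within 3s.
check_seed() {
    host=${1%:*}
    port=${1#*:}
    if timeout 3 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null; then
        echo "$1 reachable"
    else
        echo "$1 unreachable"
    fi
}

# usage:
# for ep in 104.236.51.238:1776 45.55.6.216:1776; do check_seed "$ep"; done
```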
# P2P nodes to connect to on startup (may specify multiple times)
seed-node = 114.92.254.159:62015
seed-node = 104.200.28.117:61705
seed-node = 104.236.51.238:1776
seed-node = 176.221.43.130:33323
seed-node = 45.55.6.216:1776
seed-node = 45.55.6.216:1776
seed-node = 45.115.36.171:57281
seed-node = 45.55.6.216:37308
seed-node = 104.200.28.117:61705
seed-node = 104.236.51.238:1776
seed-node = 104.156.226.183:60715
seed-node = 104.156.226.183:40479
seed-node = 104.236.255.53:52995
seed-node = 114.92.254.159:62015
seed-node = 114.92.254.159:62015
seed-node = 176.221.43.130:33323
seed-node = 176.9.234.167:34858
seed-node = 176.9.234.167:57727
seed-node = 178.62.88.151:59148
seed-node = 178.62.88.151:41574
seed-node = 188.226.252.109:58843
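The seed-node list above contains several repeated entries; they're harmless but noisy. A small awk filter can drop duplicate seed-node lines while keeping the original order (a sketch; config.ini is the assumed filename):

```shell
# Print a config with duplicate seed-node lines removed, order preserved;
# all other lines pass through untouched.
dedupe_seed_nodes() {
    awk '!/^seed-node/ || !seen[$0]++' "$1"
}

# usage: dedupe_seed_nodes config.ini > config.deduped.ini
```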
It takes about 10 minutes to resync the current chain, but it also needs at least 10 minutes to replay after a crash. A backup is a good idea:
{
"head_block_num": 319236,
"head_block_id": "0004df042bd2fc6c4bf3168d7116de8d0e55842f",
"head_block_age": "1 second old",
}
Once you sync, make a backup of your blockchain; it will help you later on. See spartako's comment.
./witness: line 7: 32471 Segmentation fault ./witness_node --resync-blockchain -d test_net
Latest master should still work with the test net.
#Endpoint for P2P node to listen on
p2p-endpoint = 0.0.0.0:1776
# P2P nodes to connect to on startup (may specify multiple times)
seed-node = 45.55.6.216:1776
seed-node = 45.115.36.171:57281
seed-node = 45.55.6.216:37308
seed-node = 104.200.28.117:61705
seed-node = 104.236.51.238:1776
seed-node = 104.156.226.183:60715
seed-node = 104.156.226.183:40479
seed-node = 104.236.255.53:52995
seed-node = 114.92.254.159:62015
seed-node = 114.92.254.159:62015
seed-node = 176.221.43.130:33323
seed-node = 176.9.234.167:34858
seed-node = 176.9.234.167:57727
seed-node = 178.62.88.151:59148
seed-node = 178.62.88.151:41574
seed-node = 188.226.252.109:58843
# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.
# checkpoint =
# Endpoint for websocket RPC to listen on
rpc-endpoint = 127.0.0.1:8090
# Endpoint for TLS websocket RPC to listen on
# rpc-tls-endpoint =
# The TLS certificate file for this server
# server-pem =
# Password for this certificate
# server-pem-password =
# File to read Genesis State from
#genesis-json = aug-14-test-genesis.json
#genesis-json = aug-19-puppies-test-genesis.json
genesis-json = aug-20-test-genesis.json
# JSON file specifying API permissions
# api-access =
# Enable block production, even if the chain is stale.
enable-stale-production = true
# Percent of witnesses (0-99) that must be participating in order to produce blocks
required-participation = false
# Allow block production, even if the last block was produced by the same witness.
allow-consecutive = false
# ID of witness controlled by this node (e.g. "1.6.0", quotes are required, may specify multiple times)
#witness-id = "1.6.1530"
witness-id = "1.6.1621"
# Tuple of [PublicKey, WIF private key] (may specify multiple times)
# delegate.verbaltech
private-key = ["GPH<public signing key here>","<private signing key value here>"]
# Account ID to track history for (may specify multiple times)
# track-account =
# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
bucket-size = [15,60,300,3600,86400]
# How far back in time to track history for each bucket size, measured in the number of buckets (default: 1000)
history-per-size = 1000
# declare an appender named "stderr" that writes messages to the console
[log.console_appender.stderr]
stream=std_error
# declare an appender named "p2p" that writes messages to p2p.log
[log.file_appender.p2p]
filename=logs/p2p/p2p.log
# filename can be absolute or relative to this config file
# route any messages logged to the default logger to the "stderr" logger we
# declared above, if they are info level or higher
[logger.default]
level=info
appenders=stderr
# route messages sent to the "p2p" logger to the p2p appender declared above
[logger.p2p]
level=debug
appenders=p2p
/usr/include/c++/4.8/bits/shared_ptr.h:614:42: required from ‘std::shared_ptr<_Tp1> std::make_shared(_Args&& ...) [with _Tp = graphene::p2p::peer_connection; _Args = {std::shared_ptr<graphene::p2p::node>&}]’
/home/user/src/graphene8.24/graphene/libraries/p2p/node.cpp:36:67: required from here
/usr/include/c++/4.8/ext/new_allocator.h:120:4: error: no matching function for call to ‘graphene::p2p::peer_connection::peer_connection(std::shared_ptr<graphene::p2p::node>&)’
{ ::new((void *)__p) _Up(std::forward<_Args>(__args)...); }
^
/usr/include/c++/4.8/ext/new_allocator.h:120:4: note: candidate is:
In file included from /home/user/src/graphene8.24/graphene/libraries/p2p/include/graphene/p2p/node.hpp:5:0,
from /home/user/src/graphene8.24/graphene/libraries/p2p/node.cpp:1:
/home/user/src/graphene8.24/graphene/libraries/p2p/include/graphene/p2p/peer_connection.hpp:55:9: note: graphene::p2p::peer_connection::peer_connection()
class peer_connection : public message_oriented_connection_delegate,
^
/home/user/src/graphene8.24/graphene/libraries/p2p/include/graphene/p2p/peer_connection.hpp:55:9: note: candidate expects 0 arguments, 1 provided
make[2]: *** [libraries/p2p/CMakeFiles/graphene_p2p.dir/node.cpp.o] Error 1
make[1]: *** [libraries/p2p/CMakeFiles/graphene_p2p.dir/all] Error 2
make: *** [all] Error 2
user@user-desktop:~/src/graphene8.24/graphene$
[ 85%] Building CXX object libraries/p2p/CMakeFiles/graphene_p2p.dir/node.cpp.o
BM's commit seems to have broken the build of the tests. However, if you just do a make witness_node that should work. (Or you can use my automatically generated Docker build that was pushed 10 minutes after the commit :)
https://hub.docker.com/r/sile16/graphene-witness/
I'm now pushing each commit as a separate tag.
Thanks!
That has always allowed me to restart, but now I can't without it crashing. Here's the output:
Code: [Select]
./witness: line 7: 32471 Segmentation fault ./witness_node --resync-blockchain -d test_net
$ gdb
...
(gdb) file ./witness_node
...
(gdb) set args --resync-blockchain -d test_net
...
(gdb) run
...
(gdb) signal SIGINT
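The same session can be driven non-interactively with gdb's batch mode, which is handy for capturing a backtrace from a crash that only reproduces after a long resync. A sketch assuming the paths from the post above; it prints the command rather than running it, since the binary location is machine-specific:

```shell
# Batch-mode equivalent of the interactive gdb session above:
# run the binary under gdb and print a backtrace when it faults.
GDB_CMD='gdb --batch -ex run -ex bt --args ./witness_node --resync-blockchain -d test_net'
# Printed rather than executed here, since ./witness_node is machine-specific.
echo "$GDB_CMD"
```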
In the interest of not slipping the release date we are going to fall back to 3 or 5 second blocks using the current P2P code. Then after we update the P2P code we can increase the block rate to 2 and ultimately 1 second block times.
Ben is in the process of preparing instructions for committee members on how to change the block interval and we plan to dynamically update the test network to prove that we can do this on a live network.
#as user
BOOST_ROOT=$HOME/tmp/boost_1_57_0
git clone https://github.com/cryptonomex/graphene.git
cd graphene
git pull
#git checkout test1
git checkout master
git submodule update --init --recursive
#make clean
#cmake -DCMAKE_BUILD_TYPE=Debug .
cmake -DBOOST_ROOT=$HOME/tmp/boost_1_57_0 .
make
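One pitfall in scripts like the above: BOOST_ROOT must reach cmake as `-DBOOST_ROOT=...` (or as an exported environment variable); a bare `cmake BOOST_ROOT=... .` is silently ignored as a variable setting. A quick check that the headers are where you expect, assuming the same path:

```shell
# Verify the Boost headers exist where the build script expects them.
BOOST_ROOT="$HOME/tmp/boost_1_57_0"
if [ -f "$BOOST_ROOT/boost/version.hpp" ]; then
  grep '#define BOOST_LIB_VERSION' "$BOOST_ROOT/boost/version.hpp"
else
  echo "Boost not found at $BOOST_ROOT"
fi
```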
In the interest of not slipping the release date we are going to fall back to 3 or 5 second blocks using the current P2P code. Then after we update the P2P code we can increase the block rate to 2 and ultimately 1 second block times.
Ben is in the process of preparing instructions for committee members on how to change the block interval and we plan to dynamically update the test network to prove that we can do this on a live network.
+5% +5% step by step
Thanks. Running with latest commit now.
We should still use 10-second blocks first and release a user-friendly wallet,
and then we can begin the marketing work.
Then we try 5 seconds, then 3, then 2, then 1.
Every time we make an improvement we should treat it as a big thing; it's a chance to push the marketing.
In fact we have made so many great things,
but we give them to the public too easily,
and people don't cherish what they get too easily.
Me too .. +1
I support this idea +5%
maybe I +5%
+5%
I don't know if this is the right place to ask, but since we're talking about nodes: will BitShares 2.0 allow (even if later, through a worker proposal) nodes with limited storage space to store only parts or slices of the blockchain history? Imagine storing only the latest X blocks. That could let more people with limited storage join in, I guess. Or would this only contribute to fewer nodes hosting the full blockchain, since people commonly think "someone else will do it"?
Hmm, the latest build seems much more stable than before. BTW, is the flood_network command disabled? I got a segmentation fault error.
Can't build now; it seems the new p2p source code can't work unless you add this line to CMakeLists.txt:
+add_subdirectory( p2p )
try to checkout aeebb1be099fd325f014f4f35aa9e90bf2431839
make witness_node cli_wallet
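If the line is indeed missing from your tree, it can be appended idempotently; a sketch on a scratch file (point it at the real top-level CMakeLists.txt in your checkout, then re-run cmake):

```shell
cd "$(mktemp -d)"
printf 'add_subdirectory( libraries )\n' > CMakeLists.txt   # stand-in for the real file
# Append the p2p subdirectory only if it is not already listed.
grep -q 'add_subdirectory( p2p )' CMakeLists.txt || \
  echo 'add_subdirectory( p2p )' >> CMakeLists.txt
cat CMakeLists.txt
```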
Not so stable for me... I still encounter segmentation faults frequently, like https://github.com/cryptonomex/graphene/issues/261. Perhaps because of my unstable network connection?
In the interest of not slipping the release date we are going to fall back to 3 or 5 second blocks using the current P2P code. Then after we update the P2P code we can increase the block rate to 2 and ultimately 1 second block times.
Ben is in the process of preparing instructions for committee members on how to change the block interval and we plan to dynamically update the test network to prove that we can do this on a live network.
+5% +5% step by step
Several times my witness node got stuck at some block.
I watched the p2p log: it received block 419543, then 419544, then missed 419545, then received 419546, ...
and it never receives block 419545; I have to restart the witness node, and then it can request the block from another node.
I think we need logic so that when a witness node has missed a block for more than 10 blocks, it requests it from another node.
I just killed the chain. Will be back with a new test network later this week.
You certainly need to think about your wording .. unless you want people to believe you actually CAN kill the blockchain.
Get this error (pretty sure that means the key has no balance):
Code: [Select]
10 assert_exception: Assert Exception
operations.size() > 0: A transaction must have at least one operation
{"trx":{"ref_block_num":0,"ref_block_prefix":0,"expiration":"1970-01-01T00:00:00","operations":[],"extensions":[]}}
th_a transaction.cpp:51 validate
{"name_or_id":"the-ae"}
th_a wallet.cpp:2762 import_balance
when I try to import balance...
unlocked >>> import_balance nathan [5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3] true
import_balance nathan [5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3] true
1417187ms th_a wallet.cpp:2721 import_balance ] balances: []
10 assert_exception: Assert Exception
operations.size() > 0: A transaction must have at least one operation
{"trx":{"ref_block_num":0,"ref_block_prefix":0,"expiration":"1970-01-01T00:00:00","operations":[],"extensions":[]}}
th_a transaction.cpp:51 validate
{"name_or_id":"nathan"}
th_a wallet.cpp:2762 import_balance
unlocked >>>
Can't wait. Where is the new code?
Eric checked in some fixes that may address this particular problem. We do have logic to request from another node, but the timeouts were still tuned to BTS 1 timeframes so we reduced the timeout for BTS 2.
I assume you just switched off your initial witness nodes ..
Just to follow up my ask in the Mumble - please consider getting this/a test net up again whilst one for the GUI is being worked on. I have been using it to test the cli_wallet API with the 2.0 version of my bitshares Ruby Gem & can't proceed until it is back up :-\
Is there a reason you need an actual network up? Couldn't you just run a local instance and test that way?
Thx & have a good weekend all
The test network has been delayed because we have a random crash in block production that we just now identified the cause of.
It took me 1M blocks to reproduce the issue on my local test net!
Thanks puppies, will see how far I can get with that & leave 'flood_network' & other network commands until there is one.
The VPS I'm using for testing graphene is just sitting around right now. If you need peers to join, let me know.
Yeah, same here.
Somewhat disappointed, but it's understandable. Can you give an ETA?
What is the nature of your disappointment? I don't understand.
Are you running a GUI node too?
I'm not running anything right now. I could run a GUI node, but you would not be able to sign up for new accounts with it. You would have to import.
fatal: reference is not a tree: 80d967a70d21d26d27ef3a1544a177925b2a7bbe
Unable to checkout '80d967a70d21d26d27ef3a1544a177925b2a7bbe' in submodule path 'libraries/fc'
CMake Error at CMakeLists.txt:36 (include):
include could not find load file:
GetGitRevisionDescription
CMake Error at CMakeLists.txt:37 (get_git_head_revision):
Unknown CMake command "get_git_head_revision".
https://github.com/cryptonomex/graphene/releases/tag/test2
How can I update the fc submodule with 71be796af50c407281a40e61e4199a87e0a19314? Please explain for a dummy.
git pull
git checkout test2
git submodule update --init --recursive
cd libraries
rm -r fc
git clone https://github.com/cryptonomex/fc.git
cd fc
git submodule update --init --recursive
cd ../..
cmake .
make
This is how I did it. It is still building, so I don't know if it worked yet.
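If the failure is just a submodule pointer that was pushed late (as turned out to be the case below), re-cloning fc is heavier than needed; re-syncing the submodule usually suffices. A sketch, run here in a scratch repo since the commands are safe in any checkout:

```shell
cd "$(mktemp -d)"
git init -q demo && cd demo
# In a real graphene checkout, these pick up the corrected submodule pointer
# once it has actually been pushed upstream.
git submodule sync --recursive
git submodule update --init --recursive
echo "submodules synced"
```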
/home/user/src/graphene/libraries/plugins/witness/witness.cpp:229:10: note: in expansion of macro ‘elog’
elog("Not producing block because node appears to be on a minority fork with only ${pct}% witness participation", (capture) );
^
make[2]: *** [libraries/plugins/witness/CMakeFiles/graphene_witness.dir/witness.cpp.o] Error 1
make[1]: *** [libraries/plugins/witness/CMakeFiles/graphene_witness.dir/all] Error 2
make: *** [all] Error 2
user@user-desktop:~/src/graphene$
Build failed for me. Please let me know if you find a way around it.
Same error occurred. Hopefully devs will fix it ASAP.
libraries/plugins/witness/witness.cpp
line 214: ilog("Generated block #${n} with timestamp ${t} at time ${c}", ("n",capture)("t",capture)("c",capture));
line 226: ilog("Not producing block because I don't have the private key for ${scheduled_key}", ("scheduled_key",capture) );
line 229: elog("Not producing block because node appears to be on a minority fork with only ${pct}% witness participation", ("pct",capture) );
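The fixes above are mechanical: each `${...}` substitution needs its name supplied alongside capture. A hypothetical sed one-liner for the elog case, demonstrated on a scratch copy rather than the real witness.cpp:

```shell
f=$(mktemp)
# Reproduce the broken line, then insert the missing "pct" substitution name.
echo 'elog("... only ${pct}% witness participation", (capture) );' > "$f"
sed -i 's/participation", (capture)/participation", ("pct",capture)/' "$f"
cat "$f"
```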
Still have the same problem with the test2 tag.
I'm up with 'test2' code and aug-31 genesis.
witness id 1.6.5247, node address 114.92.254.159:62015
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-31-testnet-genesis.json -d test_net_2 -s "104.236.118.105:1776"
...
"1.6.100",
"1.6.5247"
Please check this issue https://github.com/cryptonomex/graphene/issues/264
My intuition tells me it is not fixed completely. The "_max_size" matters.
In the test2b tag I have increased the allowance for out-of-order blocks. If it still happens then something else is going on.
Anyone else getting this when trying to update the submodule?
Code: [Select]
fatal: reference is not a tree: 80d967a70d21d26d27ef3a1544a177925b2a7bbe
Unable to checkout '80d967a70d21d26d27ef3a1544a177925b2a7bbe' in submodule path 'libraries/fc'
Yes, the test2b tag works fine.
Sorry about that, I goofed -- I forgot to push a commit to fc. Try again, it should be good now.
300404ms th_a application.cpp:356 handle_block ] Got block #15325 from network
300507ms th_a api.cpp:867 on_objects_changed ] ids: ["2.8.15325","2.3.0","2.4.32","2.4.31","2.4.30","2.4.29","2.4.28","2.4.27","2.4.26","2.4.25","2.4.24","2.4.23","2.4.22","1.6.91","1.14.51","2.4.15","1.6.90","1.14.50","2.4.14","1.6.89","1.14.49","2.4.13","1.6.88","1.14.48","2.4.12","1.6.87","1.14.47","2.4.11","1.6.86","1.14.46","2.4.10","1.6.85","1.14.45","2.4.9","1.6.84","1.14.44","2.4.8","1.6.82","1.14.42","2.4.6","1.6.80","1.14.40","2.4.4","1.6.79","1.14.39","2.4.3","1.6.78","1.14.38","1.6.77","1.14.37","1.6.76","1.14.36","1.6.75","1.14.35","1.6.74","1.14.34","1.6.73","1.6.5247","1.14.33","1.6.71","1.14.31","1.6.70","1.14.30","1.6.69","1.14.29","1.6.68","1.14.28","1.6.67","1.14.27","1.6.66","1.14.26","1.6.65","1.14.25","1.6.64","1.14.24","1.6.63","1.14.23","1.6.62","1.14.22","1.6.61","2.1.0","1.14.21","1.6.60","1.14.20","1.6.59","1.14.19","1.6.58","1.14.18","1.6.56","1.14.16","1.6.54","1.14.14","1.6.53","1.14.13","1.6.52","1.14.12","1.6.51","1.14.11","1.6.50","1.14.10","1.6.49","1.14.9","1.6.48","1.14.8","1.6.47","1.14.7","1.6.46","1.14.6","1.6.45","1.6.44","1.14.4","1.6.43","1.14.3","1.6.42","1.14.2","1.6.41","1.14.1","1.6.16","1.6.15","1.6.13","1.6.12","1.6.11","1.6.10","1.6.8","1.6.55","1.14.15","1.6.81","1.14.41","2.4.5","1.14.5","1.13.74","1.6.7","1.6.6","1.6.5","2.4.2","1.5.10","1.6.14","1.6.57","1.14.17","1.6.17","1.6.83","1.14.43","2.4.7","1.6.1538","1.6.34","2.0.0","1.6.72","1.14.32","1.6.1527","1.6.23","1.6.1","1.5.6","1.6.2","1.5.7","1.6.9","1.6.30","1.6.3","2.4.0","1.5.8","1.6.4","2.4.1","1.5.9","1.6.18","1.6.19","1.6.20","1.6.21","1.6.22","1.6.24","1.6.25","1.6.26","1.6.27","1.6.28","1.6.29","1.6.31","1.6.32","1.6.33","1.6.35","1.6.36","1.6.37","1.6.38","1.6.39","1.6.40","1.14.0","1.6.92","1.14.52","2.4.16","1.6.93","1.14.53","2.4.17","1.6.94","1.14.54","2.4.18","1.6.95","1.14.55","2.4.19","1.6.96","1.14.56","2.4.20","1.6.97","1.14.57","2.4.21","1.6.98","1.2.1","1.5.0","1.5.1","1.5.2","1.5.3","1.5.4","1.5.5","1.2.0","1.2.2"]
300508ms th_a api.cpp:867 on_objects_changed ] ids: ["2.8.15325","2.3.0","2.4.32","2.4.31","2.4.30","2.4.29","2.4.28","2.4.27","2.4.26","2.4.25","2.4.24","2.4.23","2.4.22","1.6.91","1.14.51","2.4.15","1.6.90","1.14.50","2.4.14","1.6.89","1.14.49","2.4.13","1.6.88","1.14.48","2.4.12","1.6.87","1.14.47","2.4.11","1.6.86","1.14.46","2.4.10","1.6.85","1.14.45","2.4.9","1.6.84","1.14.44","2.4.8","1.6.82","1.14.42","2.4.6","1.6.80","1.14.40","2.4.4","1.6.79","1.14.39","2.4.3","1.6.78","1.14.38","1.6.77","1.14.37","1.6.76","1.14.36","1.6.75","1.14.35","1.6.74","1.14.34","1.6.73","1.6.5247","1.14.33","1.6.71","1.14.31","1.6.70","1.14.30","1.6.69","1.14.29","1.6.68","1.14.28","1.6.67","1.14.27","1.6.66","1.14.26","1.6.65","1.14.25","1.6.64","1.14.24","1.6.63","1.14.23","1.6.62","1.14.22","1.6.61","2.1.0","1.14.21","1.6.60","1.14.20","1.6.59","1.14.19","1.6.58","1.14.18","1.6.56","1.14.16","1.6.54","1.14.14","1.6.53","1.14.13","1.6.52","1.14.12","1.6.51","1.14.11","1.6.50","1.14.10","1.6.49","1.14.9","1.6.48","1.14.8","1.6.47","1.14.7","1.6.46","1.14.6","1.6.45","1.6.44","1.14.4","1.6.43","1.14.3","1.6.42","1.14.2","1.6.41","1.14.1","1.6.16","1.6.15","1.6.13","1.6.12","1.6.11","1.6.10","1.6.8","1.6.55","1.14.15","1.6.81","1.14.41","2.4.5","1.14.5","1.13.74","1.6.7","1.6.6","1.6.5","2.4.2","1.5.10","1.6.14","1.6.57","1.14.17","1.6.17","1.6.83","1.14.43","2.4.7","1.6.1538","1.6.34","2.0.0","1.6.72","1.14.32","1.6.1527","1.6.23","1.6.1","1.5.6","1.6.2","1.5.7","1.6.9","1.6.30","1.6.3","2.4.0","1.5.8","1.6.4","2.4.1","1.5.9","1.6.18","1.6.19","1.6.20","1.6.21","1.6.22","1.6.24","1.6.25","1.6.26","1.6.27","1.6.28","1.6.29","1.6.31","1.6.32","1.6.33","1.6.35","1.6.36","1.6.37","1.6.38","1.6.39","1.6.40","1.14.0","1.6.92","1.14.52","2.4.16","1.6.93","1.14.53","2.4.17","1.6.94","1.14.54","2.4.18","1.6.95","1.14.55","2.4.19","1.6.96","1.14.56","2.4.20","1.6.97","1.14.57","2.4.21","1.6.98","1.2.1","1.5.0","1.5.1","1.5.2","1.5.3","1.5.4","1.5.5","1.2.0","1.2.2"]
301000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
301000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134300997030 next_second: 2015-09-01T19:05:02
302000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
302000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134301997280 next_second: 2015-09-01T19:05:03
303000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
303000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134302997131 next_second: 2015-09-01T19:05:04
304000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
304000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134303997090 next_second: 2015-09-01T19:05:05
305000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
305000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134304997093 next_second: 2015-09-01T19:05:06
306000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
306000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134305997265 next_second: 2015-09-01T19:05:07
307000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
307000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134306997192 next_second: 2015-09-01T19:05:08
308000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
308000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134307997036 next_second: 2015-09-01T19:05:09
309000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
309000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134308997313 next_second: 2015-09-01T19:05:10
310000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
310000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134309997066 next_second: 2015-09-01T19:05:11
311000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
311000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134310997084 next_second: 2015-09-01T19:05:12
312000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
312000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134311997095 next_second: 2015-09-01T19:05:13
313000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
313000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134312997001 next_second: 2015-09-01T19:05:14
314000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
314000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134313997077 next_second: 2015-09-01T19:05:15
315000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
315000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134314997092 next_second: 2015-09-01T19:05:16
316000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
316000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134315997044 next_second: 2015-09-01T19:05:17
317000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
317000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134316997130 next_second: 2015-09-01T19:05:18
318000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
318000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134317997088 next_second: 2015-09-01T19:05:19
319000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
319000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134318997059 next_second: 2015-09-01T19:05:20
320000ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
320000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134319997193 next_second: 2015-09-01T19:05:21
320276ms th_a application.cpp:356 handle_block ] Got block #15326 from network
320277ms th_a api.cpp:867 on_objects_changed ] ids: ["2.8.15326","1.6.20","1.13.127","2.1.0"]
320277ms th_a api.cpp:867 on_objects_changed ] ids: ["2.8.15326","1.6.20","1.13.127","2.1.0"]
Why does the following happen? Sometimes it takes about 20 seconds to sync the block.
I think that's the maintenance period.
Code: [Select]
300404ms th_a application.cpp:356 handle_block ] Got block #15325 from network
300507ms th_a api.cpp:867 on_objects_changed ] ids: ["2.8.15325","2.3.0","2.4.32","2.4.31","2.4.30","2.4.29","2.4.28","2.4.27","2.4.26","2.4.25","2.4.24","2.4.23","2.4.22","1.6.91","1.14.51","2.4.15","1.6.90","1.14.50","2.4.14","1.6.89","1.14.49","2.4.13","1.6.88","1.14.48","2.4.12","1.6.87","1.14.47","2.4.11","1.6.86","1.14.46","2.4.10","1.6.85","1.14.45","2.4.9","1.6.84","1.14.44","2.4.8","1.6.82","1.14.42","2.4.6","1.6.80","1.14.40","2.4.4","1.6.79","1.14.39","2.4.3","1.6.78","1.14.38","1.6.77","1.14.37","1.6.76","1.14.36","1.6.75","1.14.35","1.6.74","1.14.34","1.6.73","1.6.5247","1.14.33","1.6.71","1.14.31","1.6.70","1.14.30","1.6.69","1.14.29","1.6.68","1.14.28","1.6.67","1.14.27","1.6.66","1.14.26","1.6.65","1.14.25","1.6.64","1.14.24","1.6.63","1.14.23","1.6.62","1.14.22","1.6.61","2.1.0","1.14.21","1.6.60","1.14.20","1.6.59","1.14.19","1.6.58","1.14.18","1.6.56","1.14.16","1.6.54","1.14.14","1.6.53","1.14.13","1.6.52","1.14.12","1.6.51","1.14.11","1.6.50","1.14.10","1.6.49","1.14.9","1.6.48","1.14.8","1.6.47","1.14.7","1.6.46","1.14.6","1.6.45","1.6.44","1.14.4","1.6.43","1.14.3","1.6.42","1.14.2","1.6.41","1.14.1","1.6.16","1.6.15","1.6.13","1.6.12","1.6.11","1.6.10","1.6.8","1.6.55","1.14.15","1.6.81","1.14.41","2.4.5","1.14.5","1.13.74","1.6.7","1.6.6","1.6.5","2.4.2","1.5.10","1.6.14","1.6.57","1.14.17","1.6.17","1.6.83","1.14.43","2.4.7","1.6.1538","1.6.34","2.0.0","1.6.72","1.14.32","1.6.1527","1.6.23","1.6.1","1.5.6","1.6.2","1.5.7","1.6.9","1.6.30","1.6.3","2.4.0","1.5.8","1.6.4","2.4.1","1.5.9","1.6.18","1.6.19","1.6.20","1.6.21","1.6.22","1.6.24","1.6.25","1.6.26","1.6.27","1.6.28","1.6.29","1.6.31","1.6.32","1.6.33","1.6.35","1.6.36","1.6.37","1.6.38","1.6.39","1.6.40","1.14.0","1.6.92","1.14.52","2.4.16","1.6.93","1.14.53","2.4.17","1.6.94","1.14.54","2.4.18","1.6.95","1.14.55","2.4.19","1.6.96","1.14.56","2.4.20","1.6.97","1.14.57","2.4.21","1.6.98","1.2.1","1.5.0","1.5.1","1.5.2","1.5.3","1.5.4","1.5.5","1.2.0","1.2.2"]
300508ms th_a api.cpp:867 on_objects_changed ] ids: ["2.8.15325","2.3.0","2.4.32","2.4.31","2.4.30","2.4.29","2.4.28","2.4.27","2.4.26","2.4.25","2.4.24","2.4.23","2.4.22","1.6.91","1.14.51","2.4.15","1.6.90","1.14.50","2.4.14","1.6.89","1.14.49","2.4.13","1.6.88","1.14.48","2.4.12","1.6.87","1.14.47","2.4.11","1.6.86","1.14.46","2.4.10","1.6.85","1.14.45","2.4.9","1.6.84","1.14.44","2.4.8","1.6.82","1.14.42","2.4.6","1.6.80","1.14.40","2.4.4","1.6.79","1.14.39","2.4.3","1.6.78","1.14.38","1.6.77","1.14.37","1.6.76","1.14.36","1.6.75","1.14.35","1.6.74","1.14.34","1.6.73","1.6.5247","1.14.33","1.6.71","1.14.31","1.6.70","1.14.30","1.6.69","1.14.29","1.6.68","1.14.28","1.6.67","1.14.27","1.6.66","1.14.26","1.6.65","1.14.25","1.6.64","1.14.24","1.6.63","1.14.23","1.6.62","1.14.22","1.6.61","2.1.0","1.14.21","1.6.60","1.14.20","1.6.59","1.14.19","1.6.58","1.14.18","1.6.56","1.14.16","1.6.54","1.14.14","1.6.53","1.14.13","1.6.52","1.14.12","1.6.51","1.14.11","1.6.50","1.14.10","1.6.49","1.14.9","1.6.48","1.14.8","1.6.47","1.14.7","1.6.46","1.14.6","1.6.45","1.6.44","1.14.4","1.6.43","1.14.3","1.6.42","1.14.2","1.6.41","1.14.1","1.6.16","1.6.15","1.6.13","1.6.12","1.6.11","1.6.10","1.6.8","1.6.55","1.14.15","1.6.81","1.14.41","2.4.5","1.14.5","1.13.74","1.6.7","1.6.6","1.6.5","2.4.2","1.5.10","1.6.14","1.6.57","1.14.17","1.6.17","1.6.83","1.14.43","2.4.7","1.6.1538","1.6.34","2.0.0","1.6.72","1.14.32","1.6.1527","1.6.23","1.6.1","1.5.6","1.6.2","1.5.7","1.6.9","1.6.30","1.6.3","2.4.0","1.5.8","1.6.4","2.4.1","1.5.9","1.6.18","1.6.19","1.6.20","1.6.21","1.6.22","1.6.24","1.6.25","1.6.26","1.6.27","1.6.28","1.6.29","1.6.31","1.6.32","1.6.33","1.6.35","1.6.36","1.6.37","1.6.38","1.6.39","1.6.40","1.14.0","1.6.92","1.14.52","2.4.16","1.6.93","1.14.53","2.4.17","1.6.94","1.14.54","2.4.18","1.6.95","1.14.55","2.4.19","1.6.96","1.14.56","2.4.20","1.6.97","1.14.57","2.4.21","1.6.98","1.2.1","1.5.0","1.5.1","1.5.2","1.5.3","1.5.4","1.5.5","1.2.0","1.2.2"]
301000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
301000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134300997030 next_second: 2015-09-01T19:05:02
302000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
302000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134301997280 next_second: 2015-09-01T19:05:03
303000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
303000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134302997131 next_second: 2015-09-01T19:05:04
304000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
304000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134303997090 next_second: 2015-09-01T19:05:05
305000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
305000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134304997093 next_second: 2015-09-01T19:05:06
306000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
306000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134305997265 next_second: 2015-09-01T19:05:07
307000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
307000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134306997192 next_second: 2015-09-01T19:05:08
308000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
308000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134307997036 next_second: 2015-09-01T19:05:09
309000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
309000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134308997313 next_second: 2015-09-01T19:05:10
310000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
310000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134309997066 next_second: 2015-09-01T19:05:11
311000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
311000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134310997084 next_second: 2015-09-01T19:05:12
312000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
312000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134311997095 next_second: 2015-09-01T19:05:13
313000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
313000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134312997001 next_second: 2015-09-01T19:05:14
314000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
314000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134313997077 next_second: 2015-09-01T19:05:15
315000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
315000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134314997092 next_second: 2015-09-01T19:05:16
316000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
316000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134315997044 next_second: 2015-09-01T19:05:17
317000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
317000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134316997130 next_second: 2015-09-01T19:05:18
318000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
318000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134317997088 next_second: 2015-09-01T19:05:19
319000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
319000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134318997059 next_second: 2015-09-01T19:05:20
320000ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
320000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441134319997193 next_second: 2015-09-01T19:05:21
320276ms th_a application.cpp:356 handle_block ] Got block #15326 from network
320277ms th_a api.cpp:867 on_objects_changed ] ids: ["2.8.15326","1.6.20","1.13.127","2.1.0"]
320277ms th_a api.cpp:867 on_objects_changed ] ids: ["2.8.15326","1.6.20","1.13.127","2.1.0"]
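The alternating pairs of log lines above come from the witness plugin's production loop: it wakes at each whole second ("next_second" in the log) and then logs one of the skip reasons. A rough sketch of that decision, with illustrative names rather than the actual Graphene code:

```python
import math

BLOCK_INTERVAL = 5  # seconds on this testnet

def next_wakeup_second(now_us):
    # The loop sleeps until the next whole second ("next_second" in the log).
    return math.floor(now_us / 1_000_000) + 1

def production_decision(now_s, slot_time_s, scheduled_witness, my_witnesses):
    # The two skip reasons that appear in the log above.
    if now_s < slot_time_s:
        return "slot has not yet arrived"
    if scheduled_witness not in my_witnesses:
        return "it isn't my turn"
    return "produce"
```

For example, the log's `now.time_since_epoch().count(): 1441134300997030` schedules the next wakeup at second 1441134301.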
"Sure it is."
"Why does the following happen? Sometimes it takes about 20 seconds to sync the block."
"I think that's the maintenance period."
I agree. I'm guessing they just haven't updated it yet.
But I don't think it needs so much time (3 block intervals) for maintenance when the block interval is set to 5 s.
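Hedged arithmetic for the ~20 second pause: Graphene skips a fixed number of block slots after a maintenance block (`maintenance_skip_slots`; assumed here to be 3, matching the "3 block intervals" observed), so the gap from the maintenance block to the next produced block would be:

```python
def maintenance_gap_seconds(block_interval_s, maintenance_skip_slots):
    # Gap from the maintenance block to the next produced block:
    # the skipped slots plus the normal interval to the following slot.
    return block_interval_s * (maintenance_skip_slots + 1)

print(maintenance_gap_seconds(5, 3))  # 20
```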
I think I have missed something - getting this error:
1053509ms th_a witness.cpp:156 plugin_startup ] Unable to find key for witness 1.6.1595. Removing it from my witnesses.
1053509ms th_a witness.cpp:171 plugin_startup ] No witnesses configured! Please add witness IDs and private keys to configuration.
1053509ms th_a witness.cpp:172 plugin_startup ] witness plugin: plugin_startup() end
1053509ms th_a main.cpp:165 main ] Started witness node on a chain with 16114 blocks.
1053509ms th_a main.cpp:166 main ] Chain ID is ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3
and:
922790ms th_a witness.cpp:156 plugin_startup ] Unable to find key for witness 1.6.2561. Removing it from my witnesses.
922790ms th_a witness.cpp:171 plugin_startup ] No witnesses configured! Please add witness IDs and private keys to configuration.
922790ms th_a witness.cpp:172 plugin_startup ] witness plugin: plugin_startup() end
922790ms th_a main.cpp:165 main ] Started witness node on a chain with 16114 blocks.
922790ms th_a main.cpp:166 main ] Chain ID is ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3
IDs and private keys are the same as the last test net.
Does the wallet have to be on the server if I use config.ini?
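For reference, the witness plugin reads its block-signing keys straight from the node's configuration, so the cli_wallet does not have to run on the block-producing server. A hedged sketch of the relevant config.ini entries (the key values are placeholders; check your build's `witness_node --help` output for the exact option names and formats):

```ini
# Witness ID this node should produce blocks for
witness-id = "1.6.1595"
# [public-key, WIF-private-key] pair for the signing key (placeholder values)
private-key = ["GPH6PubKeyPlaceholder...", "5JWifPrivateKeyPlaceholder..."]
```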
Check your witness ID using get_witness. Mine has also changed since the last snapshot.
delegate-1.lafona (1.6.1531) is up and ready. Would someone mind voting me in? I am having no luck finding a key with a balance worth more than 0.1 core.
…represent a missed block:
247252ms th_a application.cpp:356 handle_block ] Got block #20775 from network
252263ms th_a application.cpp:356 handle_block ] Got block #20776 from network
257272ms th_a application.cpp:356 handle_block ] Got block #20777 from network
262655ms th_a application.cpp:356 handle_block ] Got block #20778 from network
266000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
266000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441163103861438 next_second: 2015-09-02T03:05:05
282256ms th_a application.cpp:356 handle_block ] Got block #20779 from network
287257ms th_a application.cpp:356 handle_block ] Got block #20780 from network
292260ms th_a application.cpp:356 handle_block ] Got block #20781 from network
297254ms th_a application.cpp:356 handle_block ] Got block #20782 from network
I fixed my stats page for the new testnet. It is still very basic but serves its purpose:
http://stats.bitshares.eu/
It is written in plain JS/HTML/CSS, and most web devs would consider the code "ugly". Anyway, you can find it here:
https://github.com/BitSharesEurope/stats.bitshares.eu
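A stats page like this only needs the node's RPC endpoint. A minimal sketch of the request such a page can send, assuming an unrestricted node that exposes the database API as API 0 (`get_dynamic_global_properties` is the call that carries the head block number):

```python
import json

def dynamic_props_request(request_id=1):
    # JSON-RPC payload for the database API call.
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "call",
        "params": [0, "get_dynamic_global_properties", []],
        "id": request_id,
    })

def head_block_number(response_text):
    # Extract the head block number from the node's reply.
    return json.loads(response_text)["result"]["head_block_number"]
```

POST the payload to the node's RPC endpoint (e.g. the `--rpc-endpoint` address used when starting witness_node) and poll it on a timer to track sync progress.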
It happened at block 19245:
2971563ms th_a application.cpp:356 handle_block ] Got block #19244 from network
2971564ms th_a api.cpp:867 on_objects_changed ] ids: ["2.8.19244","1.6.1","1.13.63","2.1.0"]
2971564ms th_a api.cpp:867 on_objects_changed ] ids: ["2.8.19244","1.6.1","1.13.63","2.1.0"]
2975337ms th_a application.cpp:356 handle_block ] Got block #19246 from network
2975337ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2976894ms th_a application.cpp:356 handle_block ] Got block #19247 from network
2976895ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2977641ms th_a application.cpp:356 handle_block ] Got block #19248 from network
2977641ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2982830ms th_a application.cpp:356 handle_block ] Got block #19249 from network
2982830ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2985413ms th_a application.cpp:356 handle_block ] Got block #19250 from network
2985413ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
2991159ms th_a application.cpp:356 handle_block ] Got block #19251 from network
2991159ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
.......
75407ms th_a application.cpp:356 handle_block ] Got block #19379 from network
75407ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
80415ms th_a application.cpp:356 handle_block ] Got block #19380 from network
80415ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link.
1550122ms th_a application.cpp:500 get_blockchain_synop ] reference_point: 0000000000000000000000000000000000000000 number_of_blocks_after_reference_point: 0 result: ["00000b2c27d70d418df26d4c6f7ea9c3bf278881","00002b2c4baf627959890ae6fcb7fa58a0a1fd89","00003b2c9362d330e2c2fc81702d2e365fac966f","0000432c4c5baebfdd5189f12edf44c1dd7c2d93","0000472c1bef0cf45f0e0be7fe2a60d34a83711c","0000492c197c3deeb618c982d64acdb26aa5427e","00004a2cfc255375ac702afb59a3c0516e77675f","00004aac0824351772902693a572330a2f5b5562","00004aec708a8a7265569807cc0aaf2907bb01f9","00004b0c8a04e3876fc3487ebef34d96b1e946d6","00004b1c50342d1c658d156a835e808f67a336fc","00004b247ee7327a4b4088921aa8317db8d009e5","00004b28f364f49e3e1c942a6368c8d190eba2a9","00004b2a5a9540578d4a3a03eac584f3acf88383","00004b2bad1237b1cf7c26dee14009b8812f4400","00004b2c721c0a7d372e05e78a846636be3c56e6"]
1551317ms th_a application.cpp:378 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
second_branch_itr != _index.get<block_id>().end():
{}
th_a fork_database.cpp:194 fetch_branch_from
{"first":"00004bb44c5ea6f09c4523f59c1ee2d4d4c46795","second":"00004b2c721c0a7d372e05e78a846636be3c56e6"}
th_a fork_database.cpp:225 fetch_branch_from
{"new_block":{"previous":"00004b2c721c0a7d372e05e78a846636be3c56e6","timestamp":"2015-09-02T00:49:20","witness":"1.6.30","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f06a8c6497711b502229c690492345f8df531e3b88916acaed6cc69dba4ef31a0234bac5808e0c550f9b3c9f0627791c663fe07aee3d60d4f5777fb2f83fbf2d0","transactions":[]}}
th_a db_block.cpp:176 _push_block
1551320ms th_a application.cpp:378 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ): attempting to push a block that is too old
{"item->num":19246,"head":19380,"max_size":10}
th_a fork_database.cpp:69 _push_block
{"new_block":{"previous":"00004b2d2c2ca421c5c1dea971536d794fb39909","timestamp":"2015-09-02T00:49:25","witness":"1.6.71","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"2000f6e8c6e35dfe28afdd202990416a3575852211b7d0b2872f5b69127da88bb9232179133fbaae82fb9c7af4d115552703ac668f9cdce4f103733099994e6ff6","transactions":[]}}
th_a db_block.cpp:176 _push_block
1551320ms th_a application.cpp:378 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ): attempting to push a block that is too old
{"item->num":19247,"head":19380,"max_size":10}
th_a fork_database.cpp:69 _push_block
{"new_block":{"previous":"00004b2e2a410dbcbf7d98869f49d3965c4ff628","timestamp":"2015-09-02T00:49:30","witness":"1.6.92","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"2063834aa931f46e4650b465f130c88c489666e89f6635a5226f059185e6b1eeca4c12d552b3da728c58e4dc41aec3b425e0636e9ff408782371c17346e24203d1","transactions":[]}}
th_a db_block.cpp:176 _push_block
1551320ms th_a application.cpp:378 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ): attempting to push a block that is too old
{"item->num":19248,"head":19380,"max_size":10}
th_a fork_database.cpp:69 _push_block
{"new_block":{"previous":"00004b2fbf3e595635934a9d1e0bc71c361650b8","timestamp":"2015-09-02T00:49:35","witness":"1.6.21","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"2050531102e02418d6369765e238355619d025960629043d2bf86af1b95a2c18336053cd369119a3b253fdca0c700d686187e7befaccfc4377f464c7f41987c64c","transactions":[]}}
th_a db_block.cpp:176 _push_block
1551321ms th_a application.cpp:378 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ): attempting to push a block that is too old
{"item->num":19249,"head":19380,"max_size":10}
th_a fork_database.cpp:69 _push_block
{"new_block":{"previous":"00004b30452602f87ae249ca2b484965ff9bce20","timestamp":"2015-09-02T00:49:40","witness":"1.6.13","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"2007094c94e1f988276184214c83c584994974914d5b052e5c4060f2ad1b4baed634b6249c604ef8ffac4af06f8e92524aa4d399abe617cd14195175f947e67a4b","transactions":[]}}
th_a db_block.cpp:176 _push_block
1551321ms th_a application.cpp:378 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ): attempting to push a block that is too old
{"item->num":19250,"head":19380,"max_size":10}
th_a fork_database.cpp:69 _push_block
{"new_block":{"previous":"00004b31be06fe671e4b289061961220d83b45d3","timestamp":"2015-09-02T00:49:45","witness":"1.6.54","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"20708f798868d17b3237e6e79a19e19397ba716c97cc92c881deb03190879904b04ffe7b3e8b50fab1ba20bd75b815bfd3ead8c5f407c480b7a2dd378faa369a74","transactions":[]}}
th_a db_block.cpp:176 _push_block
...................
tem->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ): attempting to push a block that is too old
{"item->num":19254,"head":19380,"max_size":10}
th_a fork_database.cpp:69 _push_block
{"new_block":{"previous":"00004b356ca14f441e66d190ea1316a710fbc7c6","timestamp":"2015-09-02T00:50:20","witness":"1.6.19","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"206ce785ed9cac4a4eff9d2e3c6b5c7a1eefbbd6da7a00e9ad08c4b23aa16c130940e2de4421de0a7ff654cd7b75946aaa07de0bbc34a8ffe14634746b5e8b6d78","transactions":[]}}
th_a db_block.cpp:176 _push_block
1817912ms ntp ntp.cpp:77 request_now ] resolving... ["pool.ntp.org",123]
1817919ms ntp ntp.cpp:81 request_now ] sending request to 202.118.1.130:123
1817962ms ntp ntp.cpp:147 read_loop ] received ntp reply from 202.118.1.130:123
1817962ms ntp ntp.cpp:161 read_loop ] ntp offset: 618, round_trip_delay 43201
1817962ms ntp ntp.cpp:177 read_loop ] ntp_delta_time updated to 618
1860115ms th_a application.cpp:500 get_blockchain_synop ] reference_point: 0000000000000000000000000000000000000000 number_of_blocks_after_reference_point: 0 result: ["00000b2c27d70d418df26d4c6f7ea9c3bf278881","00002b2c4baf627959890ae6fcb7fa58a0a1fd89","00003b2c9362d330e2c2fc81702d2e365fac966f","0000432c4c5baebfdd5189f12edf44c1dd7c2d93","0000472c1bef0cf45f0e0be7fe2a60d34a83711c","0000492c197c3deeb618c982d64acdb26aa5427e","00004a2cfc255375ac702afb59a3c0516e77675f","00004aac0824351772902693a572330a2f5b5562","00004aec708a8a7265569807cc0aaf2907bb01f9","00004b0c8a04e3876fc3487ebef34d96b1e946d6","00004b1c50342d1c658d156a835e808f67a336fc","00004b247ee7327a4b4088921aa8317db8d009e5","00004b28f364f49e3e1c942a6368c8d190eba2a9","00004b2a5a9540578d4a3a03eac584f3acf88383","00004b2bad1237b1cf7c26dee14009b8812f4400","00004b2c721c0a7d372e05e78a846636be3c56e6"]
1861225ms th_a application.cpp:378 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ): attempting to push a block that is too old
{"item->num":19245,"head":19380,"max_size":10}
th_a fork_database.cpp:69 _push_block
{"new_block":{"previous":"00004b2c721c0a7d372e05e78a846636be3c56e6","timestamp":"2015-09-02T00:49:20","witness":"1.6.30","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f06a8c6497711b502229c690492345f8df531e3b88916acaed6cc69dba4ef31a0234bac5808e0c550f9b3c9f0627791c663fe07aee3d60d4f5777fb2f83fbf2d0","transactions":[]}}
th_a db_block.cpp:176 _push_block
1861226ms th_a application.cpp:378 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ): attempting to push a block that is too old
{"item->num":19246,"head":19380,"max_size":10}
th_a fork_database.cpp:69 _push_block
Still have the same problem with the test2 tag. Several times, my witness node got stuck at some block.
I watched the p2p log: it received block 419543, then 419544, then missed 419545, then received 419546, and so on.
It will never receive block 419545; I have to restart the witness node, and then it requests the block from another node.
I think we need logic so that when the witness node has missed a block for more than 10 blocks, it requests it from another node.
Eric checked in some fixes that may address this particular problem. We do have logic to request from another node, but the timeouts were still tuned to BTS 1 timeframes so we reduced the timeout for BTS 2.
Please check this issue https://github.com/cryptonomex/graphene/issues/264
In the test2b tag I have increased the allowance for out-of-order blocks. If it still happens then something else is going on.
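For reference, the assertion in the logs above corresponds to a window check like the following. This is a simplified sketch with illustrative names, not the actual graphene source: the fork database only accepts blocks whose number lies within `max_size` blocks of the current head.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>

// Simplified sketch of the "attempting to push a block that is too old"
// check (illustrative names, not the actual graphene code). A block is
// accepted only if its number is inside the sliding window that ends at
// the current head.
bool block_is_recent_enough(int64_t item_num, int64_t head_num, int64_t max_size)
{
    // Mirrors: item->num > std::max<int64_t>(0, int64_t(_head->num) - _max_size)
    return item_num > std::max<int64_t>(int64_t(0), head_num - max_size);
}
```

With the values from the failing log entry, `block_is_recent_enough(19245, 19380, 10)` is false, which is why the push was rejected; raising `max_size` (as in the test2b tag) widens the window.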
2015-09-02T02:12:15 th_a:invoke handle_block handle_block ] Got block #20187 from network application.cpp:356
2015-09-02T02:12:15 th_a:invoke handle_block on_objects_changed ] ids: ["2.8.20187","1.6.64","1.13.72","2.1.0"]
api.cpp:867
2015-09-02T02:12:15 th_a:invoke handle_block on_objects_changed ] ids: ["2.8.20187","1.6.64","1.13.72","2.1.0"]
api.cpp:867
2015-09-02T02:12:25 th_a:invoke handle_block handle_block ] Got block #20189 from network application.cpp:356
2015-09-02T02:12:25 th_a:invoke handle_block push_block ] Pushing block to fork database that failed to link.
fork_database.cpp:57
2015-09-02T02:12:30 th_a:invoke handle_block handle_block ] Got block #20190 from network application.cpp:356
2015-09-02T02:12:30 th_a:invoke handle_block push_block ] Pushing block to fork database that failed to link.
fork_database.cpp:57
...
...
...
2015-09-02T02:20:22 th_a:invoke handle_block handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ): attempting to push a block that is too old
{"item->num":20188,"head":20278,"max_size":10}
th_a fork_database.cpp:69 _push_block
{"new_block":{"previous":"00004edb437dfbc741e818a8a9754a653da79111","timestamp":"2015-09-02T02:12:20","witness":"1.6.3706","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f0f14d642abd9b0d26e0d2e5960ad5b4e1d340282317fd1948f942e1f857804393bbeca54636b9f3ad68e44f9843648aa403fb5dc13024649d259558db3c119f1","transactions":[]}}
th_a db_block.cpp:176 _push_block application.cpp:378
The latest build of graphene is broken again (master branch, commit 00a2d2dac73188e7e265163dfec04c2a523b8e23). Anyone else getting this when trying to update the submodule?
fatal: reference is not a tree: 80d967a70d21d26d27ef3a1544a177925b2a7bbe
Unable to checkout '80d967a70d21d26d27ef3a1544a177925b2a7bbe' in submodule path 'libraries/fc'
Sorry about that, I goofed -- I forgot to push a commit to fc. Try again, it should be good now.
Looks like someone is broadcasting lots of transactions. Looking good. Keep it up.
My stats page is off by a factor of 5 since I assumed 1-second blocks; I need to pull the data from the blockchain on the next update.
It's me. The network is much more stable than previous testnets. Unfortunately I am only able to generate 19 tps (93 txs per block).
Edit: How can I make as many transactions as possible?
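For reference, those two numbers are consistent given the 5-second block interval (a small sketch; the interval is inferred from the "factor of 5" remark in the thread, not from chain parameters):

```cpp
#include <cassert>

// Sanity check of the quoted rate: 93 transactions per block at a
// 5-second block interval works out to 93 / 5 = 18.6, which rounds to
// the reported ~19 tps.
double tps(int txs_per_block, int block_interval_seconds)
{
    return static_cast<double>(txs_per_block) / block_interval_seconds;
}
```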
I think you will need multiple CLI wallets.
How much CPU are your witness node and wallet using?
Normally, approximately 3%. (1vCPU, 3.75 GB memory)
With a lot of transactions, it goes up to 50%
If you can, try running two witness nodes and verify that only the one being used with your wallet spikes during the flood attempt. On my side I see almost no CPU usage even when you are flooding.
Interesting. I am just running top over ssh on my VPS, and when I either manually spam transactions or run a flood_network command I see a dramatic increase in the CPU usage of witness_node, up to 100% (it's a dual-core VPS, and I think top just references the core it's running on). Would there be a better way to test this locally, or would it be better to find a way to coordinate between multiple users? IRC, or Mumble?
Whenever I try flood_network, I get a "Segmentation fault (core dumped)" error. How are you able to run this command?
I am using it as:
unlocked >>> flood_network test34.puppies 200
flood_network test34.puppies 200
3593111ms th_a wallet.cpp:1936 flood_network ] Created 66 accounts in 671 milliseconds
3594438ms th_a wallet.cpp:1946 flood_network ] Transferred to 132 accounts in 1327 milliseconds
3595016ms th_a wallet.cpp:1955 flood_network ] Issued to 66 accounts in 577 milliseconds
null
unlocked >>>
test34.puppies is not a real account, but it seems to error out a lot if you use it with the same name. I have never had it cause a segmentation fault, though. The errors are very long, but not serious enough to kill the client.
Sent some CORE to puppies :)
thanks clayop
I used a new name each time but all gave me the same error. Maybe a bug, but it does not have to be fixed IMO (will anyone use flood_network on the real blockchain? ;) )
Yes, you're correct. But my VPS's CPU spiked a little (from 3% to 30%) when I was spamming from another box.
Edit: I have to correct my statement. There's no spike.
2015-09-03T00:30:20 th_a:invoke handle_block handle_block ] Got block #34498 from network application.cpp:356
...
...
2015-09-03T00:30:30 th_a:invoke handle_transaction handle_transaction ] Got transaction from network application.cpp:391
2015-09-03T00:30:30 th_a:invoke handle_transaction handle_transaction ] Got transaction from network application.cpp:391
2015-09-03T00:30:30 th_a:invoke handle_block handle_block ] Got block #34500 from network application.cpp:356
2015-09-03T00:30:30 th_a:invoke handle_block push_block ] Pushing block to fork database that failed to link. fork_database.cpp:57
2015-09-03T00:30:35 th_a:invoke handle_transaction handle_transaction ] Got transaction from network application.cpp:391
2015-09-03T00:30:35 th_a:invoke handle_block handle_block ] Got block #34501 from network application.cpp:356
2015-09-03T00:30:35 th_a:invoke handle_block push_block ] Pushing block to fork database that failed to link. fork_database.cpp:57
2015-09-03T00:30:37 th_a:invoke handle_block handle_block ] Got block #34499 from network application.cpp:3
...
...
2015-09-03T00:30:37 th_a:invoke handle_block handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end():
{}
th_a db_block.cpp:487 _apply_transaction
{"trx":{"ref_block_num":34497,"ref_block_prefix":2564640669,"expiration":"2015-09-03T00:31:49","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.63355","amount":{"amount":100000,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["1f5f41592441c4f40af47b118e80a0c4786a1d768cf6a8d3204d7c30d79d33e971463c4dc78f1c5fc8f2ae7637ec21e8b523b089baef754422fe99108576b343c8"]}}
th_a db_block.cpp:543 _apply_transaction
{"next_block.block_num()":34499}
th_a db_block.cpp:448 _apply_block
...
...
2015-09-03T00:30:37 th_a:invoke handle_transaction handle_transaction ] Got transaction from network application.cpp:391
2015-09-03T00:30:37 th_a:invoke handle_transaction on_objects_changed ] ids: ["2.5.40","2.5.18","2.6.63354","2.7.4698"] api.cpp:867
2015-09-03T00:30:37 th_a:invoke handle_transaction on_objects_changed ] ids: ["2.5.40","2.5.18","2.6.63354","2.7.4698"] api.cpp:867
2015-09-03T00:30:40 th_a:invoke handle_block handle_block ] Got block #34502 from network application.cpp:356
2015-09-03T00:30:40 th_a:invoke handle_block push_block ] Pushing block to fork database that failed to link. fork_database.cpp:57
One of my nodes got out of sync while you were flooding the network, with an assertion failure I had never seen before. The related log is above.
Full log files are here:
https://drive.google.com/open?id=0B3xrm70jSHn4U3h3V3liOUlOakk
https://drive.google.com/open?id=0B3xrm70jSHn4TmtNYWNaVm1yalU
https://drive.google.com/open?id=0B3xrm70jSHn4SURQZjBZaE1rVGc
The other node works fine.
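The assertion in that log is a duplicate-transaction guard. A simplified sketch of the check (illustrative types and flag value; graphene actually uses a multi-index container keyed by transaction id):

```cpp
#include <cassert>
#include <cstdint>
#include <set>
#include <string>

// Illustrative flag value, not graphene's actual constant.
const uint32_t skip_transaction_dupe_check = 1u << 0;

// A transaction may be applied only if its id has not been applied
// before, unless the dupe check is explicitly skipped. Mirrors:
// (skip & skip_transaction_dupe_check) ||
//     trx_idx.indices().get<by_trx_id>().find(trx_id) == ...end()
bool can_apply_transaction(const std::set<std::string>& applied_trx_ids,
                           const std::string& trx_id,
                           uint32_t skip)
{
    return (skip & skip_transaction_dupe_check) != 0
           || applied_trx_ids.find(trx_id) == applied_trx_ids.end();
}
```

The failure above means block 34499 arrived carrying a transaction the node had already applied (it had seen it from the network out of order), so the block push was rejected.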
Yes, I have a 500G disk mounted; 110G is used now.
Do you have enough disk space now?
Strange. Restarting doesn't help. Will try a resync.
2015-09-03T09:46:28 th_a:?unnamed? main ] Started witness node on a chain with 34498 blocks. main.cpp:165
...
2015-09-03T09:48:39 th_a:invoke handle_block push_block ] Pushing block to fork database that failed to link. fork_database.cpp:57
2015-09-03T09:46:30 p2p:message read_loop fetch_next_batch_of_ ] sync: sending a request for the next items after 000086c202f51da427ebf0507404bc9b6d192f96 to peer 176.9.234.167:40936, (full request is ["000006c280cdf25ca122561981716c74045f16fe","000046c2482fc56dda591b0bea7a47ccbd64d6b2","000066c27fc3abd468f7dd5ad1ce8939614d9d64","000076c2a8ad357d929c0967d98ba955521c11b4","00007ec23da7bcf6188fa3e6e57f0216e78cba1a","000082c22a40c3cbc1152bbd18239e9ffca051dc","000084c22add8beb2d06e3adb13c3573a56ae0b4","000085c29de6e5d439166100ab9d809c730e006c","00008642bd45a1a5711a74bacf4e73cfd5ae69cd","00008682ea2e75d79000e7a4c3411368d7a0a74a","000086a273f0cd7e751e91f6b0795486303009a2","000086b28863612b25b49a1c0a95a3525e42529c","000086ba6b31127d85ba6e1a8b5ea1145a0766e0","000086be74b88f7adb24796112e15c72b04a4e59","000086c0a1c19fb2c9b2d8d92f1caf812fda4bde","000086c19d4fdd989e8e7a62830c6f58e73f33be","000086c202f51da427ebf0507404bc9b6d192f96"]) node.cpp:2294
...
...
2015-09-03T09:46:30 p2p:message read_loop on_blockchain_item_i ] sync: received a list of 669 available items from 192.241.198.6:39809 node.cpp:2310
2015-09-03T09:46:30 p2p:message read_loop on_blockchain_item_i ] is_first_item_for_other_peer: false. item_hashes_received.size() = 669 node.cpp:2356
2015-09-03T09:46:30 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer's last block the delegate has seen is now 000086c202f51da427ebf0507404bc9b6d192f96 (1) node.cpp:2368
2015-09-03T09:46:30 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer's last block the delegate has seen is now 000086c3bf20d2e5eeb1039f38c221b46880f2f0 (2) node.cpp:2368
2015-09-03T09:46:30 p2p:message read_loop on_blockchain_item_i ] after removing all items we have already seen, item_hashes_received.size() = 667 node.cpp:2371
2015-09-03T09:46:30 p2p:message read_loop trigger_fetch_sync_i ] Triggering fetch sync items loop now node.cpp:1084
2015-09-03T09:46:30 p2p:fetch_sync_items_loop fetch_sync_items_loo ] beginning another iteration of the sync items loop node.cpp:1023
2015-09-03T09:46:30 p2p:fetch_sync_items_loop request_sync_items_f ] requesting 100 item(s) ["000086c4e3da8babd770344cb8db18ee89a9d09a","000086c50ebdc7f4b5ff303fee02b53f3aec5773",...,"00008727dc9a24b61d0d51454278f2282e89672d"] from peer 192.241.198.6:39809 node.cpp:1007
...
...
2015-09-03T09:48:39 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Successfully pushed sync block 34500 (id:000086c4e3da8babd770344cb8db18ee89a9d09a) node.cpp:2793
2015-09-03T09:48:39 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] sync: client accpted the block, we now have only 5937 items left to fetch before we're in sync node.cpp:2831
http://stats.bitshares.eu/
Just installed an upgrade to this.
that is cool
sources are here:
https://github.com/BitSharesEurope/stats.bitshares.eu
Definitely more to come once I figure out what trading ops look like.
Very nice Xeroc. It seems much more stable. It still locks up pretty badly when I am flooding the network, but recovers nicely.
Yes, I noticed that too. It is currently storing all blocks; I removed that. I will definitely need to work more on the scalability.
3570357ms th_a application.cpp:228 operator() ] Initializing database...
11127ms th_a thread.cpp:95 thread ] name:ntp tid:140570790082304
11137ms ntp ntp.cpp:77 request_now ] resolving... ["pool.ntp.org",123]
11145ms th_a thread.cpp:95 thread ] name:p2p tid:140570760709888
11153ms ntp ntp.cpp:81 request_now ] sending request to 97.107.128.58:123
11160ms th_a application.cpp:117 reset_p2p_node ] Adding seed node 104.236.118.105:1776
11166ms th_a application.cpp:129 reset_p2p_node ] Configured p2p node to listen on 0.0.0.0:34366
11169ms th_a application.cpp:179 reset_websocket_serv ] Configured websocket rpc to listen on 127.0.0.1:8090
11184ms th_a thread.cpp:115 ~thread ] calling quit() on p2p
11184ms th_a thread.cpp:160 quit ] destroying boost thread 140570760709888
11184ms p2p thread.cpp:246 exec ] thread canceled: 9 canceled_exception: Canceled
cancellation reason: [none given]
{"reason":"[none given]"}
p2p thread_d.hpp:463 start_next_fiber
11223ms ntp ntp.cpp:147 read_loop ] received ntp reply from 97.107.128.58:123
11223ms ntp ntp.cpp:161 read_loop ] ntp offset: 1475, round_trip_delay 70059
11223ms ntp ntp.cpp:177 read_loop ] ntp_delta_time updated to 1475
13408ms th_a main.cpp:173 main ] Exiting with error:
13 N11websocketpp9exceptionE: Underlying Transport Error
Underlying Transport Error:
{"what":"Underlying Transport Error"}
th_a application.cpp:182 reset_websocket_server
{}
th_a application.cpp:309 startup
Somehow I think I lost the private signing key, because dump_private_keys only shows one set, and it belongs to the account itself. I guess I will have to register a new witness... unless there's a way to generate a new signing key pair?
Btw: how do you read this?
1522956ms th_a witness.cpp:240 block_production_loo ] slot: 1 scheduled_witness: 1.6.1537 scheduled_time: 2015-08-21T16:25:21 now: 2015-08-21T16:25:21
1523956ms th_a witness.cpp:240 block_production_loo ] slot: 2 scheduled_witness: 1.6.56 scheduled_time: 2015-08-21T16:25:22 now: 2015-08-21T16:25:22
1524185ms th_a application.cpp:348 handle_block ] Got block #76718 from network
1524234ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"00012bae7445dfa925a756d82e6f68d9be6e20be"}
1524234ms th_a application.cpp:451 get_item ] Serving up block #76718
1524386ms th_a application.cpp:443 get_item ] Request for item {"item_type":1001,"item_hash":"00012bae7445dfa925a756d82e6f68d9be6e20be"}
1524386ms th_a application.cpp:451 get_item ] Serving up block #76718
Did witness 1.6.1537 produce its block? And why is the init witness directly "queueing up", i.e. getting the next slot?
I read it as 1.6.1537 missed the slot, and 1.6.56 filled in.
Ben had this problem too. Will look into this.
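That reading matches how slot numbers work. A minimal sketch, assuming a fixed block interval (here the 1-second interval that log's testnet used; this is an illustrative helper, not the graphene API):

```cpp
#include <cassert>
#include <cstdint>

// The "slot:" number in block_production_loop counts intervals past the
// head block's timestamp: slot 1 is the next expected block, so a block
// produced at slot 2 means the slot-1 witness missed its turn and the
// next scheduled witness fills in.
int64_t slot_number(int64_t now, int64_t head_block_time, int64_t interval)
{
    return (now - head_block_time) / interval;
}
```

In the quoted log the head block was at 16:25:20, so 16:25:21 is slot 1 (missed by 1.6.1537) and 16:25:22 is slot 2 (filled by 1.6.56).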
I have run out of spam.
Anyone know how the fee structure works? How are lifetime member funds refunded? Is there a delay?
Sent more funds.
Well that was fast. I managed to empty my wallet and generate some giant log files, but nothing special there.
Bam! Good job.
get_block 47457
{
"previous": "0000b96067bc8afae17b8e6b9a5db4312848b454",
"timestamp": "2015-09-03T20:49:30",
"witness": "1.6.1527",
"transaction_merkle_root": "0000000000000000000000000000000000000000",
"extensions": [],
"witness_signature": "1f33614ffcb1e2b63110f6b0ed873a6fca3350ce0f66167505d8b11e60489f44f616201cc003c02dee28160722fcd99b4796929ffd630c1b7ba2c3fc9948d558e4",
"transactions": [],
"block_id": "0000b96116fe0ebf0f0a0afa47d54936287de751",
"signing_key": "0321731744e219f69c9dd5cf43127205da272ca02d7927b4f0a90a33e34a812fee"
}
Is there any easy way to check witness production?
Quote: "transaction_merkle_root": "0000000000000000000000000000000000000000",
What is up with this line?
I did find a good way of ensuring that your witness is producing blocks. Interesting. :D
If you run get_witness <witness_name>, it will give you your vesting pay object as "pay_vb"; from there you can run get_object 1.13.163:
unlocked >>> get_witness dele-puppy
get_witness dele-puppy
{
"id": "1.6.1527",
"witness_account": "1.2.22294",
"last_aslot": 49430,
"signing_key": "GPH75xxKG4ZeztPpnhmFch99smunUWMvDy9mB6Le497vpAA3XUXaD",
"pay_vb": "1.13.163",
"vote_id": "1:1526",
"total_votes": "29174714611",
"url": ""
}
unlocked >>> get_object 1.13.163
get_object 1.13.163
[{
"id": "1.13.163",
"owner": "1.2.22294",
"balance": {
"amount": 326000000,
"asset_id": "1.3.0"
},
"policy": [
1,{
"vesting_seconds": 86400,
"start_claim": "1970-01-01T00:00:00",
"coin_seconds_earned": "28080000000000",
"coin_seconds_earned_last_update": "2015-09-03T20:56:30"
}
]
}
]
unlocked >>>
Every time you generate a block, the "amount" goes up by 1000000, so you can see that dele-puppy has generated 326 blocks. Not quite real time, but easier than watching the witness_node screen or stats.bitshares.eu for your number.
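That arithmetic can be sketched as follows (a hypothetical helper; the 1000000-per-block figure is the observation from the post above, not a chain parameter):

```cpp
#include <cassert>
#include <cstdint>

// Each produced block credits pay_per_block (in core satoshis) to the
// witness's vesting pay balance on this testnet, so dividing the pay_vb
// "amount" by the per-block pay gives the number of blocks produced.
int64_t blocks_produced(int64_t pay_vb_amount, int64_t pay_per_block)
{
    return pay_vb_amount / pay_per_block;
}
```

With the balances posted below, this gives 379 and 174 blocks for the other two witnesses.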
get_object 1.13.160
[{
"id": "1.13.160",
"owner": "1.2.38899",
"balance": {
"amount": 379000000,
"asset_id": "1.3.0"
},
"policy": [
1,{
"vesting_seconds": 86400,
"start_claim": "1970-01-01T00:00:00",
"coin_seconds_earned": "32659200000000",
"coin_seconds_earned_last_update": "2015-09-03T21:09:00"
}
]
}
]
get_object 1.13.162
[{
"id": "1.13.162",
"owner": "1.2.22310",
"balance": {
"amount": 174000000,
"asset_id": "1.3.0"
},
"policy": [
1,{
"vesting_seconds": 86400,
"start_claim": "1970-01-01T00:00:00",
"coin_seconds_earned": "13755490000000",
"coin_seconds_earned_last_update": "2015-09-03T21:19:15"
}
]
}
]
can u write a little tut on how to start a witness?
Reminder that:
apt-add-repository ppa:showard314/ppa
has graphene Ubuntu packages built daily off of master in case you're a lurker that wants to jump in and give it a shot
I just need more CORE to keep spamming:
"lifetime_fees_paid": "38644604445",
puppies please
Which account should I send you core on?
500K core has been sent your way.
get_object 1.13.171
[{
"id": "1.13.171",
"owner": "1.2.72728",
"balance": {
"amount": 165000000,
"asset_id": "1.3.0"
},
"policy": [
1,{
"vesting_seconds": 86400,
"start_claim": "1970-01-01T00:00:00",
"coin_seconds_earned": "9032250000000",
"coin_seconds_earned_last_update": "2015-09-03T22:04:05"
}
]
}
]
48340 '1.6.70' 'init69'
48339 '1.6.18' 'init17'
48338 '1.6.4232' 'spartako'
48337 '1.6.3968' 'riverhead'
48336 '1.6.64' 'init63'
48335 '1.6.7' 'init6'
48334 '1.6.75' 'init74'
48333 '1.6.46' 'init45'
48332 '1.6.5248' 'betaxtrade'
48331 '1.6.45' 'init44'
48330 '1.6.50' 'init49'
48329 '1.6.49' 'init48'
48328 '1.6.31' 'init30'
48327 '1.6.40' 'init39'
48326 '1.6.2' 'init1'
48325 '1.6.62' 'init61'
48324 '1.6.3' 'init2'
48323 '1.6.27' 'init26'
48322 '1.6.28' 'init27'
48321 '1.6.21' 'init20'
48320 '1.6.92' 'init91'
48319 '1.6.23' 'init22'
48318 '1.6.47' 'init46'
48317 '1.6.1538' 'delegate-clayop'
48316 '1.6.37' 'init36'
48315 '1.6.55' 'init54'
48314 '1.6.84' 'init83'
48313 '1.6.74' 'init73'
48312 '1.6.68' 'init67'
48311 '1.6.22' 'init21'
48310 '1.6.78' 'init77'
48309 '1.6.83' 'init82'
48308 '1.6.42' 'init41'
48307 '1.6.38' 'init37'
48306 '1.6.53' 'init52'
48305 '1.6.81' 'init80'
48304 '1.6.11' 'init10'
48303 '1.6.89' 'init88'
48302 '1.6.69' 'init68'
48301 '1.6.56' 'init55'
48300 '1.6.35' 'init34'
48299 '1.6.82' 'init81'
48298 '1.6.5247' 'in.abit'
48297 '1.6.72' 'init71'
48296 '1.6.39' 'init38'
48295 '1.6.36' 'init35'
48294 '1.6.16' 'init15'
48293 '1.6.29' 'init28'
48292 '1.6.67' 'init66'
48291 '1.6.12' 'init11'
48290 '1.6.41' 'init40'
48289 '1.6.34' 'init33'
48288 '1.6.8' 'init7'
48287 '1.6.66' 'init65'
48286 '1.6.4' 'init3'
48285 '1.6.70' 'init69'
48284 '1.6.64' 'init63'
48283 '1.6.90' 'init89'
48282 '1.6.17' 'init16'
48281 '1.6.14' 'init13'
48280 '1.6.20' 'init19'
48279 '1.6.32' 'init31'
48278 '1.6.10' 'init9'
48277 '1.6.79' 'init78'
48276 '1.6.45' 'init44'
48275 '1.6.80' 'init79'
48274 '1.6.1531' 'delegate-1.lafona'
48273 '1.6.26' 'init25'
48272 '1.6.54' 'init53'
48271 '1.6.18' 'init17'
48270 '1.6.9' 'init8'
48269 '1.6.85' 'init84'
48268 '1.6.61' 'init60'
48267 '1.6.7' 'init6'
48266 '1.6.60' 'init59'
48265 '1.6.86' 'init85'
48264 '1.6.46' 'init45'
48263 '1.6.25' 'init24'
48262 '1.6.33' 'init32'
48261 '1.6.58' 'init57'
48260 '1.6.5248' 'betaxtrade'
48259 '1.6.43' 'init42'
48258 '1.6.59' 'init58'
48257 '1.6.1' 'init0'
48256 '1.6.19' 'init18'
48255 '1.6.48' 'init47'
48254 '1.6.91' 'init90'
48253 '1.6.44' 'init43'
48252 '1.6.73' 'init72'
48251 '1.6.65' 'init64'
48250 '1.6.87' 'init86'
48249 '1.6.88' 'init87'
48248 '1.6.24' 'init23'
48247 '1.6.6' 'init5'
48246 '1.6.30' 'init29'
48245 '1.6.52' 'init51'
48244 '1.6.51' 'init50'
48243 '1.6.4232' 'spartako'
48242 '1.6.57' 'init56'
48241 '1.6.76' 'init75'
48240 '1.6.1527' 'dele-puppy'
would there be any benefit spamming from multiple accounts? I have some time tonight and could join the party, I just don't have any core. I have my delegate on a VPS and would be spamming from my home computer. Also puppies, are you using the flood network command or the script you wrote?
I am spamming with a keyboard emulator called AutoKey. It lets you set up phrases as hotkeys: I set <up><enter> a whole bunch of times and bound it to a hotkey. I set up the wallet transfer once to make sure it is okay, and then spam the hotkey. Every time you hit the key it queues up another iteration of the phrase. I spam it for 30 seconds or so, and it will go for hours.
can u write a little tut on how to start a witness?
Reminder that
apt-add-repository ppa:showard314/ppa
has graphene Ubuntu packages built daily off of master, in case you're a lurker who wants to jump in and give it a shot.
i'm stuck when i try to run ./witness_node with the latest json... no idea what i have to do :D
wget https://github.com/cryptonomex/graphene/releases/download/test2b/aug-31-testnet-genesis.json
mkdir test
nano test/config.ini
Then copy the code below into the nano window, hit Ctrl-X, then y, then Enter.
# Endpoint for P2P node to listen on
p2p-endpoint = 0.0.0.0:1776
# P2P nodes to connect to on startup (may specify multiple times)
seed-node = 104.236.118.105:1776
# Pairs of [BLOCK_NUM,BLOCK_ID] that should be enforced as checkpoints.
# checkpoint =
# Endpoint for websocket RPC to listen on
rpc-endpoint = 127.0.0.1:8090
# Endpoint for TLS websocket RPC to listen on
# rpc-tls-endpoint =
# The TLS certificate file for this server
# server-pem =
# Password for this certificate
# server-pem-password =
# File to read Genesis State from
genesis-json = aug-31-testnet-genesis.json
# JSON file specifying API permissions
# api-access =
# Enable block production, even if the chain is stale.
enable-stale-production = false
# Percent of witnesses (0-99) that must be participating in order to produce blocks
required-participation = false
# Allow block production, even if the last block was produced by the same witness.
allow-consecutive = false
# ID of witness controlled by this node (e.g. "1.6.5", quotes are required, may specify multiple times)
# witness-id =
# Tuple of [PublicKey, WIF private key] (may specify multiple times)
private-key = ["GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV","5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3"]
# Account ID to track history for (may specify multiple times)
# track-account =
# Track market history by grouping orders into buckets of equal size measured in seconds specified as a JSON array of numbers
bucket-size = [15,60,300,3600,86400]
# How far back in time to track history for each bucket size, measured in the number of buckets (default: 1000)
history-per-size = 1000
# declare an appender named "stderr" that writes messages to the console
[log.console_appender.stderr]
stream=std_error
# declare an appender named "p2p" that writes messages to p2p.log
[log.file_appender.p2p]
filename=logs/p2p/p2p.log
# filename can be absolute or relative to this config file
# route any messages logged to the default logger to the "stderr" logger we
# declared above, if they are info level are higher
[logger.default]
level=info
appenders=stderr
# route messages sent to the "p2p" logger to the p2p appender declared above
[logger.p2p]
level=debug
appenders=p2p
After that, type witness_node -d test
Add a ./ to the beginning of that if you have built the source code yourself and haven't used maqs' PPA. Although if that's the case, you will need to run this in the same directory as your witness_node binary.
I am sure there is a much more elegant way to handle this, but I didn't want to take the time to figure it out. I knew this would work, and would be easy to set up.
sudo apt-get install autokey-gtk
get_object 2.6.63354
[{
"id": "2.6.63354",
"owner": "1.2.63354",
"most_recent_op": "2.9.104924",
"total_core_in_orders": 0,
"lifetime_fees_paid": "88442604445",
"pending_fees": 0,
"pending_vested_fees": 0
}
]
and then when I look at get_object 1.13.173
[{
"id": "1.13.173",
"owner": "1.2.63354",
"balance": {
"amount": "69011678875",
"asset_id": "1.3.0"
},
"policy": [
1,{
"vesting_seconds": 31536000,
"start_claim": "1970-01-01T00:00:00",
"coin_seconds_earned": "2176352305002000000",
"coin_seconds_earned_last_update": "2015-09-03T23:15:00"
}
]
}
]
Do we really have to wait a year for the 80% refund on fees?
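For what it's worth, the coin-seconds ("CDD") vesting rule, as I read it from the graphene source, doesn't make you wait a flat year: the claimable amount is coin_seconds_earned / vesting_seconds, capped at the balance. A sketch under that assumption:

```python
def claimable(balance, coin_seconds_earned, vesting_seconds):
    """Claimable portion of a CDD vesting balance (assumed rule:
    accrued coin-seconds divided by the vesting period, capped at balance)."""
    return min(balance, coin_seconds_earned // vesting_seconds)

# Figures from get_object 1.13.173 above: coin_seconds_earned has already
# hit its cap (balance * vesting_seconds), so the whole balance is claimable.
balance = 69_011_678_875
coin_seconds = 2_176_352_305_002_000_000
print(claimable(balance, coin_seconds, 31_536_000))  # -> 69011678875
```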
on ubuntu
sudo apt-get install autokey-gtk
Go to New and then select Phrase. Name your phrase. In the window, type <up><enter><up><enter><up><enter> a bunch of times. I just ctrl-c a few and then spam ctrl-v. Then save your phrase, set a hotkey down below, and you are good to go.
Thanks. Could you also share your script code?
98.0572% 48000 of 48951
witness_node: /home/user/src/graphene/libraries/fc/include/fc/optional.hpp:192: T& fc::optional<T>::operator*() [with T = graphene::chain::signed_block]: Assertion `_valid' failed.
Program received signal SIGABRT, Aborted.
0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007ffff6c050d8 in __GI_abort () at abort.c:89
#2 0x00007ffff6bfab86 in __assert_fail_base (
fmt=0x7ffff6d4b830 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=assertion@entry=0x29b610d "_valid",
file=file@entry=0x29b60d0 "/home/user/src/graphene/libraries/fc/include/fc/optional.hpp", line=line@entry=192,
function=function@entry=0x29bb260 <fc::optional<graphene::chain::signed_block>::operator*()::__PRETTY_FUNCTION__> "T& fc::optional<T>::operator*() [with T = graphene::chain::signed_block]") at assert.c:92
#3 0x00007ffff6bfac32 in __GI___assert_fail (assertion=0x29b610d "_valid",
file=0x29b60d0 "/home/user/src/graphene/libraries/fc/include/fc/optional.hpp", line=192,
function=0x29bb260 <fc::optional<graphene::chain::signed_block>::operator*()::__PRETTY_FUNCTION__> "T& fc::optional<T>::operator*() [with T = graphene::chain::signed_block]") at assert.c:101
#4 0x0000000001f07028 in fc::optional<graphene::chain::signed_block>::operator*() ()
#5 0x0000000002268967 in graphene::chain::database::reindex(fc::path, graphene::chain::genesis_state_type const&) ()
#6 0x0000000001ef7a62 in graphene::app::detail::application_impl::startup() ()
#7 0x0000000001eebfa2 in graphene::app::application::startup() ()
---Type <return> to continue, or q <return> to quit---
#8 0x0000000001ec268f in main ()
Can we have a spam party after mumble session? 8)
Can we have a spam party after mumble session? 8)
I think that would be great. I would love to see how high we could get our TPS
I am in.
witness id 1.6.234,
account id airdrop
I need more CORE to test, thanks all.
I am in.
witness id 1.6.234,
account id airdrop
I need more CORE to test, thanks all.
Sent. Can you join tomorrow's spam party?
I am in.
witness id 1.6.234,
account id airdrop
I need more CORE to test, thanks all.
Sent. Can you join tomorrow's spam party?
Thanks, of course.But how to join?
Go to New and then select Phrase. Name your phrase. In the window, type <up><enter><up><enter><up><enter> a bunch of times. I just ctrl-c a few and then spam ctrl-v. Then save your phrase, set a hotkey down below, and you are good to go.
Thanks. Could you also share your script code?
If you get a little bit too much spam going, click the A in your system tray and unclick enable expansions. (I think that will work. I have had to reboot a couple of times.)
# AutoKey script; the keyboard object is provided by AutoKey
import time
for i in range(3000):
    keyboard.send_keys("<up>" "<enter>")
    time.sleep(0.05)
{
"id": "1.6.1625",
...
"total_votes": 0,
"url": ""
}
get_witness airdrop
{
"id": "1.6.234",
"witness_account": "1.2.1854",
"last_aslot": 0,
"signing_key": "GPH6XXXXXXXXXXXXXXXXXXXXXXXX",
"vote_id": "1:233",
"total_votes": 204540615,
"url": ""
}
get_witness airdrop
{
"id": "1.6.234",
"witness_account": "1.2.1854",
"last_aslot": 0,
"signing_key": "GPH6XXXXXXXXXXXXXXXXXXXXXXXX",
"vote_id": "1:233",
"total_votes": 204540615,
"url": ""
}
1.6.234 is not producing blocks and has no "pay_vb".
1.6.234 is active.
Is it because 1.6.234 was a delegate in the genesis block?
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-31-testnet-genesis.json -s 104.236.118.105:1776 --witness-id '"1.6.234"' --private-key '["GP...................................oy", "5KW.......................P8a"]'
get_witness airdrop
{
"id": "1.6.234",
"witness_account": "1.2.1854",
"last_aslot": 0,
"signing_key": "GPH6XXXXXXXXXXXXXXXXXXXXXXXX",
"vote_id": "1:233",
"total_votes": 204540615,
"url": ""
}
1.6.234 is not producing blocks and has no "pay_vb".
1.6.234 is active.
Is it because 1.6.234 was a delegate in the genesis block?
You have to launch the witness node with the proper options. You can either put it all on the command line, like:
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json aug-31-testnet-genesis.json -s 104.236.118.105:1776 --witness-id '"1.6.234"' --private-key '["GP...................................oy", "5KW.......................P8a"]'
or you can add these parameters to the config.ini.
I hope that's what you were asking.
wget https://github.com/cryptonomex/graphene/releases/download/test2b/aug-31-testnet-genesis.json
I finally found the right chain after test 1 ;)
I am joining in the testing.
ID: 1.6.624
Witness Account: 1.2.8112
Account: bitcube
Can someone send me some CORE and vote me in?
855000ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
855000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441361655009392 next_second: 2015-09-04T10:14:16
856001ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
856001ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441361656010598 next_second: 2015-09-04T10:14:17
857001ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
...
how to join? is there any guide?
I keep getting this but not producing blocks.
855000ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
855000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441361655009392 next_second: 2015-09-04T10:14:16
856001ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
856001ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441361656010598 next_second: 2015-09-04T10:14:17
857001ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
...
What does it mean by "Not producing block because it isn't my turn"?
witness_node -d testNet2 --resync-blockchain
and
rm testNet2/blockchain/ testNet2/p2p/ testNet2/logs/ object_database/ -fr
witness_node -d testNet2
./witness_node --rpc-endpoint "192.168.1.11:8090" -d test_net_2 -s "104.236.118.105:1776" --genesis-json aug-31-testnet-genesis.json
I keep getting this but not producing blocks.
855000ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
855000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441361655009392 next_second: 2015-09-04T10:14:16
856001ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
856001ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441361656010598 next_second: 2015-09-04T10:14:17
857001ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
...
What does it mean by "Not producing block because it isn't my turn"?
get_witness riverhead
{
"id": "1.6.3968",
"witness_account": "1.2.67159",
"last_aslot": 58732,
"signing_key": "GPH6BJYGHftujnbttFFKX6YacnvsMd4sbJrbucg682GiU4vmXHTik",
"pay_vb": "1.13.178",
"vote_id": "1:3967",
"total_votes": 1514765,
"url": ""
}
unlocked >>> get_object "1.13.178"
get_object "1.13.178"
[{
"id": "1.13.178",
"owner": "1.2.67159",
"balance": {
"amount": 96000000,
"asset_id": "1.3.0"
},
"policy": [
1,{
"vesting_seconds": 86400,
"start_claim": "1970-01-01T00:00:00",
"coin_seconds_earned": "2534355000000",
"coin_seconds_earned_last_update": "2015-09-04T10:32:25"
}
]
}
]
how to join? is there any guide?
Check out this guide : https://github.com/cryptonomex/graphene/wiki/Howto-become-an-active-witness-in-BitShares-2.0
I could use some CORE for spam if possible. -> delegate.xeldal
and some votes for 1.6.1625 also, i'm guessing "total_votes" shouldn't be 0
{
"id": "1.6.1625",
...
"total_votes": 0,
"url": ""
}
you have to run the cli_wallet(!!!) with
-H 127.0.0.1:8092
and unlock your wallet.
cli_wallet -w test2b --chain-id ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3 -H 127.0.0.1:8092
Logging RPC to file: logs/rpc/rpc.log
2022756ms th_a main.cpp:111 main ] key_to_wif( committee_private_key ): 5KCBDTcyDqzsqehcb52tW5nU6pXife6V2rX9Yf7c3saYSzbDZ5W
2022757ms th_a main.cpp:115 main ] nathan_pub_key: GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
2022757ms th_a main.cpp:116 main ] key_to_wif( nathan_private_key ): 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
2022760ms th_a main.cpp:163 main ] wdata.ws_server: ws://localhost:8090
0 exception: unspecified
Underlying Transport Error
{"message":"Underlying Transport Error"}
asio websocket.cpp:431 operator()
{"uri":"ws://localhost:8090"}
th_a websocket.cpp:616 connect
./cli_wallet -H 127.0.0.1:8092
Logging RPC to file: logs/rpc/rpc.log
2052250ms th_a main.cpp:111 main ] key_to_wif( committee_private_key ): 5KCBDTcyDqzsqehcb52tW5nU6pXife6V2rX9Yf7c3saYSzbDZ5W
2052250ms th_a main.cpp:115 main ] nathan_pub_key: GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
2052250ms th_a main.cpp:116 main ] key_to_wif( nathan_private_key ): 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
Starting a new wallet with chain ID 6d2141a7a5577221d3681a4b6296e330f77a4668ddcd78fbbe4fbe966e54bccc (from egenesis)
2052251ms th_a main.cpp:163 main ] wdata.ws_server: ws://localhost:8090
0 exception: unspecified
Underlying Transport Error
{"message":"Underlying Transport Error"}
asio websocket.cpp:431 operator()
{"uri":"ws://localhost:8090"}
th_a websocket.cpp:616 connect
Download this Aug-14 snapshot of BitShares:
https://drive.google.com/open?id=0B_GVo0GoC_v_S3lPOWlUbFJFWTQ
If you have a mac, download the draft version of BitShares 0.9.2 which has a new api call
https://github.com/bitshares/bitshares/releases/tag/untagged-4166986045ff28284dc4
The pay_vb shows the vesting balance object ID. If that is accruing funds you're producing blocks.
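If you'd rather not eyeball the wallet, you can poll that vesting object from a script. A sketch, assuming the cli_wallet was started with -H 127.0.0.1:8092 and accepts plain JSON-RPC over HTTP (the style python-graphenelib uses); the helper names here are my own:

```python
import json

def rpc_payload(method, params):
    """Build a JSON-RPC request body for the cli_wallet HTTP endpoint."""
    return json.dumps({"jsonrpc": "2.0", "method": method, "params": params, "id": 1})

def fetch_vesting_amount(object_id, url="http://127.0.0.1:8092"):
    """Return the current amount in a vesting balance object such as 1.13.163."""
    import requests  # third-party; installed earlier via pip3 install requests
    r = requests.post(url, data=rpc_payload("get_object", [object_id]))
    return r.json()["result"][0]["balance"]["amount"]

# e.g. call fetch_vesting_amount("1.13.163") every minute; if the amount
# keeps rising, your witness is producing blocks.
```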
info
{
"head_block_num": 56624,
"head_block_id": "0000dd308073d87ccb3b0d739f07ebfb6842e95c",
"head_block_age": "0 second old",
"next_maintenance_time": "4 minutes in the future",
"chain_id": "ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3",
...
get_witness bitcube
{
"id": "1.6.624",
"witness_account": "1.2.8112",
"last_aslot": 0,
"signing_key": "GPxxxxxxaaaaxxxx",
"vote_id": "1:623",
"total_votes": "64885994140",
"url": ""
}
When you first start your witness node does it give you any errors (in blue) about the pub/priv key pair you specified for your witness?
Your slots being 0 probably means you aren't actually signing blocks however you have votes so your witness is good to go.
When you first start your witness node does it give you any errors (in blue) about the pub/priv key pair you specified for your witness?
Your slots being 0 probably means you aren't actually signing blocks however you have votes so your witness is good to go.
I think I found the answer. It is not voted in yet. Please help vote it in.
You have way more votes than me - not sure why you don't show up in the list. You still don't have an assigned slot either. Are you sure your witness_node recognizes your keys?
get_witness bitcube
{
"id": "1.6.624",
"witness_account": "1.2.8112",
"last_aslot": 0,
"signing_key": "GPxxxx-PUBLICKEY",
"vote_id": "1:623",
"total_votes": "64884389452",
"url": ""
}
dump_private_keys
[[
"GPxxxx-PUBLICKEY",
"5Jxxx-PRIVATEKEY"
],[
"GPxxxx-PUBLICKEY2",
"5Jxxx-PRIVATEKEY2"
]
]
# ID of witness controlled by this node (e.g. "1.6.5", quotes are required, may specify multiple times)
witness-id = "1.6.624"
# Tuple of [PublicKey, WIF private key] (may specify multiple times)
private-key = ["GPxxxx-PUBLICKEY","5Jxxx-PRIVATEKEY"]
57981ms th_a witness.cpp:84 plugin_initialize ] witness plugin: plugin_initialize() begin
57981ms th_a witness.cpp:94 plugin_initialize ] key_id_to_wif_pair: ["GPxxxx-PUBLICKEY","5Jxxx-PRIVATEKEY"]
57982ms th_a witness.cpp:112 plugin_initialize ] witness plugin: plugin_initialize() end
58201ms th_a application.cpp:228 operator() ] Initializing database...
get_witness bitcube
{
"id": "1.6.624",
"witness_account": "1.2.8112",
"last_aslot": 0,
"signing_key": "GPH7qbi1...y..sn",
"vote_id": "1:623",
"total_votes": "64885994140",
"url": ""
}
What is your Chain ID?
ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3
I show a different vote count for you:
get_witness bitcube
{
"id": "1.6.624",
"witness_account": "1.2.8112",
"last_aslot": 0,
"signing_key": "GPH7qbi1...y..sn",
"vote_id": "1:623",
"total_votes": "64885994140",
"url": ""
}
{
"head_block_num": 56624,
"head_block_id": "0000dd308073d87ccb3b0d739f07ebfb6842e95c",
"head_block_age": "0 second old",
"next_maintenance_time": "4 minutes in the future",
"chain_id": "ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3",
...
"total_votes": "64885994140",
you have to run the cli_wallet(!!!) with
-H 127.0.0.1:8092
and unlock your wallet.
My wallet will not run with this flag. Is there anything else needed? My two attempts:
cli_wallet -w test2b --chain-id ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3 -H 127.0.0.1:8092
Logging RPC to file: logs/rpc/rpc.log
2022756ms th_a main.cpp:111 main ] key_to_wif( committee_private_key ): 5KCBDTcyDqzsqehcb52tW5nU6pXife6V2rX9Yf7c3saYSzbDZ5W
2022757ms th_a main.cpp:115 main ] nathan_pub_key: GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
2022757ms th_a main.cpp:116 main ] key_to_wif( nathan_private_key ): 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
2022760ms th_a main.cpp:163 main ] wdata.ws_server: ws://localhost:8090
0 exception: unspecified
Underlying Transport Error
{"message":"Underlying Transport Error"}
asio websocket.cpp:431 operator()
{"uri":"ws://localhost:8090"}
th_a websocket.cpp:616 connect
./cli_wallet -H 127.0.0.1:8092
Logging RPC to file: logs/rpc/rpc.log
2052250ms th_a main.cpp:111 main ] key_to_wif( committee_private_key ): 5KCBDTcyDqzsqehcb52tW5nU6pXife6V2rX9Yf7c3saYSzbDZ5W
2052250ms th_a main.cpp:115 main ] nathan_pub_key: GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
2052250ms th_a main.cpp:116 main ] key_to_wif( nathan_private_key ): 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
Starting a new wallet with chain ID 6d2141a7a5577221d3681a4b6296e330f77a4668ddcd78fbbe4fbe966e54bccc (from egenesis)
2052251ms th_a main.cpp:163 main ] wdata.ws_server: ws://localhost:8090
0 exception: unspecified
Underlying Transport Error
{"message":"Underlying Transport Error"}
asio websocket.cpp:431 operator()
{"uri":"ws://localhost:8090"}
th_a websocket.cpp:616 connect
The only other thing I can think of is in the witness config.ini file there is a parameter called enable-stale-production which needs to be false. I think that's the default now so that's probably not the issue.
Sorry for that .. I removed it from the wiki
The only other thing I can think of is in the witness config.ini file there is a parameter called enable-stale-production which needs to be false. I think that's the default now so that's probably not the issue.
I am using 'enable-stale-production' as described in xeroc's guide. I am going to try without it.
I am joining in the testing.
ID: 1.6.624
Witness Account: 1.2.8112
Account: bitcube
Can someone send me some CORE and vote me in?
how to join? is there any guide?
Maybe I made myself a little unclear .. you need to run a witness_node and a cli_wallet ..
The setup would look like:
Witness (port 8090) <---> cli_wallet (port:8092) <-----> python
The only other thing I can think of is in the witness config.ini file there is a parameter called enable-stale-production which needs to be false. I think that's the default now so that's probably not the issue.
The brave among you can try to interface with the cli_wallet with
https://github.com/xeroc/python-graphenelib
and run the examples/flood.py script.
you have to run the cli_wallet(!!!) with
-H 127.0.0.1:8092
and unlock your wallet.
Again: Do not try to interface with the witness_node, but instead use the cli_wallet and make sure to have it unlocked!!
screen
./cli_wallet -w test_wallet --chain-id ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3 -H 127.0.0.1:8092
Ctr A Ctr D
sudo apt-get install python3-setuptools
sudo easy_install3 pip
sudo pip3 install autobahn
sudo pip3 install requests
cd ~
git clone https://github.com/xeroc/python-graphenelib.git
cd python*
python3 setup.py install --user
cd examples
nano flood.py
edit the client.transfer("putyouruserhere" line to use your account
save and exit
python3 flood.py
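The steps above can also be sketched without python-graphenelib by hitting the unlocked cli_wallet's HTTP-RPC directly. Account names below are placeholders, and the parameter order (from, to, amount, asset_symbol, memo, broadcast) is assumed from the cli_wallet's transfer command:

```python
import json

def transfer_call(frm, to, amount, symbol="CORE", memo="spam"):
    """JSON-RPC body for the wallet's transfer(from, to, amount, symbol, memo, broadcast)."""
    return json.dumps({"jsonrpc": "2.0", "id": 1, "method": "transfer",
                       "params": [frm, to, amount, symbol, memo, True]})

def flood(n, frm="youraccount", to="betaxtrade", url="http://127.0.0.1:8092"):
    """Fire n small transfers at an unlocked cli_wallet listening on 8092."""
    import requests, time  # requests is third-party, installed in the steps above
    for _ in range(n):
        requests.post(url, data=transfer_call(frm, to, "0.05"))
        time.sleep(0.05)  # ~20 tx/s per wallet, mirroring the autokey pacing
```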
@betax: nice summary :)
150520ms th_a application.cpp:500 get_blockchain_synop ] reference_point: 0000000000000000000000000000000000000000 number_of_blocks_after_reference_point: 0 result: ["000065b83e3d65a6cb6a68899b097d74dec01f05","0000a5b8499b89f1c7b0673b3bdf7fa92adb14d3","0000c5b8f361ee90850ddf75802f1672ff8c86a0","0000d5b85570d0422e4ad4c1ad13f361c6f464ce","0000ddb8028dc5146068837071d078c6ab12ec1f","0000e1b8ee015c5af5454b6749c8780fb21b626a","0000e3b84b50a30b086f3dfe10c14c57f87b0651","0000e4b8cabfb6ce6acbfa7dcff0ed42a69e8c23","0000e538ce0a9fdadf1b63e1889a3fc56e6fb5f9","0000e578065ea22796197c18c49653d7c057e7ae","0000e598f0097652f2038a60a4aa50c8c3d03a44","0000e5a8cedeb400c4c39380218087f0723e36d0","0000e5b065697016b88d9b1496b0e7701c5ecef4","0000e5b41632696f926b0ae2b02471e20a4711c0","0000e5b65e16b5dd65f2bfb2594ed105c440917a","0000e5b7915bd40faf619c8033500917be4a46f8","0000e5b809c38a7198be347bee54fbc90ba4c2bf"]
150538ms th_a application.cpp:391 handle_transaction ] Got transaction from network
150618ms th_a application.cpp:391 handle_transaction ] Got transaction from network
150672ms th_a application.cpp:391 handle_transaction ] Got transaction from network
witness_node: /home/calabiyau/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Aborted (core dumped)
1058066ms th_a wallet.cpp:1574 sign_transaction ] Caught exception while broadcasting transaction with id 17fb6ef2ebb8dbbe3c8ff81416f1398889c152c6
Running xeroc's flood.py, my client (running on a different computer than the witness) shows:
1058066ms th_a wallet.cpp:1574 sign_transaction ] Caught exception while broadcasting transaction with id 17fb6ef2ebb8dbbe3c8ff81416f1398889c152c6
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
403018.82583 CORE
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
403018.82583 CORE
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
403018.82583 CORE
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
402391.87263 CORE
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
402329.17731 CORE
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
404711.59947 CORE
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
404460.81819 CORE
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
404439.91975 CORE
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
404210.03691 CORE
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
404210.03691 CORE
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
404210.03691 CORE
unlocked >>> list_account_balances betaxtrade
list_account_balances betaxtrade
404419.02131 CORE
Witness ID: 1.6.1063
Witness Acc: 1.2.14540
Some votes and CORE for "calabiyau" would help to join the party.......
Witness ID: 1.6.1063
Witness Acc: 1.2.14540
Some votes and CORE for "calabiyau" would help to join the party.......
ill flood you some and see if they arrive
Witness ID: 1.6.1063
Witness Acc: 1.2.14540
Some votes and CORE for "calabiyau" would help to join the party.......
ill flood you some and see if they arrive
Flood some to riverhead too :D. I will flood them back.
Witness ID: 1.6.1063
Witness Acc: 1.2.14540
Some votes and CORE for "calabiyau" would help to join the party.......
ill flood you some and see if they arrive
Still not able to vote:
Insufficient Balance: calabiyau's balance of 0.02025 CORE is less than required 20.05761 CORE
CORE seems to be already a scarce resource :) - WORKING FOR CORE
Sending you some now.
"head_block_num": 59289,
"head_block_id": "0000e799c87dc7846ae96dc6f3a24dcf58f663a2",
"head_block_age": "2 hours old",
"next_maintenance_time": "2 hours ago",
"chain_id": "ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3",
I don't have any more... :( betaxtrade please
2015-09-04T18:09:35 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:09:35 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:09:35 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:09:35 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:09:35 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:09:35 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:09:35 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:09:35 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:09:35 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:09:35 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:02:45 Transfer 1 CORE from puppies to calabiyau (Fee: 20 CORE)
2015-09-04T18:01:55 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:01:45 Transfer 1 CORE from puppies to calabiyau (Fee: 20 CORE)
2015-09-04T18:01:20 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T18:00:50 Transfer 1 CORE from puppies to calabiyau (Fee: 20 CORE)
2015-09-04T17:59:40 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
2015-09-04T17:59:00 Transfer 1 CORE from puppies to calabiyau (Fee: 20 CORE)
2015-09-04T17:58:45 Transfer 0.05000 CORE from calabiyau to betaxtrade (Fee: 20 CORE)
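A quick sanity check on why balances drain so fast during the spam party: each transfer above moves 0.05 CORE but pays a flat 20 CORE fee (my own arithmetic, not a wallet command):

```python
FEE = 20.0      # flat transfer fee observed in the history above
AMOUNT = 0.05   # CORE moved per spam transfer

def transfers_until_broke(balance):
    """How many spam transfers a CORE balance can fund, fees included."""
    return int(balance // (FEE + AMOUNT))

print(transfers_until_broke(500_000))  # the 500K CORE grant -> 24937
```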
how to set witness-id? I set witness-id = "1.6.5" in config.ini, is it ok?
Looks like you're on a minor fork.
I get the follow info:
379000ms th_a witness.cpp:179 block_production_loo ] Not producing block because it isn't my turn
379000ms th_a witness.cpp:146 schedule_production_ ] now.time_since_epoch().count(): 1441375579000817 next_second: 2015-09-04T14:06:20
380000ms th_a witness.cpp:194 block_production_loo ] Not producing block because the last block was generated by the same witness.
This node is probably disconnected from the network so block production has been disabled.
Disable this check with --allow-consecutive option.
380000ms th_a witness.cpp:146 schedule_production_ ] now.time_since_epoch().count(): 1441375580000857 next_second: 2015-09-04T14:06:21
381000ms th_a witness.cpp:191 block_production_loo ] Not producing block because node didn't wake up within 500ms of the slot time.
381000ms th_a witness.cpp:146 schedule_production_ ] now.time_since_epoch().count(): 1441375581000983 next_second: 2015-09-04T14:06:22
382000ms th_a witness.cpp:191 block_production_loo ] Not producing block because node didn't wake up within 500ms of the slot time.
382000ms th_a witness.cpp:146 schedule_production_ ] now.time_since_epoch().count(): 1441375582000927 next_second: 2015-09-04T14:06:23
383000ms th_a witness.cpp:191 block_production_loo ] Not producing block because node didn't wake up within 500ms of the slot time.
383000ms th_a witness.cpp:146 schedule_production_ ] now.time_since_epoch().count(): 1441375583000805 next_second: 2015-09-04T14:06:24
384000ms th_a witness.cpp:191 block_production_loo ] Not producing block because node didn't wake up within 500ms of the slot time.
384000ms th_a witness.cpp:146 schedule_production_ ] now.time_since_epoch().count(): 1441375584000792 next_second: 2015-09-04T14:06:25
385000ms th_a witness.cpp:179 block_production_loo ] Not producing block because it isn't my turn
what's wrong with it?
Polite request for some CORE to continue testing.
Thanks in advance!
Account name "fox"
Sent you 20k. I am pretty low as well.
Most appreciated! Putting them to good use shortly...
Me too about the sync status. Looks like it's caused by a few (different) reasons. I posted all my related logs to the issues page on GitHub. You can check your log files.
My home node lost sync three times during testing, requiring a resync. My witness node lost sync once as well. Is the reason for these issues already known? If not, what would be the best way to help figure this out? It seems to be happening mainly during stress testing.
check now.
So a couple people have said they are sending me CORE via flood scripts or otherwise. I haven't received any since I ran out of my original funds. If it's the case that no one has sent CORE to riverhead that's cool but I just want to make sure it wasn't sent and not received.
My witness node is sync'd and seems to be on the active node. Anyone else having trouble receiving funds?
unlocked >>> get_object 2.6.63354
get_object 2.6.63354
[{
"id": "2.6.63354",
"owner": "1.2.63354",
"most_recent_op": "2.9.164629",
"total_core_in_orders": 0,
"lifetime_fees_paid": "105566827099",
"pending_fees": 0,
"pending_vested_fees": 0
}
]
unlocked >>>
The vast majority will be eaten as fees.
Try a resync, and don't start with the parameter '--enable-stale-production'.
The only other thing I can think of is in the witness config.ini file there is a parameter called enable-stale-production which needs to be false. I think that's the default now so that's probably not the issue.
I removed 'enable-stale-production' and restarted after a full blockchain resync. I am still not in the active list. Any idea?
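For reference, a minimal witness fragment of config.ini consistent with the discussion above. The ID and key values are placeholders, not working credentials; treat this as a sketch of the shape, not a verified default file:

```ini
# Hypothetical config.ini fragment for one witness (IDs and keys are placeholders).
# witness-id is the quoted object ID of your witness; private-key pairs the
# public block-signing key with its WIF private key.
witness-id = "1.6.5"
private-key = ["GPH_PUBLIC_SIGNING_KEY","5_WIF_PRIVATE_KEY"]

# Leave stale production off: enabling it on a node that is out of sync
# is how you end up producing on a minor fork.
enable-stale-production = false
```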
My VPS is dead :(
What are the specs of your VPS? I'm interested to learn what the bottleneck is: RAM, disk IOPS, network I/O, or CPU.
Thanks to a pile of transactions? :D
I sent (flood) you lots yesterday, did you get any?
I guess you did; I got lots back. I am again up and running, flooding to riverhead, xeldal and calabiyau. If anybody wants to be added in the loop, let me know before I run out or leave for the weekend.
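For anyone curious what such a flood loop boils down to, here is a minimal sketch. The account names, amounts, and the HTTP endpoint are assumptions (cli_wallet would need to be started with an HTTP-RPC option such as -H 127.0.0.1:8092); this only builds the JSON-RPC requests and does not send them:

```python
import json

# Hypothetical flood-script sketch: each JSON-RPC "transfer" call below,
# POSTed to a cli_wallet HTTP endpoint, would submit one tiny transfer.
# Names, amounts, and the endpoint are assumptions, not thread-verified values.
def transfer_call(sender, receiver, amount, asset="CORE", memo="", call_id=1):
    """Build one JSON-RPC 'transfer' request body for the wallet API."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": "transfer",
        # trailing True = broadcast the signed transaction immediately
        "params": [sender, receiver, amount, asset, memo, True],
        "id": call_id,
    })

# A flood run is just many of these in a loop:
payloads = [transfer_call("betaxtrade", "calabiyau", "0.00001", call_id=i)
            for i in range(100)]
```

Actually delivering the payloads is a matter of POSTing each one to the wallet endpoint in sequence.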
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
2015-09-05T07:03:30 Transfer 0.00001 CORE from betaxtrade to calabiyau -- Memo: memo (Fee: 20.89843 CORE)
So I'll wait for the >2 million transactions to complete before making one myself ;)
You should get a lower fee if you don't use a memo; just use "" instead. At least I think you will have a lower fee.
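The two fee levels seen in the logs (20 CORE without a memo, 20.89843 CORE with one) are consistent with a flat base fee plus a per-kilobyte charge on the serialized memo. The sketch below reproduces those figures under that assumption; the constants are inferred from the logs, not taken from the actual fee schedule, and the ~46-byte serialized memo size (text plus keys and nonce) is likewise an assumption:

```python
# Hypothetical fee model (constants inferred from the transfer logs above,
# NOT the real Graphene fee schedule): flat base fee plus a per-kilobyte
# surcharge covering the serialized encrypted memo.
BASE_FEE = 20.0          # CORE, observed for memo-less transfers
PRICE_PER_KBYTE = 20.0   # CORE per kB of memo data (assumed)

def transfer_fee(memo_bytes: int) -> float:
    """Estimate the fee for a transfer carrying memo_bytes of memo data."""
    if memo_bytes == 0:
        return BASE_FEE
    return BASE_FEE + PRICE_PER_KBYTE * (memo_bytes / 1024)
```

With a serialized memo of roughly 46 bytes, this gives 20 + 20*46/1024 ≈ 20.898, matching the 20.89843 CORE fee in the flood output.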
{}
th_a fork_database.cpp:194 fetch_branch_from
{"first":"00012d7030b8d0cf678b14e3b34dec6158526249","second":"000123cd9d50e68bef1f464076ac87fd23b78bb9"}
th_a fork_database.cpp:225 fetch_branch_from
{"new_block":{"previous":"00012d6f566077aab18d0405083fbc54d8ea8f00","timestamp":"2015-09-05T22:44:35","witness":"1.6.18","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f18adf004de72f4ef5aa51f8cfe25eac90fd4514684576e3ed225f4f2f4a508e3463f452e5d9d7dca7870819c94d6f4a77cac3f37a24da73198101ba43244af02","transactions":[]}}
th_a db_block.cpp:176 _push_block
witness_node: /home/user/src/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Program received signal SIGABRT, Aborted.
0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007ffff6c050d8 in __GI_abort () at abort.c:89
#2 0x00007ffff6bfab86 in __assert_fail_base (fmt=0x7ffff6d4b830 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=assertion@entry=0x2a7d2c8 "std::current_exception() == std::exception_ptr()",
file=file@entry=0x2a7d1a8 "/home/user/src/graphene/libraries/fc/src/thread/thread_d.hpp", line=line@entry=370,
function=function@entry=0x2a7df80 <fc::thread_d::start_next_fiber(bool)::__PRETTY_FUNCTION__> "bool fc::thread_d::start_next_fiber(bool)") at assert.c:92
#3 0x00007ffff6bfac32 in __GI___assert_fail (assertion=0x2a7d2c8 "std::current_exception() == std::exception_ptr()",
file=0x2a7d1a8 "/home/user/src/graphene/libraries/fc/src/thread/thread_d.hpp", line=370,
function=0x2a7df80 <fc::thread_d::start_next_fiber(bool)::__PRETTY_FUNCTION__> "bool fc::thread_d::start_next_fiber(bool)") at assert.c:101
#4 0x000000000252efbf in fc::thread_d::start_next_fiber(bool) ()
#5 0x0000000002528c2d in fc::thread::yield(bool) ()
#6 0x000000000252989a in fc::yield() ()
#7 0x000000000253c3ed in fc::spin_yield_lock::lock() ()
#8 0x000000000253ad74 in fc::unique_lock<fc::spin_yield_lock&>::lock() ()
#9 0x000000000253ac8f in fc::unique_lock<fc::spin_yield_lock&>::unique_lock(fc::spin_yield_lock&) ()
#10 0x000000000253aa96 in fc::promise_base::_set_value(void const*) ()
#11 0x000000000253a360 in fc::promise_base::set_exception(std::shared_ptr<fc::exception> const&) ()
#12 0x000000000253b627 in fc::task_base::run_impl() ()
#13 0x000000000253b114 in fc::task_base::run() ()
#14 0x000000000252fb34 in fc::thread_d::run_next_task() ()
#15 0x000000000252ffd8 in fc::thread_d::process_tasks() ()
#16 0x000000000252f64b in fc::thread_d::start_process_tasks(long) ()
#17 0x000000000288b571 in make_fcontext ()
#18 0x0000000000000000 in ?? ()
(gdb)
I am working on a solution for getting knocked out of sync. Syncing issues will be a thing of the past ;)
3080422ms th_a application.cpp:356 handle_block ] Got block #92762 from network
3080422ms th_a application.cpp:378 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
{}
th_a fork_database.cpp:70 _push_block
{"new_block":{"previous":"00016a59c15917da032053a4df45edfa7171af68","timestamp":"2015-09-07T00:51:20","witness":"1.6.89","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f656d75f66d9dea393c463d259e0abbc2fa5a22bf8b13292e49335397a88df9180f36314007fb1738622112d4e24770a07c32e4a6decb691b3286d88d22299858","transactions":[]}}
th_a db_block.cpp:176 _push_block
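The failed assertion in that log is a bound inside the fork database. A minimal sketch of the check it enforces, where the constant's value is an assumption (only the shape of the condition is taken from the log; the real constant lives in fork_database):

```python
# Sketch of the fork-database guard from the assertion above: a block whose
# number is too far past the current head is rejected rather than queued,
# bounding how much reordering the fork database must ever track.
MAX_BLOCK_REORDERING = 1024  # assumed value, for illustration only

def accepts_block(item_num: int, head_num: int) -> bool:
    return item_num < head_num + MAX_BLOCK_REORDERING
```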
hey... could somebody send me 10k CORE for upgrading my testnet account? I will pay it all back for sure.
My 2.0 account to send to: graphene
10k CORE sent
FYI: Resyncing 94000 blocks takes about 8 min (3.75 GB memory, no SSD) without any performance lags.
So with 1-second blocks it will take about 8 minutes to sync each day, or 2 days to sync each year?
Thanks to @clayop I now have an upgraded account I'd like to turn into a witness. Apparently I am 4,000 Core short of being able to create one - anyone be able to spare a few? :) Thx
Account: "e-v"
I have run out of CORE after the flooding... :(
With fees so high & supply so low, ammunition for stress testing is very limited.
Can we tweak the fees ?
yep same on my side ^^
I've sent both graphene and e-v 4000 CORE for account registration.
I think it will take less, because this testnet already has a considerable amount of spam transactions.
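A quick back-of-envelope check of the sync-time estimate above, using only numbers from the thread (~94,000 blocks resynced in ~8 minutes, and 86,400 one-second blocks per day):

```python
# Sync-throughput arithmetic for the "8 minutes per day, 2 days per year"
# estimate above; all inputs come from the thread, the rest is arithmetic.
blocks_synced = 94_000
sync_minutes = 8
blocks_per_day = 24 * 60 * 60   # 86,400 blocks/day at 1-second intervals

minutes_per_day_of_chain = sync_minutes * blocks_per_day / blocks_synced
days_to_sync_a_year = minutes_per_day_of_chain * 365 / (60 * 24)
```

This comes out to roughly 7.4 minutes of resync per chain-day, or about 1.9 days per chain-year, so the estimate in the post is in the right ballpark (assuming sync speed stays roughly constant as the chain grows).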
I have 100K CORE for testing to give to 10 people (10K per person)
Tell me your nick and I will send to the first ten people.
unlocked >>> get_account_history spartako 2
get_account_history spartako 2
2015-09-07T14:41:05 Transfer 10000 CORE from spartako to calabiyau -- could not decrypt memo (Fee: 20.89843 CORE)
2015-09-07T14:39:50 Transfer 10000 CORE from spartako to graphene -- could not decrypt memo (Fee: 20.89843 CORE)
Is it OK that I am not able to see the memo value?
2015-09-07T14:41:05 Transfer 10000 CORE from spartako to calabiyau -- Memo: testing (Fee: 20.89843 CORE)
That's how it looks here - thx
199164ms th_a thread.cpp:95 thread ] name:p2p tid:4535488512
199164ms ntp ntp.cpp:77 request_now ] resolving... ["pool.ntp.org",123]
199199ms ntp ntp.cpp:81 request_now ] sending request to 213.136.0.252:123
199199ms th_a application.cpp:117 reset_p2p_node ] Adding seed node 104.236.118.105:1776
199200ms th_a application.cpp:129 reset_p2p_node ] Configured p2p node to listen on 0.0.0.0:51700
199202ms th_a application.cpp:179 reset_websocket_serv ] Configured websocket rpc to listen on 127.0.0.1:8090
199203ms th_a witness.cpp:117 plugin_startup ] witness plugin: plugin_startup() begin
199203ms th_a witness.cpp:124 plugin_startup ] Launching block production for 1 witnesses.
199203ms th_a witness.cpp:146 schedule_production_ ] now.time_since_epoch().count(): 1441638199203280 next_second: 2015-09-07T15:03:20
199203ms th_a witness.cpp:131 plugin_startup ] witness plugin: plugin_startup() end
199203ms th_a main.cpp:165 main ] Started witness node on a chain with 0 blocks.
199203ms th_a main.cpp:166 main ] Chain ID is 57a6462f35bb7fe448d22b3a8b61dd67663bfdd00b4e4e969f0f5b502813b6c5
199313ms th_a api.cpp:40 database_api ] creating database api 4527827232
199314ms th_a api.cpp:40 database_api ] creating database api 4527828384
200000ms th_a witness.cpp:176 block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
200000ms th_a witness.cpp:146 schedule_production_ ] now.time_since_epoch().count(): 1441638200000180 next_second: 2015-09-07T15:03:21
200574ms ntp ntp.cpp:147 read_loop ] received ntp reply from 213.136.0.252:123
200574ms ntp ntp.cpp:161 read_loop ] ntp offset: -670347, round_trip_delay 1375349
200575ms ntp ntp.cpp:166 read_loop ] received stale ntp reply requested at 2015-09-07T15:03:19, send a new time request
200575ms ntp ntp.cpp:77 request_now ] resolving... ["pool.ntp.org",123]
200576ms ntp ntp.cpp:81 request_now ] sending request to 129.250.35.250:123
200601ms ntp ntp.cpp:147 read_loop ] received ntp reply from 129.250.35.250:123
200601ms ntp ntp.cpp:161 read_loop ] ntp offset: 3814, round_trip_delay 25671
200601ms ntp ntp.cpp:177 read_loop ] ntp_delta_time updated to 3814
hey, when trying to start the witness node after registering etc., I'm getting this message. Am I doing anything wrong here?
graphene should be registered as a witness now!
923000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
923000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441638922998106 next_second: 2015-09-07T15:15:24
924000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
924001ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441638923999016 next_second: 2015-09-07T15:15:25
925000ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
925000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441638924998096 next_second: 2015-09-07T15:15:26
925122ms th_a application.cpp:356 handle_block ] Got block #101560 from network
926000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
926001ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441638925998416 next_second: 2015-09-07T15:15:27
999000ms th_a witness.cpp:223 block_production_loo ] Not producing block because slot has not yet arrived
999000ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441638998998117 next_second: 2015-09-07T15:16:40
1000004ms th_a witness.cpp:214 block_production_loo ] Generated block #{"n":101574,"t":"2015-09-07T15:16:40","c":"2015-09-07T15:16:40"} with timestamp {"n":101574,"t":"2015-09-07T15:16:40","c":"2015-09-07T15:16:40"} at time {"n":101574,"t":"2015-09-07T15:16:40","c":"2015-09-07T15:16:40"}
1000004ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1441639000001898 next_second: 2015-09-07T15:16:41
apt-get install ntp
Almost there with my witness - just having trouble finding/adding my witness private key. My witness signing key (starting GPH5..) isn't listed in the array output from:
dump_private_keys
[[
"GPH6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxV",
"5xxx<private>"
],[
"GPH6xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxj",
"5xxx<private>"
],[
"GPH7xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxa",
"5xxx<private>"
]
]
Is this simply because I don't have any/enough votes yet?
get_witness "e-v"
{
"id": "1.6.5249",
"witness_account": "1.2.25428",
"last_aslot": 0,
"signing_key": "GPH5xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxX",
"vote_id": "1:5465",
"total_votes": 0,
"url": "url-to-proposal"
}
If so I'd be grateful for some, as I'd like to see if my witness can sign blocks on the AWS free-tier VPS - all running fine on 1GB RAM so far :)
Voted in.
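If the old signing key's WIF really is gone, one hedged way out is to rotate to a fresh key rather than hunt for the old one. The cli_wallet commands below exist, but treat the exact arguments as a sketch and the key values as placeholders:

```
suggest_brain_key
update_witness e-v "url-to-proposal" GPH5NewPublicSigningKey true
import_key e-v 5JNewWifPrivateKey
```

After generating a key pair with suggest_brain_key, update_witness sets it as the witness's block-signing key, and import_key puts the matching private key into the wallet so dump_private_keys would then be expected to list it.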
Thanks @abit, it is still not showing up, so maybe you are right; I'll watch that issue. Sadly I didn't have the foresight to copy it down :-\
I'd like to suggest that you try starting a new chain, or maybe a new witness_node (don't sync with the current network), and create the witness again; maybe you'll get the keys. I noticed that my keys were the same every time in the past 4 test networks.
As there isn't a destroy_witness command I guess I'd have to start with a new account to generate a new witness - not sure I can face that right now!
Many thanks for your help anyhow
Hmm, maybe registering a new witness account is much easier.
Good plan, will try that.
I fixed some crashes with the RPC code today. In case anyone has experienced witness node crashes.
Will we need a new test net, or should this sync with the current network?
I think it is compatible, because I'm running the latest commit version.
Same here.
bitshares-argentina node is back with the last commit version.
I have seen that error when attempting to claim a key that doesn't have any balance; it is not going to be your signing key or active key that has a balance associated with it.
nathan seems to have only ~1900 CORE, not enough to vote me in.
Also getting this error when trying to import_balance from a pre-snapshot balance key:
2838767ms th_a wallet.cpp:2805 import_balance ] balances: []
10 assert_exception: Assert Exception
operations.size() > 0: A transaction must have at least one operation
{"trx":{"ref_block_num":0,"ref_block_prefix":0,"expiration":"1970-01-01T00:00:00","operations":[],"extensions":[]}}
th_a transaction.cpp:51 validate
{"name_or_id":"bitshares-argentina"}
th_a wallet.cpp:2846 import_balance
get_witness bitshares-argentina
{
"id": "1.6.708",
"witness_account": "1.2.8573",
"last_aslot": 0,
"signing_key": "GPH7B4bszRYW5SKGFUKuM6ta95MUR81ZsjUTyxLAn4Pezu5Ck9xsw",
"vote_id": "1:707",
"total_votes": 978271461,
"url": ""
}
Nice catch abit.
Is there a way to claim witness pay yet? It would help to fund testing :).
get_object "1.13.178"
[{
"id": "1.13.178",
"owner": "1.2.67159",
"balance": {
"amount": 437000000,
"asset_id": "1.3.0"
Yes, commands here:
get_vesting_balances in.abit
withdraw_vesting in.abit 100 CORE true
I think you're allowed to withdraw 4370 CORE.
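The 4370 figure follows directly from the raw object above. Graphene stores balances as integers in the asset's smallest unit; the 5-decimal precision for CORE below is inferred from the transfer amounts in this thread's logs, not from a published asset definition:

```python
# Converting a raw on-chain "amount" to CORE: balances are stored as
# integers in the asset's smallest unit. CORE precision of 5 decimal
# places is an inference from the transfer logs in this thread.
CORE_PRECISION = 5

def raw_to_core(raw_amount: int) -> float:
    return raw_amount / 10 ** CORE_PRECISION
```

So the vesting object's "amount": 437000000 is 4370 CORE, matching the withdrawable figure, and the later withdraw of 2395700000 raw units is 23957 CORE.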
unlocked >>> withdraw_vesting 1.13.173 23957 CORE true
withdraw_vesting 1.13.173 23957 CORE true
{
"ref_block_num": 46692,
"ref_block_prefix": 3121941031,
"expiration": "2015-09-08T09:14:30",
"operations": [[
33,{
"fee": {
"amount": 100000,
"asset_id": "1.3.0"
},
"vesting_balance": "1.13.173",
"owner": "1.2.63354",
"amount": {
"amount": 2395700000,
"asset_id": "1.3.0"
}
}
]
],
"extensions": [],
"signatures": [
"1f51a42c36cb50c3085808e14303d6e2edc407ef6d83f4d2524d438d10e4e5598818baccacf785f48f82ea8a8074975182eea23dd885ce7ac6088c2d219a19980a"
]
}
This will make having enough CORE to test much, much easier.
get_witness delegate.xeldal
{
"id": "1.6.1625",
"witness_account": "1.2.22412",
"last_aslot": 61295,
"signing_key": "G.................5",
"pay_vb": "1.13.179",
"vote_id": "1:1624",
"total_votes": "89849588541",
"url": ""
}
"head_block_num": 156986,
"head_block_id": "0002653a9daea9b627f0985b5f8f1061f3eb2afe",
"head_block_age": "7 seconds old",
"next_maintenance_time": "2 minutes in the future",
"chain_id": "ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3",
"active_witnesses": [
I've yet to see my witness produce a block.
Not sure what I'm doing wrong. The witness is all synced and it shows I've got votes but my ID is never listed under active witnesses and never produces anything. as far as I can tell.Code: [Select]get_witness delegate.xeldal
{
"id": "1.6.1625",
"witness_account": "1.2.22412",
"last_aslot": 61295,
"signing_key": "G.................5",
"pay_vb": "1.13.179",
"vote_id": "1:1624",
"total_votes": "89849588541",
"url": ""
}
Code: [Select]
"head_block_num": 156986,
"head_block_id": "0002653a9daea9b627f0985b5f8f1061f3eb2afe",
"head_block_age": "7 seconds old",
"next_maintenance_time": "2 minutes in the future",
"chain_id": "ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3",
"active_witnesses": [
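One quick sanity check for a witness like the one above is to compare its `id` against the chain's `active_witnesses` list. A hypothetical sketch (the dicts stand in for real `get_witness` / `info` RPC responses; the ids in `info` are illustrative only):

```python
# Sketch: a witness only gets production slots if its id appears in
# the chain's active_witnesses set. The data below mimics the RPC
# output shown above; in practice it would come from get_witness/info.
witness = {"id": "1.6.1625", "total_votes": "89849588541"}
info = {"active_witnesses": ["1.6.9", "1.6.48"]}  # illustrative ids

def is_active(witness: dict, info: dict) -> bool:
    """True if the witness is in the current active set."""
    return witness["id"] in info["active_witnesses"]

print(is_active(witness, info))  # -> False: not scheduled, so no blocks
```

Having votes is not enough; the witness has to rank high enough to enter the active set at a maintenance interval before it can produce.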
Updating in flight via gogo wifi. Witness ubiquitous 8)
World's first mile-high witness?
2015-09-12T18:10:33 th_a:invoke handle_transaction handle_transaction ] Got transaction from network application.cpp:422
2015-09-12T18:10:33 th_a:invoke handle_transaction handle_transaction ] Got transaction from network application.cpp:422
2015-09-12T18:10:33 th_a:invoke handle_transaction handle_transaction ] Got transaction from network application.cpp:422
2015-09-12T18:10:36 th_a:invoke handle_block handle_block ] Got block #171686 from network application.cpp:383
2015-09-12T18:10:36 th_a:invoke handle_block push_block ] new_block.block_num(): 171686 new_block.id(): 00029ea6553cbe880a9e0be95c9a0deaef46f1cf db_block.cpp:97
2015-09-12T18:10:36 th_a:invoke handle_block ~pending_transaction ] Pending transaction became invalid after switching to block 00029ea6553cbe880a9e0be95c9a0deaef46f1cf db_with.hpp:80
2015-09-12T18:10:36 th_a:invoke handle_block ~pending_transaction ] The invalid pending transaction is {"ref_block_num":40613,"ref_block_prefix":4073857719,"expiration":"2015-09-12T18:11:00","operations":[[5,{"fee":{"amount":14648,"asset_id":"1.3.0"},"registrar":"1.2.116","referrer":"1.2.116","referrer_percent":0,"name":"hello-test1","owner":{"weight_threshold":1,"account_auths":[],"key_auths":[["GPH7PuZQts1QJYkHzXvKA9hbZcww8o9xxmh2DNihXb9onb7jfPXv5",1]],"address_auths":[]},"active":{"weight_threshold":1,"account_auths":[],"key_auths":[["GPH5BTF7xh1RzGB9vL8Rvk6kcNr5EC3cRBAC32yyqs5Pas56Gwubz",1]],"address_auths":[]},"options":{"memo_key":"GPH5BTF7xh1RzGB9vL8Rvk6kcNr5EC3cRBAC32yyqs5Pas56Gwubz","voting_account":"1.2.5","num_witness":0,"num_committee":0,"votes":[],"extensions":[]},"extensions":[]}]],"extensions":[],"signatures":["1f35d7089c47caa4d3423b9779010495d8058e04ea6b5deeeac8aea9cc4e3d7720698e97dd6eb460ba9f7e8547f18f22173cfa4ee523bd1cca774f3c68c993d68d"],"operation_results":[[1,"1.2.97867"]]} db_with.hpp:81
2015-09-12T18:10:36 th_a:invoke handle_block ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":507,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-12T18:10:36"},"format":"(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end(): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":563,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-12T18:10:36"},"format":"","data":{"trx":{"ref_block_num":40613,"ref_block_prefix":4073857719,"expiration":"2015-09-12T18:11:00","operations":[[5,{"fee":{"amount":14648,"asset_id":"1.3.0"},"registrar":"1.2.116","referrer":"1.2.116","referrer_percent":0,"name":"hello-test1","owner":{"weight_threshold":1,"account_auths":[],"key_auths":[["GPH7PuZQts1QJYkHzXvKA9hbZcww8o9xxmh2DNihXb9onb7jfPXv5",1]],"address_auths":[]},"active":{"weight_threshold":1,"account_auths":[],"key_auths":[["GPH5BTF7xh1RzGB9vL8Rvk6kcNr5EC3cRBAC32yyqs5Pas56Gwubz",1]],"address_auths":[]},"options":{"memo_key":"GPH5BTF7xh1RzGB9vL8Rvk6kcNr5EC3cRBAC32yyqs5Pas56Gwubz","voting_account":"1.2.5","num_witness":0,"num_committee":0,"votes":[],"extensions":[]},"extensions":[]}]],"extensions":[],"signatures":["1f35d7089c47caa4d3423b9779010495d8058e04ea6b5deeeac8aea9cc4e3d7720698e97dd6eb460ba9f7e8547f18f22173cfa4ee523bd1cca774f3c68c993d68d"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":205,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-12T18:10:36"},"format":"","data":{"trx":{"ref_block_num":40613,"ref_block_prefix":4073857719,"expiration":"2015-09-12T18:11:00","operations":[[5,{"fee":{"amount":14648,"asset_id":"1.3.0"},"registrar":"1.2.116","referrer":"1.2.116","referrer_percent":0,"name":"hello-test1","owner":{"weight_threshold":1,"account_auths":[],"key_auths":[["GPH7PuZQts1QJYkHzXvKA9hbZcww8o9xxmh2DNihXb9onb7jfPXv5",1]],"address_auths":[]},"active":{"weight_threshold":1,"account_auths":[],"key_auths":[["GPH5BTF7xh1RzGB9vL8Rvk6kcNr5EC3cRBAC32yyqs5Pas56Gwubz",1]],"address_auths":[]},"options":{"memo_key":"GPH5BTF7xh1RzGB9vL8Rvk6kcNr5EC3cRBAC32yyqs5Pas56Gwubz","voting_account":"1.2.5","num_witness":0,"num_committee":0,"votes":[],"extensions":[]},"extensions":[]}]],"extensions":[],"signatures":["1f35d7089c47caa4d3423b9779010495d8058e04ea6b5deeeac8aea9cc4e3d7720698e97dd6eb460ba9f7e8547f18f22173cfa4ee523bd1cca774f3c68c993d68d"]}}}]} db_with.hpp:82
2015-09-12T18:10:45 th_a:invoke handle_block handle_block ] Got block #171687 from network application.cpp:383
2015-09-12T18:10:45 th_a:invoke handle_block push_block ] new_block.block_num(): 171687 new_block.id(): 00029ea7c3d673800f20efc331f42b4235cee25c db_block.cpp:97
2015-09-12T18:10:45 th_a:invoke handle_block ~pending_transaction ] Pending transaction became invalid after switching to block 00029ea7c3d673800f20efc331f42b4235cee25c db_with.hpp:80
2015-09-12T18:10:45 th_a:invoke handle_block ~pending_transaction ] The invalid pending transaction is {"ref_block_num":40613,"ref_block_prefix":4073857719,"expiration":"2015-09-12T18:11:00","operations":[[0,{"fee":{"amount":2105468,"asset_id":"1.3.0"},"from":"1.2.116","to":"1.2.97867","amount":{"amount":100000000,"asset_id":"1.3.0"},"memo":{"from":"GPH5WCj1mMiiqEE4QRs7xhaFfSaiFroejUp3GuZE9wvfue9nxhPPn","to":"GPH5BTF7xh1RzGB9vL8Rvk6kcNr5EC3cRBAC32yyqs5Pas56Gwubz","nonce":"18015840590915049172","message":"4df7d8d7d3392b82281fd353f4ac8edad7a5ec58328ff795838fe77850c4de69"},"extensions":[]}]],"extensions":[],"signatures":["1f3ee5eefd1a1e93be6ddb75b709a8fa1e90c817fdca7f01133c39a29f172cecae53db1bb034397985b33642839a5e1a088dbb55c635352f30cac1fb137b69497a"],"operation_results":[[0,{}]]} db_with.hpp:81
2015-09-12T18:10:45 th_a:invoke handle_block ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":507,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-12T18:10:45"},"format":"(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end(): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":563,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-12T18:10:45"},"format":"","data":{"trx":{"ref_block_num":40613,"ref_block_prefix":4073857719,"expiration":"2015-09-12T18:11:00","operations":[[0,{"fee":{"amount":2105468,"asset_id":"1.3.0"},"from":"1.2.116","to":"1.2.97867","amount":{"amount":100000000,"asset_id":"1.3.0"},"memo":{"from":"GPH5WCj1mMiiqEE4QRs7xhaFfSaiFroejUp3GuZE9wvfue9nxhPPn","to":"GPH5BTF7xh1RzGB9vL8Rvk6kcNr5EC3cRBAC32yyqs5Pas56Gwubz","nonce":"18015840590915049172","message":"4df7d8d7d3392b82281fd353f4ac8edad7a5ec58328ff795838fe77850c4de69"},"extensions":[]}]],"extensions":[],"signatures":["1f3ee5eefd1a1e93be6ddb75b709a8fa1e90c817fdca7f01133c39a29f172cecae53db1bb034397985b33642839a5e1a088dbb55c635352f30cac1fb137b69497a"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":205,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-12T18:10:45"},"format":"","data":{"trx":{"ref_block_num":40613,"ref_block_prefix":4073857719,"expiration":"2015-09-12T18:11:00","operations":[[0,{"fee":{"amount":2105468,"asset_id":"1.3.0"},"from":"1.2.116","to":"1.2.97867","amount":{"amount":100000000,"asset_id":"1.3.0"},"memo":{"from":"GPH5WCj1mMiiqEE4QRs7xhaFfSaiFroejUp3GuZE9wvfue9nxhPPn","to":"GPH5BTF7xh1RzGB9vL8Rvk6kcNr5EC3cRBAC32yyqs5Pas56Gwubz","nonce":"18015840590915049172","message":"4df7d8d7d3392b82281fd353f4ac8edad7a5ec58328ff795838fe77850c4de69"},"extensions":[]}]],"extensions":[],"signatures":["1f3ee5eefd1a1e93be6ddb75b709a8fa1e90c817fdca7f01133c39a29f172cecae53db1bb034397985b33642839a5e1a088dbb55c635352f30cac1fb137b69497a"]}}}]} db_with.hpp:82
2015-09-12T18:17:21 th_a:invoke handle_transaction handle_transaction ] Got transaction from network application.cpp:422
2015-09-12T18:17:25 th_a:invoke handle_block handle_block ] Got block #171750 from network application.cpp:383
2015-09-12T18:17:25 th_a:invoke handle_block push_block ] new_block.block_num(): 171750 new_block.id(): 00029ee60757de7e83bd50d871de89f73ae312fe db_block.cpp:97
2015-09-12T18:17:25 th_a:invoke handle_block ~pending_transaction ] Pending transaction became invalid after switching to block 00029ee60757de7e83bd50d871de89f73ae312fe db_with.hpp:80
2015-09-12T18:17:25 th_a:invoke handle_block ~pending_transaction ] The invalid pending transaction is {"ref_block_num":40676,"ref_block_prefix":3655628590,"expiration":"2015-09-12T18:29:50","operations":[[1,{"fee":{"amount":500000,"asset_id":"1.3.0"},"seller":"1.2.97867","amount_to_sell":{"amount":100000,"asset_id":"1.3.0"},"min_to_receive":{"amount":2900000,"asset_id":"1.3.325"},"expiration":"2020-09-12T18:19:00","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["1f1af6117c8f0857b1be1904b66c594e8ef29facc0307417c0e9c6bbb0bde829ad10d3642c14ff60af3cb222d1d3a89a9f12471d160900992f671509f04d928b4d"],"operation_results":[[1,"1.7.53"]]} db_with.hpp:81
2015-09-12T18:17:25 th_a:invoke handle_block ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":507,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-12T18:17:25"},"format":"(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end(): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":563,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-12T18:17:25"},"format":"","data":{"trx":{"ref_block_num":40676,"ref_block_prefix":3655628590,"expiration":"2015-09-12T18:29:50","operations":[[1,{"fee":{"amount":500000,"asset_id":"1.3.0"},"seller":"1.2.97867","amount_to_sell":{"amount":100000,"asset_id":"1.3.0"},"min_to_receive":{"amount":2900000,"asset_id":"1.3.325"},"expiration":"2020-09-12T18:19:00","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["1f1af6117c8f0857b1be1904b66c594e8ef29facc0307417c0e9c6bbb0bde829ad10d3642c14ff60af3cb222d1d3a89a9f12471d160900992f671509f04d928b4d"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":205,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-12T18:17:25"},"format":"","data":{"trx":{"ref_block_num":40676,"ref_block_prefix":3655628590,"expiration":"2015-09-12T18:29:50","operations":[[1,{"fee":{"amount":500000,"asset_id":"1.3.0"},"seller":"1.2.97867","amount_to_sell":{"amount":100000,"asset_id":"1.3.0"},"min_to_receive":{"amount":2900000,"asset_id":"1.3.325"},"expiration":"2020-09-12T18:19:00","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["1f1af6117c8f0857b1be1904b66c594e8ef29facc0307417c0e9c6bbb0bde829ad10d3642c14ff60af3cb222d1d3a89a9f12471d160900992f671509f04d928b4d"]}}}]} db_with.hpp:82
2015-09-12T18:51:20 th_a:invoke handle_block handle_block ] Got block #172075 from network application.cpp:383
2015-09-12T18:51:20 th_a:invoke handle_block push_block ] new_block.block_num(): 172075 new_block.id(): 0002a02b392ec29a32bbde7460a81895ab5e4d39 db_block.cpp:97
2015-09-12T18:51:20 th_a:invoke handle_block _push_block ] Failed to push new block:
10 assert_exception: Assert Exception
vbo.is_withdraw_allowed( now, op.amount ):
{}
th_a vesting_balance_evaluator.cpp:103 do_evaluate
{"op":{"fee":{"amount":100000,"asset_id":"1.3.0"},"vesting_balance":"1.13.182","owner":"1.2.14540","amount":{"amount":28200000,"asset_id":"1.3.0"}}}
th_a vesting_balance_evaluator.cpp:109 do_evaluate
{}
th_a evaluator.cpp:42 start_evaluate
{}
th_a db_block.cpp:580 apply_operation
{"trx":{"ref_block_num":41002,"ref_block_prefix":1338163065,"expiration":"2015-09-12T18:51:45","operations":[[33,{"fee":{"amount":100000,"asset_id":"1.3.0"},"vesting_balance":"1.13.182","owner":"1.2.14540","amount":{"amount":28200000,"asset_id":"1.3.0"}}]],"extensions":[],"signatures":["1f48e31b530bf16c32b99809c509de9938f639c68bbb523819ecef3aa03d703c2d6cad44e48f56c8aaf091098022a2cd8a860f2e4cab1c16991221482317e14bee"]}}
th_a db_block.cpp:563 _apply_transaction
{"next_block.block_num()":172075}
th_a db_block.cpp:468 _apply_block db_block.cpp:180
2015-09-12T18:51:20 th_a:invoke handle_block handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
vbo.is_withdraw_allowed( now, op.amount ):
{}
th_a vesting_balance_evaluator.cpp:103 do_evaluate
{"op":{"fee":{"amount":100000,"asset_id":"1.3.0"},"vesting_balance":"1.13.182","owner":"1.2.14540","amount":{"amount":28200000,"asset_id":"1.3.0"}}}
th_a vesting_balance_evaluator.cpp:109 do_evaluate
{}
th_a evaluator.cpp:42 start_evaluate
{}
th_a db_block.cpp:580 apply_operation
{"trx":{"ref_block_num":41002,"ref_block_prefix":1338163065,"expiration":"2015-09-12T18:51:45","operations":[[33,{"fee":{"amount":100000,"asset_id":"1.3.0"},"vesting_balance":"1.13.182","owner":"1.2.14540","amount":{"amount":28200000,"asset_id":"1.3.0"}}]],"extensions":[],"signatures":["1f48e31b530bf16c32b99809c509de9938f639c68bbb523819ecef3aa03d703c2d6cad44e48f56c8aaf091098022a2cd8a860f2e4cab1c16991221482317e14bee"]}}
th_a db_block.cpp:563 _apply_transaction
{"next_block.block_num()":172075}
th_a db_block.cpp:468 _apply_block
{"new_block":{"previous":"0002a02a79bfc24fc1bcc0db170fb597260c44aa","timestamp":"2015-09-12T18:51:20","witness":"1.6.9","transaction_merkle_root":"eedebcfcebf08658829eaa863a420f0ac36d569e","extensions":[],"witness_signature":"1f767254f096a7223017f32c6e4ca81f0db4f981c37fe43de4f2605a4bf4015ecc2bf90dcb738dee71321d7b3de12dcb179ba7a41c8eb7e5a1333ae10dc1ec0bd9","transactions":[{"ref_block_num":41002,"ref_block_prefix":1338163065,"expiration":"2015-09-12T18:51:45","operations":[[33,{"fee":{"amount":100000,"asset_id":"1.3.0"},"vesting_balance":"1.13.182","owner":"1.2.14540","amount":{"amount":28200000,"asset_id":"1.3.0"}}]],"extensions":[],"signatures":["1f48e31b530bf16c32b99809c509de9938f639c68bbb523819ecef3aa03d703c2d6cad44e48f56c8aaf091098022a2cd8a860f2e4cab1c16991221482317e14bee"],"operation_results":[[0,{}]]}]}}
th_a db_block.cpp:186 _push_block application.cpp:409
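The `vbo.is_withdraw_allowed( now, op.amount )` assert above fires when the requested amount exceeds what the vesting policy has released so far. A rough sketch of a linear vesting check, not the exact Graphene policy (the real one tracks coin-seconds), with illustrative timing numbers:

```python
# Rough sketch of a linear vesting check (NOT the exact Graphene
# policy): the withdrawable portion grows linearly over
# vesting_seconds until the whole balance is released.
def withdraw_allowed(balance: int, elapsed_seconds: int,
                     vesting_seconds: int, requested: int) -> bool:
    if elapsed_seconds >= vesting_seconds:
        vested = balance
    else:
        vested = balance * elapsed_seconds // vesting_seconds
    return requested <= vested

# Requesting the full 28200000 raw (282 CORE) halfway through fails,
# which is the kind of rejection seen in the block above:
print(withdraw_allowed(28200000, 500, 1000, 28200000))  # -> False
print(withdraw_allowed(28200000, 500, 1000, 14100000))  # -> True
```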
Witness node stopped with the following error:
Code: [Select]
3599870ms th_a application.cpp:422 handle_transaction ] Got transaction from network
1ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 172076 new_block.id(): 0002a02c211ed6b4ab4ab41989d9f0007ad02fef
witness_node: /home/clayop/graphene/libraries/chain/db_maint.cpp:333: void graphene::chain::database::process_budget(): Assertion `time_to_maint > 0' failed.
Aborted (core dumped)
2539393ms th_a application.cpp:422 handle_transaction ] Got transaction from network
2539533ms th_a application.cpp:691 get_blockchain_synop ] synopsis: ["0002e05380b9755fa598eaebeb6891efa2e7414c","0002e0656d75d264c5df9bbb87a2734af7a05997","0002e06e437036d12b42b0a1fec25ff02ab73d3f","0002e0736b5f6526008a20e525a419614d794da4","0002e075dfb2ef879b32010a09c482f7d68e5181","0002e076f489de970f945b052badedc7028c58c0"]
2539559ms th_a application.cpp:422 handle_transaction ] Got transaction from network
witness_node: /home/user/src/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Program received signal SIGABRT, Aborted.
0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007ffff6c050d8 in __GI_abort () at abort.c:89
#2 0x00007ffff6bfab86 in __assert_fail_base (fmt=0x7ffff6d4b830 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=assertion@entry=0x2b72788 "std::current_exception() == std::exception_ptr()", file=file@entry=0x2b72668 "/home/user/src/graphene/libraries/fc/src/thread/thread_d.hpp",
line=line@entry=370, function=function@entry=0x2b73440 <fc::thread_d::start_next_fiber(bool)::__PRETTY_FUNCTION__> "bool fc::thread_d::start_next_fiber(bool)") at assert.c:92
#3 0x00007ffff6bfac32 in __GI___assert_fail (assertion=0x2b72788 "std::current_exception() == std::exception_ptr()",
file=0x2b72668 "/home/user/src/graphene/libraries/fc/src/thread/thread_d.hpp", line=370,
function=0x2b73440 <fc::thread_d::start_next_fiber(bool)::__PRETTY_FUNCTION__> "bool fc::thread_d::start_next_fiber(bool)") at assert.c:101
#4 0x000000000261c867 in fc::thread_d::start_next_fiber (this=0x3349b50, reschedule=true) at /home/user/src/graphene/libraries/fc/src/thread/thread_d.hpp:370
#5 0x00000000026164d5 in fc::thread::yield (this=0x32f8cd0, reschedule=true) at /home/user/src/graphene/libraries/fc/src/thread/thread.cpp:268
#6 0x0000000002617142 in fc::yield () at /home/user/src/graphene/libraries/fc/src/thread/thread.cpp:353
#7 0x0000000002629c95 in fc::spin_yield_lock::lock (this=0x7fffdcb6f43c) at /home/user/src/graphene/libraries/fc/src/thread/spin_yield_lock.cpp:41
#8 0x000000000262861c in fc::unique_lock<fc::spin_yield_lock&>::lock (this=0x7ffff6100c50) at /home/user/src/graphene/libraries/fc/include/fc/thread/unique_lock.hpp:21
#9 0x0000000002628537 in fc::unique_lock<fc::spin_yield_lock&>::unique_lock (this=0x7ffff6100c50, l=...) at /home/user/src/graphene/libraries/fc/include/fc/thread/unique_lock.hpp:17
#10 0x000000000262833e in fc::promise_base::_set_value (this=0x7fffdcb6f430, s=0x0) at /home/user/src/graphene/libraries/fc/src/thread/future.cpp:115
#11 0x0000000002627c08 in fc::promise_base::set_exception (this=0x7fffdcb6f430, e=...) at /home/user/src/graphene/libraries/fc/src/thread/future.cpp:47
#12 0x0000000002628ecf in fc::task_base::run_impl (this=0x7fffdcb6f3d0) at /home/user/src/graphene/libraries/fc/src/thread/task.cpp:55
#13 0x00000000026289bc in fc::task_base::run (this=0x7fffdcb6f3d0) at /home/user/src/graphene/libraries/fc/src/thread/task.cpp:32
#14 0x000000000261d3dc in fc::thread_d::run_next_task (this=0x3349b50) at /home/user/src/graphene/libraries/fc/src/thread/thread_d.hpp:498
#15 0x000000000261d880 in fc::thread_d::process_tasks (this=0x3349b50) at /home/user/src/graphene/libraries/fc/src/thread/thread_d.hpp:547
#16 0x000000000261cef3 in fc::thread_d::start_process_tasks (my=53779280) at /home/user/src/graphene/libraries/fc/src/thread/thread_d.hpp:475
#17 0x0000000002975a31 in make_fcontext () at libs/context/src/asm/make_x86_64_sysv_elf_gas.S:64
#18 0x0000000000000000 in ?? ()
(gdb)
Witness node stopped with the following error.
Which commit were you running with? My nodes are unable to get past block 172075 but haven't crashed. Looks like your issue is related to block 172076.
Code: [Select]
3599870ms th_a application.cpp:422 handle_transaction ] Got transaction from network
1ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 172076 new_block.id(): 0002a02c211ed6b4ab4ab41989d9f0007ad02fef
witness_node: /home/clayop/graphene/libraries/chain/db_maint.cpp:333: void graphene::chain::database::process_budget(): Assertion `time_to_maint > 0' failed.
Aborted (core dumped)
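The `time_to_maint > 0` assertion in `process_budget` encodes the invariant that the next maintenance interval must be strictly in the future when the budget is computed, since the budget is spread over that remaining span. A minimal sketch of the invariant (timestamps are illustrative seconds, not real chain values):

```python
# Sketch of the invariant behind the db_maint.cpp assertion: the
# budget code spreads the budget over the seconds left until the
# next maintenance interval, so that span must be strictly positive.
def compute_time_to_maint(next_maintenance_time: int, now: int) -> int:
    time_to_maint = next_maintenance_time - now
    assert time_to_maint > 0, "time_to_maint > 0"  # mirrors the crash above
    return time_to_maint

print(compute_time_to_maint(1000, 940))  # -> 60
```

A node whose clock or replay state puts `now` at or past `next_maintenance_time` would trip exactly this assertion and abort, which plausibly matches the crash at block 172076.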
The compilation from Saturday was running smoothly over the weekend. This morning I found the witness crashed:
Code: [Select]
973361ms th_a db_with.hpp:82 ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":507,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-13T23:16:13"},"format":"(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end(): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":563,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-13T23:16:13"},"format":"","data":{"trx":{"ref_block_num":57237,"ref_block_prefix":141629248,"expiration":"2015-09-13T23:16:57","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.17263","amount":{"amount":1,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["206f76beb28d9a139bbb8f061d787a83aa3d0565040be653ae2330a802eafbfe0f780f4ec86c269c278eb81d6458d53ea8c0d7bfe78eadae56e124a529d6ffa6bc"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":205,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-13T23:16:13"},"format":"","data":{"trx":{"ref_block_num":57237,"ref_block_prefix":141629248,"expiration":"2015-09-13T23:16:57","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.17263","amount":{"amount":1,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["206f76beb28d9a139bbb8f061d787a83aa3d0565040be653ae2330a802eafbfe0f780f4ec86c269c278eb81d6458d53ea8c0d7bfe78eadae56e124a529d6ffa6bc"]}}}]}
973391ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973392ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973493ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973558ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973682ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973756ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973823ms th_a application.cpp:421 handle_transaction ] Got transaction from network
witness_node: /home/calabiyau/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Aborted (core dumped)
app.js:38 !!! WebSocket Error wss://graphene.bitshares.org:8090
app.js:38 WebSocket connection to 'wss://graphene.bitshares.org:8090/' failed: Error during WebSocket handshake: Unexpected response code: 502
Woke up to this too
Code: [Select]
3169758ms th_a db_with.hpp:82 ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":507,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-14T05:52:49"},"format":"(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end(): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":563,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-14T05:52:49"},"format":"","data":{"trx":{"ref_block_num":60494,"ref_block_prefix":2734476150,"expiration":"2015-09-14T05:56:32","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.17263","amount":{"amount":1,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["205c4049776969e7b3fa9ad3ee401a46b757a6b0821dcf2c0c2f4e79c9beac8bf60c37d51f490092708be2348d11228ea6ad225db2d884a96b26c5b990e6746ada"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":205,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-14T05:52:49"},"format":"","data":{"trx":{"ref_block_num":60494,"ref_block_prefix":2734476150,"expiration":"2015-09-14T05:56:32","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.17263","amount":{"amount":1,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["205c4049776969e7b3fa9ad3ee401a46b757a6b0821dcf2c0c2f4e79c9beac8bf60c37d51f490092708be2348d11228ea6ad225db2d884a96b26c5b990e6746ada"]}}}]}
2842000ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
2842001ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1442231242001571 next_second: 2015-09-14T11:47:23
2842014ms th_a application.cpp:356 handle_block ] Got block #194679 from network
2842015ms th_a application.cpp:378 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num < _head->num + MAX_BLOCK_REORDERING:
{}
th_a fork_database.cpp:70 _push_block
{"new_block":{"previous":"0002f876121cf55147d14b4c30a8aaf0877a38eb","timestamp":"2015-09-14T11:47:20","witness":"1.6.48","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f5b899e8a4e1a406629a6a98d1dc2f463bae6b60517435ffe68825184d0680f240a7ed917a916c5e1475f05191d67536f74f78bf76e5323537f0990eebd0adc21","transactions":[]}}
th_a db_block.cpp:176 _push_block
2843000ms th_a witness.cpp:220 block_production_loo ] Not producing block because it isn't my turn
2843001ms th_a witness.cpp:187 schedule_production_ ] now.time_since_epoch().count(): 1442231243001656 next_second: 2015-09-14T11:47:24
2843787ms th_a main.cpp:169 main ] Exiting from signal 2
2843798ms th_a thread.cpp:115 ~thread ] calling quit() on ntp
2843799ms th_a thread.cpp:160 quit ] destroying boost thread 139689216190208
2843802ms ntp thread.cpp:246 exec ] thread canceled: 9 canceled_exception: Canceled
cancellation reason: [none given]
{"reason":"[none given]"}
ntp thread_d.hpp:463 start_next_fiber
2843836ms th_a thread.cpp:115 ~thread ] calling quit() on p2p
2843836ms th_a thread.cpp:160 quit ] destroying boost thread 139689180436224
2843840ms p2p thread.cpp:246 exec ] thread canceled: 9 canceled_exception: Canceled
cancellation reason: [none given]
{"reason":"[none given]"}
770ms th_a application.cpp:422 handle_transaction ] Got transaction from network
2492315ms th_a application.cpp:383 handle_block ] Got block #191949 from network
2492315ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 191949 new_block.id(): 0002edcda9a6ead041565776fc44dabfbe2e1047
2493260ms th_a application.cpp:683 get_blockchain_synop ] synopsis: ["0002edb0f9f9f46a5131e683d7cdf03630d0b105","0002edbffba5cc2aee85c854c9690d8640766f70","0002edc70bac5a6448cb68f234642c829cac4352","0002edcb696c768b569ade47a988f9ad8bb34344","0002edcd10fc9e1db43fd6ffbc4895169c245915"]
2493714ms th_a application.cpp:683 get_blockchain_synop ] synopsis: ["0002edb0f9f9f46a5131e683d7cdf03630d0b105","0002edbffba5cc2aee85c854c9690d8640766f70","0002edc70bac5a6448cb68f234642c829cac4352","0002edcb696c768b569ade47a988f9ad8bb34344","0002edcd10fc9e1db43fd6ffbc4895169c245915"]
witness_node: /home/user/src/graphene/libraries/chain/db_block.cpp:82: std::vector<fc::ripemd160> graphene::chain::database::get_block_ids_on_fork(graphene::chain::block_id_type) const: Assertion `branches.first.back()->id == branches.second.back()->id' failed.
Program received signal SIGABRT, Aborted.
0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6)
at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007ffff6c050d8 in __GI_abort () at abort.c:89
#2 0x00007ffff6bfab86 in __assert_fail_base (
fmt=0x7ffff6d4b830 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=assertion@entry=0x2ac0c40 "branches.first.back()->id == branches.second.back()->id",
file=file@entry=0x2ac0be0 "/home/user/src/graphene/libraries/chain/db_block.cpp", line=line@entry=82,
function=function@entry=0x2acb220 <graphene::chain::database::get_block_ids_on_fork(fc::ripemd160) const::__PRETTY_FUNCTION__> "std::vector<fc::ripemd160> graphene::chain::database::get_block_ids_on_fork(graphene::chain::block_id_type) const") at assert.c:92
#3 0x00007ffff6bfac32 in __GI___assert_fail (
assertion=0x2ac0c40 "branches.first.back()->id == branches.second.back()->id", file=0x2ac0be0 "/home/user/src/graphene/libraries/chain/db_block.cpp",
line=82,
function=0x2acb220 <graphene::chain::database::get_block_ids_on_fork(fc::ripemd160) const::__PRETTY_FUNCTION__> "std::vector<fc::ripemd160> graphene::chain::database::get_block_ids_on_fork(graphene::chain::block_id_type) const")
at assert.c:101
#4 0x0000000002302824 in graphene::chain::database::get_block_ids_on_fork(fc::ripemd160) const ()
---Type <return> to continue, or q <return> to quit---
#5 0x0000000001f99e28 in graphene::app::detail::application_impl::get_blockchain_synopsis(fc::ripemd160 const&, unsigned int) ()
#6 0x000000000281686f in graphene::net::detail::statistics_gathering_node_delegate_wrapper::get_blockchain_synopsis(fc::ripemd160 const&, unsigned int)::{lambda()#1}::operator()() const ()
#7 0x0000000002826910 in fc::detail::functor_run<graphene::net::detail::statistics_gathering_node_delegate_wrapper::get_blockchain_synopsis(fc::ripemd160 const&, unsigned int)::{lambda()#1}>::run(void*, fc::detail::functor_run<graphene::net::detail::statistics_gathering_node_delegate_wrapper::get_blockchain_synopsis(fc::ripemd160 const&, unsigned int)::{lambda()#1}>) ()
#8 0x00000000025f2011 in fc::task_base::run_impl() ()
#9 0x00000000025f1fa2 in fc::task_base::run() ()
#10 0x00000000025e69c2 in fc::thread_d::run_next_task() ()
#11 0x00000000025e6e66 in fc::thread_d::process_tasks() ()
#12 0x00000000025e64d9 in fc::thread_d::start_process_tasks(long) ()
#13 0x0000000002943a81 in make_fcontext ()
#14 0x0000000000000000 in ?? ()
(gdb)
this is running on commit 30ae8e4f3433d4ee500f0b98e30f05e1ebe806ea.
The compilation from Saturday was running smoothly over the weekend. This morning I found the witness crashed:
973361ms th_a db_with.hpp:82 ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":507,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-13T23:16:13"},"format":"(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end(): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":563,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-13T23:16:13"},"format":"","data":{"trx":{"ref_block_num":57237,"ref_block_prefix":141629248,"expiration":"2015-09-13T23:16:57","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.17263","amount":{"amount":1,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["206f76beb28d9a139bbb8f061d787a83aa3d0565040be653ae2330a802eafbfe0f780f4ec86c269c278eb81d6458d53ea8c0d7bfe78eadae56e124a529d6ffa6bc"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":205,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-13T23:16:13"},"format":"","data":{"trx":{"ref_block_num":57237,"ref_block_prefix":141629248,"expiration":"2015-09-13T23:16:57","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.17263","amount":{"amount":1,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["206f76beb28d9a139bbb8f061d787a83aa3d0565040be653ae2330a802eafbfe0f780f4ec86c269c278eb81d6458d53ea8c0d7bfe78eadae56e124a529d6ffa6bc"]}}}]}
973391ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973392ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973493ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973558ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973682ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973756ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973823ms th_a application.cpp:421 handle_transaction ] Got transaction from network
witness_node: /home/calabiyau/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Aborted (core dumped)
Woke up to this too: [same pending-transaction assert log as quoted above]
This problem (if not just a warning) seems to have disappeared now.
My stats.bitshares.eu node died with:
3498879ms th_a application.cpp:422 handle_transaction ] Got transaction from network
3498880ms th_a application.cpp:422 handle_transaction ] Got transaction from network
witness_node: /home/delegate/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Aborted
My witness producing node is no longer losing sync during stress tests, but it does stop producing blocks. I can't find anything in logs that sheds any light on it.
You already have a server running on port 8080, either something else or another copy of the gui.
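If you want to rule that out before restarting, a quick way to check whether the port is taken is a TCP connect test. This is a minimal sketch, assuming the webgui dev server binds 8080 on localhost as described above:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    if port_in_use(8080):
        print("port 8080 is taken -- stop the other server before `npm start`")
    else:
        print("port 8080 is free")
```

If it reports the port taken, kill the other process (or the other copy of the GUI) before running `npm start` again.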
Also got this when I re-cloned graphene-ui. If there's another thread for the UI I'll move this post there.
> webgui@0.0.1 start /home/james/github/graphene-ui/web
> node server.js
Using DEV options
70.00% 1/1 build modules
events.js:72
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE
at errnoException (net.js:901:11)
at Server._listen2 (net.js:1039:14)
at listen (net.js:1061:10)
at net.js:1143:9
at dns.js:72:18
at process._tickCallback (node.js:415:13)
at Function.Module.runMain (module.js:499:11)
at startup (node.js:119:16)
at node.js:902:3
npm ERR! webgui@0.0.1 start: `node server.js`
npm ERR! Exit status 8
npm ERR!
npm ERR! Failed at the webgui@0.0.1 start script.
npm ERR! This is most likely a problem with the webgui package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node server.js
npm ERR! You can get their info via:
npm ERR! npm owner ls webgui
npm ERR! There is likely additional logging output above.
npm ERR! System Linux 3.19.0-28-generic
npm ERR! command "/usr/bin/nodejs" "/usr/bin/npm" "start"
npm ERR! cwd /home/james/github/graphene-ui/web
npm ERR! node -v v0.10.25
npm ERR! npm -v 1.4.21
npm ERR! code ELIFECYCLE
npm WARN This failure might be due to the use of legacy binary "node"
npm WARN For further explanations, please read
/usr/share/doc/nodejs/README.Debian
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR! /home/james/github/graphene-ui/web/npm-debug.log
npm ERR! not ok code 0
james@james-desktop:~/github/graphene-ui$
My witness producing node is no longer losing sync during stress tests, but it does stop producing blocks. I can't find anything in logs that sheds any light on it.
How are you determining that it stopped producing blocks? Perhaps the absence of logs is a clue :)
I am just watching the witness vesting pay 1.13.163. The witness is still voted in. Info shows a recent block time. At this point, I am rebuilding, and I'll verify the issue remains. Then I was thinking I would run it on a more powerful machine and see if that helped. I'll also keep digging in the logs to see if I can find anything.
Is there a way to see the scheduled witness order? Some way to tell which witness was scheduled when a block isn't produced?
Yes.
Does restarting the witness fix the production issue?
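On the scheduled-order question: one way to inspect it over the node's RPC endpoint is to fetch the witness_schedule object. The sketch below makes two assumptions worth checking against your build: that a witness_node RPC is listening at 127.0.0.1:8090, and that object 2.12.0 carries a `current_shuffled_witnesses` field (the field name may differ between testnet builds):

```python
import json
from urllib import request

RPC_URL = "http://127.0.0.1:8090/rpc"  # assumed witness_node --rpc-endpoint

def rpc_call(method, params):
    """Minimal JSON-RPC POST helper for the node's database API."""
    payload = json.dumps({"jsonrpc": "2.0", "method": method,
                          "params": params, "id": 1}).encode()
    req = request.Request(RPC_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode())["result"]

def shuffled_witnesses(schedule_obj):
    """Pull the production order out of a witness_schedule object."""
    return schedule_obj.get("current_shuffled_witnesses", [])

# usage against a live node (2.12.0 assumed to be the witness_schedule object):
#   sched = rpc_call("get_objects", [["2.12.0"]])[0]
#   print("scheduled order:", shuffled_witnesses(sched))
```

Comparing that order against the blocks that actually arrive would show which scheduled witness missed a slot.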
The compilation from Saturday was running smoothly over the weekend. This morning I found the witness crashed: [pending-transaction assert log quoted above]
Woke up to this too: [log quoted above]
This problem (if not just a warning) seems to have disappeared now.
I think I have identified the cause of this problem. Can you confirm that it occurred while you were using the RPC interface?
It seems the stress test has generated a big block (> 100 tx/sec):
blockNum    Id          Witness name        nTx
196454 '1.6.3356' 'mr.agsexplorer' 507
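The "> 100 tx/sec" figure follows from the block interval: assuming 5-second blocks on this testnet (check your chain parameters), 507 transactions in a single block works out to roughly 101 tx/sec:

```python
BLOCK_INTERVAL_SEC = 5  # assumed testnet block interval

def tx_per_second(tx_in_block, interval=BLOCK_INTERVAL_SEC):
    """Average transaction rate implied by a single block."""
    return tx_in_block / interval

print(tx_per_second(507))  # 101.4
```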
latest master crashed on me during stress test. I wasn't running in gdb:
1051908ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 197873 new_block.id(): 000304f12cf6d6cf7636a0cd287f8ba8367c33a8
witness_node: /home/user/src/graphene/libraries/chain/db_block.cpp:82: std::vector<fc::ripemd160> graphene::chain::database::get_block_ids_on_fork(graphene::chain::block_id_type) const: Assertion `branches.first.back()->id == branches.second.back()->id' failed.
Aborted (core dumped)
very little of my spam is making it into blocks. I wrote a simple python script to tie into xerocs python-graphenelib which writes block producer and # of transactions to the console. While spamming there is quite a bit of variability.
block 199762 generated by init68
block 199762 contained 9 transactions
block 199763 generated by init14
block 199763 contained 20 transactions
block 199764 generated by init35
block 199764 contained 11 transactions
block 199765 generated by init36
block 199765 contained 7 transactions
block 199766 generated by init19
block 199766 contained 10 transactions
block 199767 generated by init10
block 199767 contained 10 transactions
block 199768 generated by init80
block 199768 contained 9 transactions
block 199769 generated by init51
block 199769 contained 4 transactions
block 199770 generated by init11
block 199770 contained 0 transactions
block 199771 generated by init59
block 199771 contained 0 transactions
block 199772 generated by init67
block 199772 contained 4 transactions
block 199773 generated by init75
block 199773 contained 0 transactions
block 199774 generated by init4
block 199774 contained 0 transactions
block 199775 generated by init20
block 199775 contained 0 transactions
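A monitor of that kind can also be sketched without python-graphenelib, talking JSON-RPC straight to the node. Assumptions here: a witness_node RPC at 127.0.0.1:8090 exposing the database API's `get_dynamic_global_properties` and `get_block` calls; note this prints the witness object id (e.g. 1.6.48) rather than a name like init68, which would need an extra lookup:

```python
import json
import time
from urllib import request

RPC_URL = "http://127.0.0.1:8090/rpc"  # assumed witness_node --rpc-endpoint

def rpc_call(method, params):
    """Minimal JSON-RPC POST helper for the node's database API."""
    payload = json.dumps({"jsonrpc": "2.0", "method": method,
                          "params": params, "id": 1}).encode()
    req = request.Request(RPC_URL, data=payload,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode())["result"]

def summarize_block(num, block):
    """One console line per block: producer id and transaction count."""
    return "block %d generated by %s contained %d transactions" % (
        num, block["witness"], len(block.get("transactions", [])))

def watch(poll_seconds=1):
    """Follow the head block and print a summary line for each new block."""
    last = rpc_call("get_dynamic_global_properties", [])["head_block_number"]
    while True:
        head = rpc_call("get_dynamic_global_properties", [])["head_block_number"]
        for num in range(last + 1, head + 1):
            print(summarize_block(num, rpc_call("get_block", [num])))
        last = head
        time.sleep(poll_seconds)

# usage against a live node:  watch()
```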
While you were spamming, none of my transactions went through. Also, can we spam the network from the JS console in the web browser? I couldn't find any functions for making transactions.
A JavaScript CLI environment is also available in the ./cli folder. Some example commands:
```
// Transaction template:
$g.wallet.template("account_upgrade")
// Create a transaction:
var tr = $g.wallet.new_transaction()
tr.add_type_operation("account_upgrade", {"account_to_upgrade":"1.2.15","upgrade_to_lifetime_member":true})
$g.wallet.sign_and_broadcast(tr)
```
745500ms th_a application.cpp:383 handle_block ] Got block #199649 from network
745500ms th_a application.cpp:383 handle_block ] Got block #199649 from network
745500ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 199649 new_block.id(): 00030be11aae3a4891869a7ad9aab215e93aee36
747089ms th_a application.cpp:683 get_blockchain_synop ] synopsis: ["00030b6c048c2452ac552e476f913329d818074a","00030ba7eec13a91d97908a50549d874681ea35a","00030bc587cc15cfdf07f10a72804724cb51217d","00030bd4128e9547c6771b80e5ffff1fdeec37a4","00030bdba291f319d7f0f62ec67394ef613ca8a5","00030bdf303a78916851613360a845e22e0041d8","00030be11aae3a4891869a7ad9aab215e93aee36"]
749560ms th_a application.cpp:683 get_blockchain_synop ] synopsis: ["00030b6c048c2452ac552e476f913329d818074a","00030ba7eec13a91d97908a50549d874681ea35a","00030bc587cc15cfdf07f10a72804724cb51217d","00030bd4128e9547c6771b80e5ffff1fdeec37a4","00030bdba291f319d7f0f62ec67394ef613ca8a5","00030bdf303a78916851613360a845e22e0041d8","00030be11aae3a4891869a7ad9aab215e93aee36"]
752624ms th_a application.cpp:683 get_blockchain_synop ] synopsis: ["00030b6c048c2452ac552e476f913329d818074a","00030ba7eec13a91d97908a50549d874681ea35a","00030bc587cc15cfdf07f10a72804724cb51217d","00030bd4128e9547c6771b80e5ffff1fdeec37a4","00030bdba291f319d7f0f62ec67394ef613ca8a5","00030bdf303a78916851613360a845e22e0041d8","00030be11aae3a4891869a7ad9aab215e93aee36"]
752881ms th_a application.cpp:683 get_blockchain_synop ] synopsis: ["00030b6c048c2452ac552e476f913329d818074a","00030ba7eec13a91d97908a50549d874681ea35a","00030bc587cc15cfdf07f10a72804724cb51217d","00030bd4128e9547c6771b80e5ffff1fdeec37a4","00030bdba291f319d7f0f62ec67394ef613ca8a5","00030bdf303a78916851613360a845e22e0041d8","00030be11aae3a4891869a7ad9aab215e93aee36"]
witness_node: /app/bts/graphene-test2b.7/libraries/app/application.cpp:624: virtual std::vector<fc::ripemd160> graphene::app::detail::application_impl::get_blockchain_synopsis(const item_hash_t&, uint32_t): Assertion `fork_history.back() == reference_point' failed.
Program received signal SIGABRT, Aborted.
0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
56 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 0x00007ffff6c01cc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007ffff6c050d8 in __GI_abort () at abort.c:89
#2 0x00007ffff6bfab86 in __assert_fail_base (fmt=0x7ffff6d4b830 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n",
assertion=assertion@entry=0x2aa0530 "fork_history.back() == reference_point",
file=file@entry=0x2a9fd58 "/app/bts/graphene-test2b.7/libraries/app/application.cpp", line=line@entry=624,
function=function@entry=0x2aa5f00 <graphene::app::detail::application_impl::get_blockchain_synopsis(fc::ripemd160 const&, unsigned int)::__PRETTY_FUNCTION__> "virtual std::vector<fc::ripemd160> graphene::app::detail::application_impl::get_blockchain_synopsis(const item_hash_t&, uint32_t)") at assert.c:92
#3 0x00007ffff6bfac32 in __GI___assert_fail (assertion=0x2aa0530 "fork_history.back() == reference_point",
file=0x2a9fd58 "/app/bts/graphene-test2b.7/libraries/app/application.cpp", line=624,
function=0x2aa5f00 <graphene::app::detail::application_impl::get_blockchain_synopsis(fc::ripemd160 const&, unsigned int)::__PRETTY_FUNCTION__> "virtual std::vector<fc::ripemd160> graphene::app::detail::application_impl::get_blockchain_synopsis(const item_hash_t&, uint32_t)") at assert.c:101
#4 0x0000000001fd088e in graphene::app::detail::application_impl::get_blockchain_synopsis (this=0x3318140, reference_point=...,
number_of_blocks_after_reference_point=0) at /app/bts/graphene-test2b.7/libraries/app/application.cpp:624
#5 0x000000000284d3a3 in graphene::net::detail::statistics_gathering_node_delegate_wrapper::__lambda59::operator() (
__closure=0x7fffe411d868) at /app/bts/graphene-test2b.7/libraries/net/node.cpp:5296
#6 0x000000000285d444 in fc::detail::functor_run<graphene::net::detail::statistics_gathering_node_delegate_wrapper::get_blockchain_synopsis(const item_hash_t&, uint32_t)::__lambda59>::run(void *, void *) (functor=0x7fffe411d868, prom=0x7fffe411d950)
at /app/bts/graphene-test2b.7/libraries/fc/include/fc/thread/task.hpp:77
#7 0x0000000002628a87 in fc::task_base::run_impl (this=0x7fffe411d888)
at /app/bts/graphene-test2b.7/libraries/fc/src/thread/task.cpp:43
#8 0x0000000002628a18 in fc::task_base::run (this=0x7fffe411d888) at /app/bts/graphene-test2b.7/libraries/fc/src/thread/task.cpp:32
#9 0x000000000261d438 in fc::thread_d::run_next_task (this=0x334c040)
at /app/bts/graphene-test2b.7/libraries/fc/src/thread/thread_d.hpp:498
#10 0x000000000261d8dc in fc::thread_d::process_tasks (this=0x334c040)
at /app/bts/graphene-test2b.7/libraries/fc/src/thread/thread_d.hpp:547
#11 0x000000000261cf4f in fc::thread_d::start_process_tasks (my=53788736)
at /app/bts/graphene-test2b.7/libraries/fc/src/thread/thread_d.hpp:475
#12 0x0000000002975b51 in make_fcontext () at libs/context/src/asm/make_x86_64_sysv_elf_gas.S:64
#13 0x0000000000000000 in ?? ()
2015-09-14T22:02:15 th_a:Witness Block Production push_block ] new_block.block_num(): 199572 new_block.id(): 00030b940fbad340e39709083e5786d5e703bb33 db_block.cpp:98
2015-09-14T22:02:15 th_a:Witness Block Production _push_block ] Failed to push new block:
10 assert_exception: Assert Exception
_pending_block.timestamp <= trx.expiration:
{"pending.timestamp":"2015-09-14T22:02:15","trx.exp":"2015-09-14T22:02:10"}
th_a db_block.cpp:534 _apply_transaction
{"trx":{"ref_block_num":2945,"ref_block_prefix":4216103856,"expiration":"2015-09-14T22:02:10","operations":[[0,{"fee":{"amount":20
00000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.38483","amount":{"amount":100000,"asset_id":"1.3.0"},"extensions":[]}]],"extens
ions":[],"signatures":["2014dd59f97199b1aaf2785e458284f5762cfc0a1f43bd560fb3058ea5dbd2d0291cc13da25a050f251ee300c573da2347e91fe2c30bc0
393e409eb067b871e4ac"]}}
th_a db_block.cpp:564 _apply_transaction
{"next_block.block_num()":199572}
th_a db_block.cpp:469 _apply_block db_block.cpp:181
info
{
"head_block_num": 199572,
"head_block_id": "00030b94089bb7f0949bb63189a506fdf13b6d54",
...
}
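The failed check above (`_pending_block.timestamp <= trx.expiration`) is the expiration rule: a transaction whose expiration is earlier than the pending block's timestamp can never be included. A sketch of the rule (not the node's code), using the timestamp format from the logs:

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S"  # timestamp format used in the logs

def tx_includable(block_timestamp, trx_expiration):
    """Mirror of the assert: the pending block's time must not pass expiry."""
    return datetime.strptime(block_timestamp, FMT) <= \
           datetime.strptime(trx_expiration, FMT)

# the values from the log above: the tx expired 5 seconds before the block
print(tx_includable("2015-09-14T22:02:15", "2015-09-14T22:02:10"))  # False
```

So under stress the spam transactions simply expire while queued, which matches very little of the spam making it into blocks.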
1315751ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198066 new_block.id(): 000305b27cbf39adca86c5d4a086bc3febc56bc8
1315752ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198067 new_block.id(): 000305b3dc7d7e6d6ec1de714675167bb68d6844
1315753ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198068 new_block.id(): 000305b4cedbe47143d0ff393eef37113933ff6c
1315754ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198069 new_block.id(): 000305b5b4a47445460fbe4f570f6fcdd5640085
1315755ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198070 new_block.id(): 000305b6b7a20c14f5e1827af055a7bc4da82eb0
1315756ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198071 new_block.id(): 000305b748128bac176f75273a0023e892c52928
witness_node: /home/calabiyau/graphene/libraries/app/application.cpp:624: virtual std::vector<fc::ripemd160> graphene::app::detail::application_impl::get_blockchain_synopsis(const item_hash_t&, uint32_t): Assertion `fork_history.back() == reference_point' failed.
Aborted (core dumped)
2735099ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198068 new_block.id(): 000305b4cedbe47143d0ff393eef37113933ff6c
2735100ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198069 new_block.id(): 000305b5b4a47445460fbe4f570f6fcdd5640085
2735100ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198070 new_block.id(): 000305b6b7a20c14f5e1827af055a7bc4da82eb0
2735101ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198071 new_block.id(): 000305b748128bac176f75273a0023e892c52928
2735101ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198072 new_block.id(): 000305b84a7b19f36c111e13ab8f4617f111e812
2735102ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198073 new_block.id(): 000305b904de6654a5450b597ecb0d2ec6217599
2735102ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198074 new_block.id(): 000305ba363cb379bdde904800269ed0e60a88b4
2735103ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198075 new_block.id(): 000305bb716d1936a8294d55215c5290f3254174
2735103ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 198076 new_block.id(): 000305bc3d4eba5a0e48b33c26b846070aa90161
witness_node: /home/admin/.BitShares2_build/libraries/app/application.cpp:624: virtual std::vector<fc::ripemd160> graphene::app::detail::application_impl::get_blockchain_synopsis(const item_hash_t&, uint32_t): Assertion `fork_history.back() == reference_point' failed.
Program received signal SIGABRT, Aborted.
0x00007ffff6516107 in raise () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt
#0 0x00007ffff6516107 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007ffff65174e8 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x00007ffff650f226 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3 0x00007ffff650f2d2 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#4 0x00000000020bd3c5 in graphene::app::detail::application_impl::get_blockchain_synopsis (this=0x356b030, reference_point=..., number_of_blocks_after_reference_point=0) at /home/admin/.BitShares2_build/libraries/app/application.cpp:624
#5 0x000000000297a705 in graphene::net::detail::statistics_gathering_node_delegate_wrapper::<lambda()>::operator()(void) const (__closure=0x7fffe0649698) at /home/admin/.BitShares2_build/libraries/net/node.cpp:5296
#6 0x000000000298a97b in fc::detail::functor_run<graphene::net::detail::statistics_gathering_node_delegate_wrapper::get_blockchain_synopsis(const item_hash_t&, uint32_t)::<lambda()> >::run(void *, void *) (functor=0x7fffe0649698, prom=0x7fffe0649780)
at /home/admin/.BitShares2_build/libraries/fc/include/fc/thread/task.hpp:77
#7 0x00000000027519cd in fc::task_base::run_impl (this=0x7fffe06496b8) at /home/admin/.BitShares2_build/libraries/fc/src/thread/task.cpp:43
#8 0x000000000275195c in fc::task_base::run (this=0x7fffe06496b8) at /home/admin/.BitShares2_build/libraries/fc/src/thread/task.cpp:32
#9 0x00000000027459ba in fc::thread_d::run_next_task (this=0x359eab0) at /home/admin/.BitShares2_build/libraries/fc/src/thread/thread_d.hpp:498
#10 0x0000000002745e84 in fc::thread_d::process_tasks (this=0x359eab0) at /home/admin/.BitShares2_build/libraries/fc/src/thread/thread_d.hpp:547
#11 0x00000000027454ae in fc::thread_d::start_process_tasks (my=56224432) at /home/admin/.BitShares2_build/libraries/fc/src/thread/thread_d.hpp:475
#12 0x0000000002abf931 in make_fcontext ()
#13 0x00007fffe8000020 in ?? ()
#14 0x0000000000000000 in ?? ()
The compilation from Saturday was running smoothly over the weekend. This morning I found the witness had crashed:
973361ms th_a db_with.hpp:82 ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":507,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-13T23:16:13"},"format":"(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end(): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":563,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-13T23:16:13"},"format":"","data":{"trx":{"ref_block_num":57237,"ref_block_prefix":141629248,"expiration":"2015-09-13T23:16:57","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.17263","amount":{"amount":1,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["206f76beb28d9a139bbb8f061d787a83aa3d0565040be653ae2330a802eafbfe0f780f4ec86c269c278eb81d6458d53ea8c0d7bfe78eadae56e124a529d6ffa6bc"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":205,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-13T23:16:13"},"format":"","data":{"trx":{"ref_block_num":57237,"ref_block_prefix":141629248,"expiration":"2015-09-13T23:16:57","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.17263","amount":{"amount":1,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["206f76beb28d9a139bbb8f061d787a83aa3d0565040be653ae2330a802eafbfe0f780f4ec86c269c278eb81d6458d53ea8c0d7bfe78eadae56e124a529d6ffa6bc"]}}}]}
973391ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973392ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973493ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973558ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973682ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973756ms th_a application.cpp:421 handle_transaction ] Got transaction from network
973823ms th_a application.cpp:421 handle_transaction ] Got transaction from network
witness_node: /home/calabiyau/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Aborted (core dumped)
Woke up to this too:
3169758ms th_a db_with.hpp:82 ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":507,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-14T05:52:49"},"format":"(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end(): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":563,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-14T05:52:49"},"format":"","data":{"trx":{"ref_block_num":60494,"ref_block_prefix":2734476150,"expiration":"2015-09-14T05:56:32","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.17263","amount":{"amount":1,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["205c4049776969e7b3fa9ad3ee401a46b757a6b0821dcf2c0c2f4e79c9beac8bf60c37d51f490092708be2348d11228ea6ad225db2d884a96b26c5b990e6746ada"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":205,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-14T05:52:49"},"format":"","data":{"trx":{"ref_block_num":60494,"ref_block_prefix":2734476150,"expiration":"2015-09-14T05:56:32","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.63354","to":"1.2.17263","amount":{"amount":1,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["205c4049776969e7b3fa9ad3ee401a46b757a6b0821dcf2c0c2f4e79c9beac8bf60c37d51f490092708be2348d11228ea6ad225db2d884a96b26c5b990e6746ada"]}}}]}
This problem (if not just a warning) seems to have disappeared now.
I think I have identified the cause of this problem. Can you confirm that it occurred while you were using the RPC interface?
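The failed assertion in both crash logs, `(skip & skip_transaction_dupe_check) || trx_idx...find(trx_id) == ...end()`, is the duplicate-transaction check: unless the dupe-check skip flag is set, a transaction whose id is already in the `by_trx_id` index is rejected. A minimal Python sketch of that logic (class, method, and flag value here are illustrative, not the actual graphene API):

```python
# Illustrative stand-in for graphene's skip_transaction_dupe_check flag.
SKIP_TRANSACTION_DUPE_CHECK = 1 << 2

class Database:
    """Toy model of the dupe check in _apply_transaction."""

    def __init__(self):
        self.trx_index = set()  # stands in for the by_trx_id multi-index

    def apply_transaction(self, trx_id, skip=0):
        # Mirrors: FC_ASSERT((skip & skip_transaction_dupe_check) ||
        #                    trx_idx.find(trx_id) == trx_idx.end())
        if not (skip & SKIP_TRANSACTION_DUPE_CHECK) and trx_id in self.trx_index:
            raise AssertionError("duplicate transaction: %s" % trx_id)
        self.trx_index.add(trx_id)

db = Database()
db.apply_transaction("abc123")          # first push succeeds
try:
    db.apply_transaction("abc123")      # second push of the same id fails
except AssertionError as e:
    print("rejected:", e)
```

This matches the symptom in the logs: the same signed transfer (identical signature, hence identical trx id) was pushed again while the first copy was still in the index, which is why the pending-transaction replay tripped the assert.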
635026ms th_a application.cpp:383 handle_block ] Got block #206473 from network
635027ms th_a db_block.cpp:98 push_block ] new_block.block_num(): 206473 new_block.id(): 00032689071a6235803359fc4b6a6ed6db088c10
635033ms th_a db_with.hpp:80 ~pending_transaction ] Pending transaction became invalid after switching to block 00032689071a6235803359fc4b6a6ed6db088c10
635034ms th_a db_with.hpp:81 ~pending_transaction ] The invalid pending transaction is {"ref_block_num":9862,"ref_block_prefix":2768671,"expiration":"2015-09-15T12:10:51","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":10,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["204130f6705ded8e60306341c81b97f967283c50e30d48c2296dc03ee61e19f30759452f4f89bc113a696f009144a8f0c29c46812300982819a7e194dd7d4aebd7"],"operation_results":[[0,{}]]}
635036ms th_a db_with.hpp:82 ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":508,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:10:35"},"format":"(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end(): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":564,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:10:35"},"format":"","data":{"trx":{"ref_block_num":9862,"ref_block_prefix":2768671,"expiration":"2015-09-15T12:10:51","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":10,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["204130f6705ded8e60306341c81b97f967283c50e30d48c2296dc03ee61e19f30759452f4f89bc113a696f009144a8f0c29c46812300982819a7e194dd7d4aebd7"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":206,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:10:35"},"format":"","data":{"trx":{"ref_block_num":9862,"ref_block_prefix":2768671,"expiration":"2015-09-15T12:10:51","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":10,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["204130f6705ded8e60306341c81b97f967283c50e30d48c2296dc03ee61e19f30759452f4f89bc113a696f009144a8f0c29c46812300982819a7e194dd7d4aebd7"]}}}]}
1657120ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d56377e74af799db9b758a71c7e9163979
1667123ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1667123ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d59c6c160f7174fbfbba5e6b8c63ae1b95
1672747ms th_a application.cpp:383 handle_block ] Got block #206558 from network
1672747ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206558 new_block.id(): 000326dee3d9a84169d0bdf5f0b46a459a768795
1672870ms th_a db_with.hpp:80 ~pending_transaction ] Pending transaction became invalid after switching to block 000326dee3d9a84169d0bdf5f0b46a459a768795
1672871ms th_a db_with.hpp:81 ~pending_transaction ] The invalid pending transaction is {"ref_block_num":9949,"ref_block_prefix":2962347885,"expiration":"2015-09-15T12:21:00","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":100,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["2028d818a60cd6ab98b5ad892c3a2e84b44346c8489b6ceadef390e741c9392ffb510f0da09874b52239b76cad57586ad2bcafa4606092bc75314c7093aa39a32d"],"operation_results":[[0,{}]]}
1672871ms th_a db_with.hpp:82 ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":533,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:27:52"},"format":"_pending_block.timestamp <= trx.expiration: ","data":{"pending.timestamp":"2015-09-15T12:27:55","trx.exp":"2015-09-15T12:21:00"}},{"context":{"level":"warn","file":"db_block.cpp","line":563,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:27:52"},"format":"","data":{"trx":{"ref_block_num":9949,"ref_block_prefix":2962347885,"expiration":"2015-09-15T12:21:00","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":100,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["2028d818a60cd6ab98b5ad892c3a2e84b44346c8489b6ceadef390e741c9392ffb510f0da09874b52239b76cad57586ad2bcafa4606092bc75314c7093aa39a32d"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":205,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:27:52"},"format":"","data":{"trx":{"ref_block_num":9949,"ref_block_prefix":2962347885,"expiration":"2015-09-15T12:21:00","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":100,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["2028d818a60cd6ab98b5ad892c3a2e84b44346c8489b6ceadef390e741c9392ffb510f0da09874b52239b76cad57586ad2bcafa4606092bc75314c7093aa39a32d"]}}}]}
1672871ms th_a db_with.hpp:80 ~pending_transaction ] Pending transaction became invalid after switching to block 000326dee3d9a84169d0bdf5f0b46a459a768795
1672871ms th_a db_with.hpp:81 ~pending_transaction ] The invalid pending transaction is {"ref_block_num":9949,"ref_block_prefix":2962347885,"expiration":"2015-09-15T12:21:01","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":100,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["2006c75e648704e5b50bd596c2bb7f5bb34f5799559a1116d82c8d490f773bc611244773c9ea44b4546e72e25842662b97adc1158b5e1428403fede8eba885f22e"],"operation_results":[[0,{}]]}
1672872ms th_a db_with.hpp:82 ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":533,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:27:52"},"format":"_pending_block.timestamp <= trx.expiration: ","data":{"pending.timestamp":"2015-09-15T12:27:55","trx.exp":"2015-09-15T12:21:01"}},{"context":{"level":"warn","file":"db_block.cpp","line":563,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:27:52"},"format":"","data":{"trx":{"ref_block_num":9949,"ref_block_prefix":2962347885,"expiration":"2015-09-15T12:21:01","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":100,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["2006c75e648704e5b50bd596c2bb7f5bb34f5799559a1116d82c8d490f773bc611244773c9ea44b4546e72e25842662b97adc1158b5e1428403fede8eba885f22e"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":205,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:27:52"},"format":"","data":{"trx":{"ref_block_num":9949,"ref_block_prefix":2962347885,"expiration":"2015-09-15T12:21:01","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":100,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["2006c75e648704e5b50bd596c2bb7f5bb34f5799559a1116d82c8d490f773bc611244773c9ea44b4546e72e25842662b97adc1158b5e1428403fede8eba885f22e"]}}}]}
1672872ms th_a db_with.hpp:80 ~pending_transaction ] Pending transaction became invalid after switching to block 000326dee3d9a84169d0bdf5f0b46a459a768795
1672872ms th_a db_with.hpp:81 ~pending_transaction ] The invalid pending transaction is {"ref_block_num":9949,"ref_block_prefix":2962347885,"expiration":"2015-09-15T12:21:00","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":10,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["1f533969fa5a32713259232de97ab741958885d7e5f638d9cd8915cf416357c3c65c54365f2fce8aa89c44893fb30639bc623836ae93b13976a3906699fe581954"],"operation_results":[[0,{}]]}
1672872ms th_a db_with.hpp:82 ~pending_transaction ] The invalid pending transaction caused exception {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":533,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:27:52"},"format":"_pending_block.timestamp <= trx.expiration: ","data":{"pending.timestamp":"2015-09-15T12:27:55","trx.exp":"2015-09-15T12:21:00"}},{"context":{"level":"warn","file":"db_block.cpp","line":563,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:27:52"},"format":"","data":{"trx":{"ref_block_num":9949,"ref_block_prefix":2962347885,"expiration":"2015-09-15T12:21:00","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":10,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["1f533969fa5a32713259232de97ab741958885d7e5f638d9cd8915cf416357c3c65c54365f2fce8aa89c44893fb30639bc623836ae93b13976a3906699fe581954"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":205,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-15T12:27:52"},"format":"","data":{"trx":{"ref_block_num":9949,"ref_block_prefix":2962347885,"expiration":"2015-09-15T12:21:00","operations":[[0,{"fee":{"amount":2000000,"asset_id":"1.3.0"},"from":"1.2.28323","to":"1.2.69491","amount":{"amount":10,"asset_id":"1.3.0"},"extensions":[]}]],"extensions":[],"signatures":["1f533969fa5a32713259232de97ab741958885d7e5f638d9cd8915cf416357c3c65c54365f2fce8aa89c44893fb30639bc623836ae93b13976a3906699fe581954"]}}}]}
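The assert in this batch is different: `_pending_block.timestamp <= trx.expiration` fails because the chain stalled long enough for the pending transfers to expire. The exception data makes this concrete: the pending block timestamp is 2015-09-15T12:27:55 while the transactions expire at 12:21:00 and 12:21:01. The check can be replicated directly from the log fields:

```python
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S"

def trx_still_valid(pending_block_timestamp, trx_expiration):
    """Replicates the _pending_block.timestamp <= trx.expiration check."""
    return (datetime.strptime(pending_block_timestamp, FMT)
            <= datetime.strptime(trx_expiration, FMT))

# Values taken from the exception data above:
print(trx_still_valid("2015-09-15T12:27:55", "2015-09-15T12:21:00"))  # False
```

So these entries are expected cleanup rather than a bug in themselves: expired pending transactions are dropped with this exception when the node switches to a new head block.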
1732104ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1732104ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d54047d70832f0b643458cad10f121f661
1737105ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1737105ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d508b9036f52b63162bb2561d47b2988d9
1747107ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1747107ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d532008089670958f7496e38e4b2537e3e
1757103ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1757104ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d51bbe23a5c436dd55b2b4366a59d0906c
1762108ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1762109ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d56496799377d7f45fd9e367cc51ae72fb
1767104ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1767104ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d55790fa2d0fdcd09c02bbf0e8efa37e28
1777116ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1777116ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d5d595541d5c3e5691666fc52a9a72dbd1
1782104ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1782104ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d5c5706a78afd68b1bfc9624bca0082f73
1787106ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1787106ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d5173ef9325e7fe14bb657bdb8a5e38dce
1792105ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1792105ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d538b13494e6dfea6455ae8baa802ec509
1807107ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1807107ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d5da8d3ee1d5ae32ce20266bac1006794c
1812112ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1812113ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d5f1de54377d4aacbb540aa1240c55cdd6
1822106ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1822106ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d5990eb7c1330bde9dc37931367c4d47ff
1827107ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1827108ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d5c6d5ebf67c31a4cbdde664dc96d82d15
1832107ms th_a application.cpp:383 handle_block ] Got block #206549 from network
1832107ms th_a db_block.cpp:97 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d5a25274db6aaeec31adc1717a6fe48767
Do I keep getting the same block number, but a different block, over and over?
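Those are different blocks at the same height, i.e. competing forks: in Graphene the first four bytes of a block id encode the block number, so every id beginning 000326d5 is a distinct candidate for block 206549. This is easy to check against the ids in the log:

```python
def block_num_from_id(block_id):
    # The first 8 hex chars (4 bytes) of a Graphene block id are the
    # big-endian block number; the rest is derived from the block hash.
    return int(block_id[:8], 16)

# Ids taken from the log above - same height, different blocks:
for bid in ("000326d56377e74af799db9b758a71c7e9163979",
            "000326d59c6c160f7174fbfbba5e6b8c63ae1b95",
            "000326d5d595541d5c3e5691666fc52a9a72dbd1"):
    print(bid[:8], "->", block_num_from_id(bid))  # each -> 206549
```

Seeing a new id for block 206549 every few seconds therefore means different witnesses keep producing alternatives at the same height without the chain advancing past it, which is consistent with a fork / stalled-consensus situation rather than the node re-downloading one block.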
Not getting new blocks at the moment; the node seems stuck.
2931434ms th_a application.cpp:688 get_blockchain_synop ] synopsis: ["000326d238e7ae19d2c3835531c0ba5ae8024b78"]
2931786ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206547 new_block.id(): 000326d32d03860502c20f80cfba7188a58902cd new_block.witness: 1.6.28 new_block.timestamp: 2015-09-15T12:19:15
2931787ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206548 new_block.id(): 000326d418f9954d1a053be245d67abcc74fb333 new_block.witness: 1.6.42 new_block.timestamp: 2015-09-15T12:19:25
2931787ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d5b6eeec78e0cf9c0a6f4b7c495150568e new_block.witness: 1.6.24 new_block.timestamp: 2015-09-15T12:19:30
2931788ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206550 new_block.id(): 000326d66e59c5d4d44f3d7e24bc8202b8fc822d new_block.witness: 1.6.13 new_block.timestamp: 2015-09-15T12:19:35
2931788ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206551 new_block.id(): 000326d7acf81a715dc0d1f3ddd037ed54f36ab5 new_block.witness: 1.6.58 new_block.timestamp: 2015-09-15T12:19:40
2931789ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206552 new_block.id(): 000326d8568bc6fa597261862a7186dcd8a98667 new_block.witness: 1.6.69 new_block.timestamp: 2015-09-15T12:19:45
2931790ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206553 new_block.id(): 000326d9a89fbb94f98652b537b6858d3a07269d new_block.witness: 1.6.68 new_block.timestamp: 2015-09-15T12:19:50
2931790ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206554 new_block.id(): 000326da9c0f44e5ff5ea5ee4019a0a1ccdb3f66 new_block.witness: 1.6.39 new_block.timestamp: 2015-09-15T12:19:55
2931791ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206555 new_block.id(): 000326dbba7f985cd09b8b9b2492d14bc7e18ddb new_block.witness: 1.6.51 new_block.timestamp: 2015-09-15T12:20:00
2931847ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206556 new_block.id(): 000326dce6e9ebaf766cae6139a539a731893c9e new_block.witness: 1.6.5252 new_block.timestamp: 2015-09-15T12:20:25
Looks like the master node with the init witnesses is stuck (probably because Ben hasn't been around to update it with all the latest changes)
2931434ms th_a application.cpp:688 get_blockchain_synop ] synopsis: ["000326d238e7ae19d2c3835531c0ba5ae8024b78"]
2931786ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206547 new_block.id(): 000326d32d03860502c20f80cfba7188a58902cd new_block.witness: 1.6.28 new_block.timestamp: 2015-09-15T12:19:15
2931787ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206548 new_block.id(): 000326d418f9954d1a053be245d67abcc74fb333 new_block.witness: 1.6.42 new_block.timestamp: 2015-09-15T12:19:25
2931787ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206549 new_block.id(): 000326d5b6eeec78e0cf9c0a6f4b7c495150568e new_block.witness: 1.6.24 new_block.timestamp: 2015-09-15T12:19:30
2931788ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206550 new_block.id(): 000326d66e59c5d4d44f3d7e24bc8202b8fc822d new_block.witness: 1.6.13 new_block.timestamp: 2015-09-15T12:19:35
2931788ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206551 new_block.id(): 000326d7acf81a715dc0d1f3ddd037ed54f36ab5 new_block.witness: 1.6.58 new_block.timestamp: 2015-09-15T12:19:40
2931789ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206552 new_block.id(): 000326d8568bc6fa597261862a7186dcd8a98667 new_block.witness: 1.6.69 new_block.timestamp: 2015-09-15T12:19:45
2931790ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206553 new_block.id(): 000326d9a89fbb94f98652b537b6858d3a07269d new_block.witness: 1.6.68 new_block.timestamp: 2015-09-15T12:19:50
2931790ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206554 new_block.id(): 000326da9c0f44e5ff5ea5ee4019a0a1ccdb3f66 new_block.witness: 1.6.39 new_block.timestamp: 2015-09-15T12:19:55
2931791ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206555 new_block.id(): 000326dbba7f985cd09b8b9b2492d14bc7e18ddb new_block.witness: 1.6.51 new_block.timestamp: 2015-09-15T12:20:00
2931847ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206556 new_block.id(): 000326dce6e9ebaf766cae6139a539a731893c9e new_block.witness: 1.6.5252 new_block.timestamp: 2015-09-15T12:20:25
2931847ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206557 new_block.id(): 000326dd6dd791b001061dff9002e71ad2e3a85f new_block.witness: 1.6.1063 new_block.timestamp: 2015-09-15T12:20:30
2931848ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206558 new_block.id(): 000326dee3d9a84169d0bdf5f0b46a459a768795 new_block.witness: 1.6.1538 new_block.timestamp: 2015-09-15T12:27:50
2931902ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206559 new_block.id(): 000326df5f1d8789b1d4be4fcc709f488736b3f0 new_block.witness: 1.6.1527 new_block.timestamp: 2015-09-15T12:30:45
2931958ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206560 new_block.id(): 000326e05ee43c32b52bd4924d9ef846da414ff3 new_block.witness: 1.6.4232 new_block.timestamp: 2015-09-15T12:32:15
2931959ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206561 new_block.id(): 000326e15aad53e16fae3eaeb3f904dccbeea335 new_block.witness: 1.6.3968 new_block.timestamp: 2015-09-15T12:32:25
2931959ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206562 new_block.id(): 000326e27314e948f62035d14f5dbcf9774bb2e7 new_block.witness: 1.6.1527 new_block.timestamp: 2015-09-15T12:36:25
2932021ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206563 new_block.id(): 000326e3b3a2f68bf1004ac6360033b34bdff358 new_block.witness: 1.6.1538 new_block.timestamp: 2015-09-15T12:37:15
2932021ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206564 new_block.id(): 000326e40425207efe4ef960af42e0e918fb77c5 new_block.witness: 1.6.1063 new_block.timestamp: 2015-09-15T12:37:35
2932022ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206565 new_block.id(): 000326e5289b7f016a6903f87bf36ea6adac931c new_block.witness: 1.6.4232 new_block.timestamp: 2015-09-15T12:37:45
2932022ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206566 new_block.id(): 000326e65b9d1cec3c728ac2ee653925da819a1d new_block.witness: 1.6.5247 new_block.timestamp: 2015-09-15T12:39:55
2932022ms th_a db_block.cpp:105 push_block ] new_block.block_num(): 206567 new_block.id(): 000326e79d32f42bb6d06efb780a4a15b9b946a4 new_block.witness: 1.6.3968 new_block.timestamp: 2015-09-15T12:40:25
2932081ms th_a application.cpp:688 get_blockchain_synop ] synopsis: ["000326d238e7ae19d2c3835531c0ba5ae8024b78","000326dd6dd791b001061dff9002e71ad2e3a85f","000326e3b3a2f68bf1004ac6360033b34bdff358","000326e65b9d1cec3c728ac2ee653925da819a1d","000326e79d32f42bb6d06efb780a4a15b9b946a4"]
It looks like everyone who has upgraded is still producing blocks; please upgrade when you get the chance.
When Ben gets in today I will have him update the main node.
247300ms th_a db_block.cpp:181 _push_block ] Failed to push new block:
3070000 undo_database_exception: undo database exception
The database does not have enough undo history to support a blockchain with so many missed blocks. Please add a checkpoint if you would like to continue applying blocks beyond this point.
{"recently_missed":13547,"max_undo":1000}
th_a db_update.cpp:72 update_global_dynamic_data
{"next_block.block_num()":206568}
th_a db_block.cpp:469 _apply_block
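The rejection above follows directly from the numbers in the exception JSON. A minimal sketch of the implied check (field names taken from the error; this is illustrative, not the actual Graphene code in db_update.cpp):

```python
# Sketch of the undo-history check implied by the exception above.
# Field names come from the error JSON {"recently_missed":13547,"max_undo":1000};
# illustrative only, not the real implementation.
def can_apply_block(recently_missed: int, max_undo: int) -> bool:
    """A node refuses further blocks once missed slots exceed its undo capacity."""
    return recently_missed <= max_undo

# Values from the exception: 13547 missed slots vs. a 1000-entry undo budget,
# hence the "please add a checkpoint" message.
assert can_apply_block(recently_missed=13547, max_undo=1000) is False
```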
We may need a new chain if the master node missed too many blocks.
May I get an update on the current Testnet parameters/status?
known active seed node(s) :
chain_id :
head_block_id (at time) :
info
{
"head_block_num": 206567,
"head_block_id": "000326e79d32f42bb6d06efb780a4a15b9b946a4",
"head_block_age": "18 hours old",
"next_maintenance_time": "18 hours ago",
"chain_id": "ecbde738ba0b319cb4d266e613b200d010da8b37313c20aec03f9c8e2d9b35e3",
"participation": "6.25000000000000000",
"active_witnesses": [
At this point the test net is dead. We are waiting on a new one.
What is an easy way to compare performance of one of my VPS witnesses to my cubox (arm) witness?
--replay-blockchain
init witnesses are down for an upgrade
At this point the test net is dead. We are waiting on a new one.
How do we have a live countdown to launch when the testnet doesn't even work? Am I missing something here?
init witnesses are down for an upgrade
Very well.
We are preparing a new test network that will be a dry-run for the actual BTS 2.0 launch process. This means the following:
1. It will have 3 second blocks
2. It will have 1 real computer per witness (but we will control all initial witnesses)
3. There will only be 11 initial witnesses, and we will VOTE in more
4. We will have 1 hour maintenance intervals; this should improve reindexing time by a factor of 12
5. We will have lower transfer fees for testing purposes
6. We will be producing price feeds for bitassets
7. We will be producing a Mac / Windows GUI distribution
8. There will be a new genesis file from a recent snapshot.
The GUI distributions will be available next week. This week we are focusing on our infrastructure.
Before we can launch this test network we are putting in some scheduled minor hard-forking changes that have built up over the past several weeks of the last test network.
Our goal for this network is to pretend it is the real launch and to have it go as smoothly as possible.
We are preparing a new test network that will be a dry-run [...] Our goal for this network is to pretend it is the real launch and to have it go as smoothly as possible.
Very much appreciate the confirmation.
We are preparing a new test network that will be a dry-run for the actual BTS 2.0 launch process. [...]
happy to hear this +5%
+5%
engines idle.....
We are preparing a new test network that will be a dry-run for the actual BTS 2.0 launch process. [...]
Is the 0.9.3 update coming, so that we can easily import keys?
0.9.3 has some issues?
Source?
We are preparing a new test network that will be a dry-run for the actual BTS 2.0 launch process. [...]
Is the 0.9.3 update coming, so that we can easily import keys?
Just released it. Windows build is in the works.
We are preparing a new test network that will be a dry-run for the actual BTS 2.0 launch process. [...]
Updating 81cc8e4..f0502ee
Fast-forward
libraries/app/database_api.cpp | 1 +
libraries/app/impacted.cpp | 11 +++++++-
libraries/chain/db_block.cpp | 174 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-------------------------------------------------------
libraries/chain/db_maint.cpp | 21 +++------------
libraries/chain/db_management.cpp | 44 +++++++++---------------------
libraries/chain/db_market.cpp | 11 ++++++--
libraries/chain/db_update.cpp | 6 -----
libraries/chain/fork_database.cpp | 2 ++
libraries/chain/include/graphene/chain/database.hpp | 10 ++++---
libraries/chain/include/graphene/chain/db_with.hpp | 30 ++++++++++++++++-----
libraries/chain/include/graphene/chain/exceptions.hpp | 2 ++
libraries/chain/include/graphene/chain/protocol/asset_ops.hpp | 25 +++++++++++++++++
libraries/chain/include/graphene/chain/protocol/operations.hpp | 6 +++--
libraries/chain/proposal_evaluator.cpp | 2 +-
libraries/chain/vesting_balance_evaluator.cpp | 2 +-
tests/common/database_fixture.cpp | 14 +++++++---
tests/tests/authority_tests.cpp | 32 ++++++++++++----------
tests/tests/block_tests.cpp | 8 +++---
18 files changed, 231 insertions(+), 170 deletions(-)
james@james-desktop:~/github/graphene$ git submodule update --init --recursive
james@james-desktop:~/github/graphene$
I see the recent commit, but do not see a new tag or release. Is there a new testnet up using master? What is the seed node(s) and where is the genesis.json file located?
I think there is no new test net right now. The last one is dead, so we need to wait for the new one.
Testers that want to flood the network, do not use your witness node as the RPC server.
Building..
Also please take note of the new logging to the console that includes latencies of blocks received. Those on a DigitalOcean machine in NYC3 should have around 0 ms latency for many blocks ;)
Also... to get your witness voted in you will have to vote for all 11 init witnesses PLUS your own witness.
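The new latency logging can be scraped straight from the console output. A small parsing sketch, based on the format of the handle_block console lines that appear later in this thread (the regex itself is my assumption, not part of the client):

```python
import re

# Matches the handle_block console line, e.g.:
# "... Got block #11840 with time 2015-09-19T05:29:18 from network with latency of 393 ms from init6"
LINE = re.compile(r"Got block #(\d+) with time \S+ from network with latency of (\d+) ms from (\S+)")

def parse_latency(line: str):
    """Return (block_num, latency_ms, witness) or None if the line doesn't match."""
    m = LINE.search(line)
    if not m:
        return None
    return int(m.group(1)), int(m.group(2)), m.group(3)

sample = ("1758398ms th_a application.cpp:388 handle_block ] Got block #11840 "
          "with time 2015-09-19T05:29:18 from network with latency of 393 ms from init6")
assert parse_latency(sample) == (11840, 393, "init6")
```

Piping the witness_node console output through a script like this makes it easy to compare per-witness latencies.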
https://github.com/cryptonomex/graphene/releases
New Testnet is Here!
Please post your witness name so I can vote them in and add them to the growing list of active witnesses.
get_witness in.abit
{
"id": "1.6.5247",
"witness_account": "1.2.38993",
"last_aslot": 0,
"signing_key": "GPH65XNUxWdYGqGyW9NtXdRpNntumLYT1cJ7CNE7F78Pwxrnx6cbV",
"vote_id": "1:5267",
"total_votes": 0,
"url": "https://github.com/abitmore",
"total_missed": 0
}
@bytemaster
When will Windows and Mac binaries of 0.9.3 be available?
https://github.com/cryptonomex/graphene/releases
New Testnet is Here!
Please post your witness name so I can vote them in and add them to the growing list of active witnesses.
Running.
get_witness in.abit
{
"id": "1.6.5247",
"witness_account": "1.2.38993",
"last_aslot": 0,
"signing_key": "GPH65XNUxWdYGqGyW9NtXdRpNntumLYT1cJ7CNE7F78Pwxrnx6cbV",
"vote_id": "1:5267",
"total_votes": 0,
"url": "https://github.com/abitmore",
"total_missed": 0
}
Thanks.
https://github.com/cryptonomex/graphene/releases
New Testnet is Here!
Please post your witness name so I can vote them in and add them to the growing list of active witnesses.
Running.
get_witness in.abit
{
"id": "1.6.5247",
"witness_account": "1.2.38993",
"last_aslot": 0,
"signing_key": "GPH65XNUxWdYGqGyW9NtXdRpNntumLYT1cJ7CNE7F78Pwxrnx6cbV",
"vote_id": "1:5267",
"total_votes": 0,
"url": "https://github.com/abitmore",
"total_missed": 0
}
Voted: "next_maintenance_time": "44 minutes in the future" is when you will be active.
Is https://graphene.bitshares.org still running on the test2 network?
get_witness xeldal
{
"id": "1.6.4949",
"witness_account": "1.2.86459",
get_witness fox
{
"id": "1.6.2104",
"witness_account": "1.2.30566",
get_witness wackou
{
"id": "1.6.5248",
"witness_account": "1.2.83349",
"last_aslot": 0,
"signing_key": "GPH8C1Cz3LDu732VT74bYvNE2G25NLghV96zcMnFwLd4Z6aXWup9i",
"vote_id": "1:5268",
"total_votes": 0,
"url": "http://digitalgaia.io",
"total_missed": 0
}
set_voting_proxy your-account-name puppies true
get_witness init10
{
"id": "1.6.11",
"witness_account": "1.2.110",
...
"total_votes": "857124062634",
...
}
Remind me of the precision of CORE. 5 or 6? Do we need 8.5M or 850K votes to be elected as a witness?
get_witness init10
{
"id": "1.6.11",
"witness_account": "1.2.110",
...
"total_votes": "857124062634",
...
}
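Working the arithmetic under an assumed precision of 5 for CORE (the usual BTS convention, but confirm against the asset object on this testnet):

```python
# Convert the raw on-chain vote tally to whole CORE, assuming precision 5.
# If precision were 6 instead, divide by 10**6 (giving ~857K rather than ~8.57M).
raw_votes = 857124062634   # total_votes from get_witness init10 above
precision = 5              # assumed; confirm via get_asset on the CORE asset
whole_core = raw_votes / 10**precision
assert round(whole_core) == 8571241  # ~8.57 million CORE at precision 5
```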
To be honest, I was not planning to bring that much stake to this chain at this time. Your votes are appreciated.
Looks like BM voted me in, but voted out 1.6.11 (or init10?)
That's not me. I think that is BM. We aren't trying to outvote the init witnesses. We are trying to increase the average number of witnesses voted for per account. That is how the number of active witnesses is determined. Or at least that's the way it was a couple of weeks ago, and I don't think that has changed.
Should know if it worked in about 2 minutes.
I'll remove init10 from my vote. That will get you back in within an hour. I think we will have to wait for Dan to vote for more tomorrow. Sorry about that, abit. I was hoping we could increase the witness slots.
Don't know why.
//Edit:
Ah, I know: because in.abit voted only for in.abit in the last round.
Looks like currently the max number of witnesses is 11.
After setting the proxy to puppies, in.abit votes for all witnesses, so it got voted out.
get_witness riverhead
{
"id": "1.6.3968",
"witness_account": "1.2.67253",
"last_aslot": 0,
"signing_key": "GPH6BJYGHftujnbttFFKX6YacnvsMd4sbJrbucg682GiU4vmXHTik",
"vote_id": "1:3967",
"total_votes": 0,
"url": "",
"total_missed": 0
}
james@james-desktop:~/github/graphene/programs/cli_wallet$ ./cli_wallet -w test3 --chain-id 0f8b631d7a9dfebf16d6776fab96b629a14429762bf9c3eb95db1e4e4af637a4 -s ws://192.168.1.11:8090
Logging RPC to file: logs/rpc/rpc.log
109742ms th_a main.cpp:114 main ] key_to_wif( committee_private_key ): 5KCBDTcyDqzsqehcb52tW5nU6pXife6V2rX9Yf7c3saYSzbDZ5W
109743ms th_a main.cpp:118 main ] nathan_pub_key: GPH6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
109743ms th_a main.cpp:119 main ] key_to_wif( nathan_private_key ): 5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3
109745ms th_a main.cpp:166 main ] wdata.ws_server: ws://192.168.1.11:8090
109750ms th_a main.cpp:171 main ] wdata.ws_user: wdata.ws_password:
109765ms th_a wallet.cpp:700 load_wallet_file ] Account 1.2.2331 : "aliya" updated on chain
109771ms th_a thread.cpp:95 thread ] name:getline tid:140217568634624
111105ms th_a wallet.cpp:723 save_wallet_file ] saving wallet to file test3
pure virtual method called
terminate called without an active exception
Aborted (core dumped)
james@james-desktop:~/github/graphene/programs/cli_wallet$
1758265ms th_a application.cpp:428 handle_transaction ] Got transaction from network
1758281ms th_a application.cpp:428 handle_transaction ] Got transaction from network
1758299ms th_a application.cpp:428 handle_transaction ] Got transaction from network
1758394ms th_a application.cpp:428 handle_transaction ] Got transaction from network
1758398ms th_a application.cpp:388 handle_block ] Got block #11840 with time 2015-09-19T05:29:18 from network with latency of 393 ms from init6
1758402ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00002e364abd058627066531933247506f6702ef","00002e3b1016d465a146cbe56883f153dfd535ea","00002e3d21092a12f05a169774d0d212669d3bc4","00002e3ef9328450f51dd107276d69b6981afcc1"]
1758404ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00002e364abd058627066531933247506f6702ef","00002e3c2cbd1af001bec13c5332c92ae130bdad","00002e3f3be98a54f20afd35a9dc43147def829b","00002e40a7bf3ed2a7cae24f26740b5af78d8627"]
1758404ms th_a application.cpp:428 handle_transaction ] Got transaction from network
1758405ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00002e364abd058627066531933247506f6702ef","00002e3c2cbd1af001bec13c5332c92ae130bdad","00002e3f3be98a54f20afd35a9dc43147def829b","00002e40a7bf3ed2a7cae24f26740b5af78d8627"]
witness_node: /home/james/github/graphene/libraries/net/node.cpp:2488: void graphene::net::detail::node_impl::on_blockchain_item_ids_inventory_message(graphene::net::peer_connection*, const graphene::net::blockchain_item_ids_inventory_message&): Assertion `originating_peer->last_block_number_delegate_has_seen == _delegate->get_block_number(originating_peer->last_block_delegate_has_seen)' failed.
Aborted (core dumped)
2015-09-19T05:29:18 p2p:process_backlog_of_sync_blocks trigger_fetch_sync_i ] Triggering fetch sync items loop now node.cpp:1083
2015-09-19T05:29:18 p2p:fetch_sync_items_loop fetch_sync_items_loo ] beginning another iteration of the sync items loop node.cpp:1022
2015-09-19T05:29:18 p2p:fetch_sync_items_loop fetch_sync_items_loo ] no sync items to fetch right now, going to sleep node.cpp:1072
2015-09-19T05:29:18 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 192.241.198.6:58168's last block the delegate has seen is now 00002e40a7bf3ed2a7cae24f26740b5af78d8627 (actual block #11836, tracked block #11836) node.cpp:2487
Added you riverhead, but it won't be enough to bring you in. Unfortunately I am not a whale. Not even a dolphin.
On a side note, is there a feed script yet? If not what is the syntax to publish a feed manually?
publish_asset_feed(string, string, graphene::chain::price_feed, bool)
I was guessing account_name asset_name price broadcast? but that's not working. Might be because I am not voted in yet. If someone figures it out please post.
get_witness init9
{
"id": "1.6.10",
"witness_account": "1.2.109",
"last_aslot": 17927,
"signing_key": "GPH6aRys1uA71La2EyA1sjLqGzZzwPXc9dm6NCcWARrsDn7Y6EPoc",
"pay_vb": "1.13.6",
"vote_id": "1:9",
"total_votes": "2079167368873",
"url": "",
"total_missed": 65
}
get_witness init10
{
"id": "1.6.11",
"witness_account": "1.2.110",
"last_aslot": 8372,
"signing_key": "GPH5uXNw7r167Dhuf4qMz1BTj74rAeicYCARTifpmrcfFhRZquU8B",
"pay_vb": "1.13.5",
"vote_id": "1:10",
"total_votes": "872119491831",
"url": "",
"total_missed": 64
}
Added you riverhead, but it won't be enough to bring you in. Unfortunately I am not a whale. Not even a dolphin.
On a side note, is there a feed script yet? If not what is the syntax to publish a feed manually?
publish_asset_feed(string, string, graphene::chain::price_feed, bool)
I was guessing account_name asset_name price broadcast? but thats not working. Might be because I am not voted in yet. If someone figures it out please post.
#define GRAPHENE_PRICE_FEED_FIELDS (settlement_price)(maintenance_collateral_ratio)(maximum_short_squeeze_ratio) \
(core_exchange_rate)
I figured out the syntax of publishing a price feed, but the transaction got refused by the chain.
Added you riverhead, but it won't be enough to bring you in. Unfortunately I am not a whale. Not even a dolphin.
On a side note, is there a feed script yet? If not what is the syntax to publish a feed manually?
publish_asset_feed(string, string, graphene::chain::price_feed, bool)
I was guessing account_name asset_name price broadcast? but thats not working. Might be because I am not voted in yet. If someone figures it out please post.
See https://github.com/cryptonomex/graphene/blob/93c72b05951ad2fd0f1a07b702773e9a905d8adc/libraries/wallet/include/graphene/wallet/wallet.hpp#L916-L935
Don't know the syntax of struct "price_feed" though..
#define GRAPHENE_PRICE_FEED_FIELDS (settlement_price)(maintenance_collateral_ratio)(maximum_short_squeeze_ratio) \
(core_exchange_rate)
Is it still cool to set up a witness on a cubox at home? Or would a fresh VPS be better? Or both?
My witness is running at home.
get_witness spartako
{
"id": "1.6.4232",
"witness_account": "1.2.72822",
....
}
get_witness xeldal
{
"id": "1.6.4949",
"witness_account": "1.2.86459",
publish_asset_feed in.abit ABITUSDA {"settlement_price":{"base":{"amount":50000,"asset_id":"1.3.662"},"quote":{"amount":1000000000,"asset_id":"1.3.0"}},"core_exchange_rate":{"base":{"amount":100000,"asset_id":"1.3.0"},"quote":{"amount":10000,"asset_id":"1.3.662"}}} true
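The working command above shows the JSON shape of the price_feed argument. A small helper sketch that builds it (the helper name is hypothetical; the maintenance_collateral_ratio and maximum_short_squeeze_ratio fields listed in GRAPHENE_PRICE_FEED_FIELDS are omitted here, matching the command that was accepted):

```python
import json

def make_price_feed(base_amount, base_asset, quote_amount, quote_asset,
                    cer_base_amount, cer_base_asset,
                    cer_quote_amount, cer_quote_asset):
    """Build the price_feed JSON for publish_asset_feed.
    Hypothetical helper; field names follow the working command above."""
    return {
        "settlement_price": {
            "base":  {"amount": base_amount,  "asset_id": base_asset},
            "quote": {"amount": quote_amount, "asset_id": quote_asset},
        },
        "core_exchange_rate": {
            "base":  {"amount": cer_base_amount,  "asset_id": cer_base_asset},
            "quote": {"amount": cer_quote_amount, "asset_id": cer_quote_asset},
        },
    }

# Reproduces the ABITUSDA feed from the command above.
feed = make_price_feed(50000, "1.3.662", 1000000000, "1.3.0",
                       100000, "1.3.0", 10000, "1.3.662")
# json.dumps(feed) is the string that gets pasted into the cli_wallet command.
assert json.loads(json.dumps(feed))["settlement_price"]["base"]["amount"] == 50000
```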
I still have 0 votes.
I've changed my witness name to just xeldal, thinking that may be why.
Please update your votes for this witness. Thank you.
get_witness xeldal
{
"id": "1.6.4949",
"witness_account": "1.2.86459",
vote_for_witness spartako init0 true true
I obtain:
....
"new_options": {
"memo_key": "GPH5mgup8evDqMnT86L7scVebRYDC2fwAWmygPEUL43LjstQegYCC",
"voting_account": "1.2.5",
"num_witness": 0,
"num_committee": 0,
"votes": [
"1:0",
"1:623"
],
"extensions": []
....
get_witness init0
{
"id": "1.6.1",
"witness_account": "1.2.100",
"last_aslot": 22268,
"signing_key": "GPH6gBqcGKgkVQvt7XZtYN9z3QHdWFCGKB2oiNvpMkXQsMqZB1YWi",
"pay_vb": "1.13.12",
"vote_id": "1:0",
"total_votes": "2074182939478",
"url": "bitshares.org",
"total_missed": 71
}
get_witness bitcube
{
"id": "1.6.624",
"witness_account": "1.2.8206",
"last_aslot": 0,
"signing_key": "GPH7qbi1TEAFGjsCTNQoecnGkcMr3RV6ya5yc8GbXyXCV1adQtFsn",
"vote_id": "1:623",
"total_votes": 0,
"url": "",
"total_missed": 0
}
vote_for_witness spartako init1 true true
....
"new_options": {
"memo_key": "GPH5mgup8evDqMnT86L7scVebRYDC2fwAWmygPEUL43LjstQegYCC",
"voting_account": "1.2.5",
"num_witness": 0,
"num_committee": 0,
"votes": [
"1:1",
"1:623"
],
....
...
"new_options": {
"memo_key": "GPH5mgup8evDqMnT86L7scVebRYDC2fwAWmygPEUL43LjstQegYCC",
"voting_account": "1.2.5",
"num_witness": 0,
"num_committee": 0,
"votes": [
"1:2",
"1:623"
],
"extensions": []
},
....
I am really confused ???
There is a bug with voting in cli_wallet. It's only possible to vote for one witness via "vote_for_witness" -- the last one.
bitcube
calabiyau
dele-puppy
delegate-1.lafona
delegate-clayop
fox
in.abit
init1
init2
init3
init4
init5
init6
init7
init8
init9
riverhead
wackou
xeldal
I am really confused ???
There is a bug with voting in cli_wallet. It's only possible to vote for one witness via "vote_for_witness" -- the last one.
You can try a voting proxy or the GUI if you want to vote for more witnesses.
The "total_votes": 0 is also buggy. It will show non-zero only if your witness has ever been voted in.
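The account JSON earlier in the thread is consistent with this: each vote_for_witness call replaced the previously added vote_id ("1:0" -> "1:1" -> "1:2") while the pre-existing "1:623" vote survived. A sketch modeling the observed behavior versus the expected one (illustrative only, not the wallet's actual code):

```python
# Model of the observed cli_wallet bug: successive vote_for_witness calls
# overwrite the previously added witness vote instead of accumulating.
# Illustrative only; not the wallet's actual implementation.

def buggy_vote(votes: list, new_vote: str) -> list:
    # Observed: the last vote_for_witness vote is replaced by the new one;
    # only the pre-existing vote (here "1:623") survives alongside it.
    return [new_vote, votes[-1]]

def expected_vote(votes: list, new_vote: str) -> list:
    # Expected: the new vote_id joins the existing set of votes.
    return sorted(set(votes) | {new_vote})

state = ["1:0", "1:623"]                 # after vote_for_witness ... init0
state = buggy_vote(state, "1:1")         # then vote_for_witness ... init1
assert state == ["1:1", "1:623"]         # init0's "1:0" vote is gone
assert expected_vote(["1:0", "1:623"], "1:1") == ["1:0", "1:1", "1:623"]
```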
We need to vote for more witnesses than the 11 that are currently in. The number of active witnesses is determined by the average number that we are all voting for. I am assuming this is based upon stake voting.
I am voting in the GUI, so I can't help with the CLI syntax, but you can always set me as a proxy. I am currently voting for:
set_voting_proxy spartako dele-puppy true
bitcube
calabiyau
dele-puppy
delegate-1.lafona
delegate-clayop
fox
in.abit
init0
init1
init2
init3
init4
init5
init6
init7
init8
init9
init10
init11
riverhead
wackou
xeldal
spartako
Please do not vote out init0 -> init11; these are our nodes for testing in this test network, and they are on separate machines / processes, not all on one machine.
How to increase the total number of witnesses? Do we need a proposal?
Please DO vote for init0->init11 AND everyone on this thread so we can increase the total number of witnesses.
please whitelist me for ABITUSDA. Thanks.
dele-puppy is whitelisted.
In the meantime can someone send me (xeldal) 11 - 20 CORE so I can "set_voting_proxy"
Transferred.
Thanks
...
spartako
Sorry, thought I had you in there. I'll add you now.
I just set you as a proxy:
set_voting_proxy spartako dele-puppy true
Please add "spartako" (1.6.4232) to the list, thanks!
I thought it should be
set_voting_proxy spartako puppies true
Please DO vote for init0->init11 AND everyone on this thread so we can increase the total number of witnesses.
2015-09-19T16:00:00 th_a:invoke handle_block operator() ] stake_account 1.2.22404 voting_stake 2494583263 num_witness 20
db_maint.cpp:437
2015-09-19T16:00:00 th_a:invoke handle_block operator() ] stake_account 1.2.21605 voting_stake 857123053259 num_witness 13
db_maint.cpp:437
2015-09-19T16:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.201 voting_stake 1217059886219 num_witness 17
db_maint.cpp:437
2015-09-19T16:00:06 th_a:invoke handle_block operator() ] stake_account 1.2.63448 voting_stake 4982408107 num_witness 20
db_maint.cpp:437
2015-09-19T16:00:07 th_a:invoke handle_block operator() ] stake_account 1.2.72822 voting_stake 63991967993 num_witness 20
db_maint.cpp:437
2015-09-19T16:00:09 th_a:invoke handle_block operator() ] stake_account 1.2.86459 voting_stake 8796602 num_witness 20
db_maint.cpp:437
2015-09-19T16:00:00 th_a:invoke handle_block operator() ] stake_account 1.2.22404 voting_stake 2494583263 num_witness 20
db_maint.cpp:437
2015-09-19T16:00:00 th_a:invoke handle_block operator() ] stake_account 1.2.21605 voting_stake 857123053259 num_witness 13
db_maint.cpp:437
2015-09-19T16:00:01 th_a:invoke handle_block operator() ] stake_account 1.2.17357 voting_stake 4822 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:01 th_a:invoke handle_block operator() ] stake_account 1.2.14634 voting_stake 993180513 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:01 th_a:invoke handle_block operator() ] stake_account 1.2.13486 voting_stake 2000068356 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.106 voting_stake 100000 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.108 voting_stake 99800000 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.109 voting_stake 99800000 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.110 voting_stake 99800000 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.100 voting_stake 69999800000 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.101 voting_stake 109800000 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.201 voting_stake 1217059886219 num_witness 17
db_maint.cpp:437
2015-09-19T16:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.2331 voting_stake 6998869201 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.3089 voting_stake 167071154666 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:03 th_a:invoke handle_block operator() ] stake_account 1.2.38993 voting_stake 8412165358 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:03 th_a:invoke handle_block operator() ] stake_account 1.2.41123 voting_stake 338984266 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:03 th_a:invoke handle_block operator() ] stake_account 1.2.47247 voting_stake 7309344543864 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:04 th_a:invoke handle_block operator() ] stake_account 1.2.30566 voting_stake 10000000 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:05 th_a:invoke handle_block operator() ] stake_account 1.2.53724 voting_stake 10000000 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:06 th_a:invoke handle_block operator() ] stake_account 1.2.63448 voting_stake 4982408107 num_witness 20
db_maint.cpp:437
2015-09-19T16:00:07 th_a:invoke handle_block operator() ] stake_account 1.2.72822 voting_stake 63991967993 num_witness 20
db_maint.cpp:437
2015-09-19T16:00:08 th_a:invoke handle_block operator() ] stake_account 1.2.83349 voting_stake 18122731729 num_witness 0
db_maint.cpp:437
2015-09-19T16:00:09 th_a:invoke handle_block operator() ] stake_account 1.2.86459 voting_stake 8796602 num_witness 20
db_maint.cpp:437
2015-09-19T16:00:09 th_a:invoke handle_block operator() ] stake_account 1.2.90134 voting_stake 29695789064 num_witness 0
db_maint.cpp:437
get_witness roadscape
{
"id": "1.6.5249",
"witness_account": "1.2.67429",
"last_aslot": 0,
"signing_key": "GPH8LkpAcZX1wpzh69or1WG62PYvgSpuUjL3YShR9ChA5XDYVh3zW",
"vote_id": "1:5270",
"total_votes": 0,
"url": "https://github.com/roadscape",
"total_missed": 0
}
2666068ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000021f029ef1ca6c188b018a875452251581b15"]
2666483ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00002282c22d98bcb469328a6520f8ea217cd6cd"]
2667707ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000025622acfbcb5f395ab8b83cf64c972a33307","00002568780f577938c1d6cef4a48518fdf91a0a","0000256ba7c4b4e1df57f66669a9b10c9a0e8fb5","0000256d10479bd4d2ea39820f5a448187e266f6"]
2667723ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000025622acfbcb5f395ab8b83cf64c972a33307","00002568780f577938c1d6cef4a48518fdf91a0a","0000256ba7c4b4e1df57f66669a9b10c9a0e8fb5","0000256d10479bd4d2ea39820f5a448187e266f6"]
2667781ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000025622acfbcb5f395ab8b83cf64c972a33307","00002568780f577938c1d6cef4a48518fdf91a0a","0000256ba7c4b4e1df57f66669a9b10c9a0e8fb5","0000256d10479bd4d2ea39820f5a448187e266f6"]
2668098ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["0000264aec280792e37cc3869300babc686c4ca5","000026500ea3aaf8c2a217199ff8cdf50be63c64","0000265313d7b3710423bd3512efd060a702c5dd","00002655bd9279f769d00e1859f3de4ccd64c9c2"]
2668401ms th_a application.cpp:388 handle_block ] Got block #10000 with time 2015-09-19T03:57:00 from network with latency of 42448404 ms from in.abit
2668594ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00002737d801c5d34b19396459eac2f2fe7b9d4e","0000273d16d8bf5a9d8c2875db6f7ac9151e8623","000027401bd8cee3da35e4d3aacf2753724e2460","00002742cdb021792a5c60c529b416321b2c0e42"]
2669240ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00002882f1df81160013ce49c97826e38bdfc267","0000288871ded5ef996932d1be1458f3a37c1219","0000288b99fae8170cd675e087c1388beb2bc3c6","0000288d7f0f59de985c3fe38881e435f90449ef"]
2670470ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000029ba24508e67e34e8e1c46bc20bfc0e2902f"]
2671314ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00002a889920c1ff645878ec2e7296a20b765c8f"]
2681172ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["0000320133034eff6b2ecb099691fb1b87bfc27b","000032077334ed5905814efa490b5ee5148cc987","0000320af4e42828469a2a9d5fa1d611f588a911","0000320c2f0c947368d0e82fbaa1f724bdd3aac7"]
2681621ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["0000329aaf65eb6736c8970c945cce58f0f4b6be"]
2682480ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000033b6e9a19ad7990a791f7c140691b168410f","000033bc9dc6e3e6024adf03615577ce9b7f699f","000033bf21c91a136bea1ee3d96e062f77ac65aa","000033c16880f71ee9570af4ef46cd4cb932f0e0"]
2683239ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00002119231e1df277239782c45f90259488fe4e"]
2687651ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00003b30aa266b17cbb3a28be80f860914cafeb0","00003b367a77572e16e483ceb1da514bef3f24e6","00003b39842e23b90f94248fb3deff3143136f1e","00003b3b91b39257324e2a0963cc47f280891d72"]
2689399ms th_a application.cpp:415 handle_block ] Error when pushing block:
10 assert_exception: Assert Exception
item->num > std::max<int64_t>( 0, int64_t(_head->num) - (_max_size) ): attempting to push a block that is too old
{"item->num":15530,"head":15746,"max_size":14}
th_a fork_database.cpp:71 _push_block
{"new_block":{"previous":"00003ca9a64aaab511688859835d1012c60d3358","timestamp":"2015-09-19T08:34:42","witness":"1.6.9","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f183a56a6c97eec1dacdc22cbbcc6879d2e04869ac8371ba91b50bdcc3f6604b40b3b4c6b6377afa8b096167a7bf2747d41423302c71bba97d0d875399a07debe","transactions":[]}}
th_a db_block.cpp:195 _push_block
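The "too old" assertion in the log above enforces a sliding window on the fork database: a block can only be pushed if its number lies within `max_size` blocks behind the current head. A minimal sketch of that predicate, using the values from the log (the helper function itself is hypothetical, not the actual fork_database.cpp code):

```python
def within_fork_window(item_num, head_num, max_size=14):
    # Mirrors the logged assertion:
    #   item->num > max(0, head->num - max_size)
    # Blocks older than the window cannot be attached to any tracked fork.
    return item_num > max(0, head_num - max_size)

# The failing case from the log: block 15530 against head 15746
# with a window of only 14 blocks.
print(within_fork_window(15530, 15746))  # False: the push is rejected
```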
2692265ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00002d3cc0a9b72c6e182894336d43dd6ac9cda5"]
2693143ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00002f11f0b8a623ce90e94de9c0983134ef378b"]
2694038ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["00002d3cc0a9b72c6e182894336d43dd6ac9cda5"]
Is it a bug, or a feature?
You need committee members voted into the active committee that want more witnesses.
With the current implementation, the chain will only run more than 11 witnesses if more than 50% of the voting stake votes for more than 11 witnesses.
Right now most of the voting stake (from my log, 76134065.91839 out of 97590672.87282) is voting for only 1 witness, so with the current implementation the chain takes the minimum number of witnesses, which is 11.
The biggest whale in the testnet, 1.2.47247, named 'llc', owns 73M CORE. Please help test voting -- don't vote for yourself only.
I thought of this as well, but after checking, the global parameter is indeed 1001. So it's another issue.
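The stake-weighted rule described above can be sketched roughly as follows. This is an illustration of the idea only, not the actual db_maint.cpp logic; the function name and the fixed minimum of 11 are assumptions taken from this thread:

```python
def active_witness_count(votes, minimum=11):
    # votes: list of (voting_stake, num_witnesses_voted_for) pairs,
    # as seen in the stake_account log lines above.
    histogram = {}
    total_stake = 0
    for stake, num_witness in votes:
        histogram[num_witness] = histogram.get(num_witness, 0) + stake
        total_stake += stake
    # Walk down from the largest requested count, accumulating stake;
    # stake that wants N witnesses also supports any count below N.
    # Take the largest count that more than half of all stake supports.
    accumulated = 0
    for num_witness in sorted(histogram, reverse=True):
        accumulated += histogram[num_witness]
        if accumulated * 2 > total_stake:
            return max(num_witness, minimum)
    return minimum

# A whale voting for 1 witness dominates: the chain stays at the minimum.
print(active_witness_count([(76134065, 1), (21456606, 20)]))  # 11
```

With a single whale holding most of the stake and voting for only one witness, the count stays pinned at 11, which matches the behaviour observed on this testnet.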
You need committee members voted into the active committee that want more witnesses:
- create some committee accounts
- set their witness amount parameter to something higher than 11
- have them vote into active committee
AFAIK the number of active witnesses will increase to the MEDIAN amount of what the (active) committee members want.
Would love to help out, but I am mobile over the weekend.
Good luck
get_global_properties
{
...
"maximum_witness_count": 1001,
...
}
"active_witnesses": [
"1.6.1",
"1.6.2",
"1.6.3",
"1.6.4",
"1.6.5",
"1.6.6",
"1.6.7",
"1.6.1538",
"1.6.2104",
"1.6.5247",
"1.6.5248"
],
bitcube
calabiyau
dele-puppy
delegate-1.lafona
delegate-clayop
fox
in.abit
init0
init1
init10
init2
init3
init4
init5
init6
init7
init8
init9
riverhead
roadscape
spartako
wackou
xeldal
th_a fork_database.cpp:79 _push_block
{"new_block":{"previous":"000063f6db1440c2b25130cd53dfb59ba393476c","timestamp":"2015-09-19T17:44:45","witness":"1.6.2","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f356d6f0f9cd1f5b0c3d42f52f6eb46df93af7518175018bc91a4dfe06a395b4541c65ddbc7ec81499868cb52508afd27c6d8348029fc910b7aa3537448464924","transactions":[]}}
th_a db_block.cpp:195 _push_block
2687000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
2687510ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063c177ee3dada43f55223bb18fdd7d9b30e7","000063c7f469f342f415e7dc947e7ab3f50bd335","000063ca5952fd63f7d343ae9243e48e80863d54","000063cc082deb3db5bf5face1691e24ba7f8460"]
2688000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
2688773ms th_a application.cpp:388 handle_block ] Got block #25592 with time 2015-09-19T17:44:48 from network with latency of 780 ms from delegate-clayop
2688773ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link: 000063f80322c81f8b12de9b5993c1c955d54a8c, 25592
2688773ms th_a fork_database.cpp:58 push_block ] Head: 25548, 000063cc082deb3db5bf5face1691e24ba7f8460
2688773ms th_a application.cpp:415 handle_block ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
{}
th_a fork_database.cpp:79 _push_block
{"new_block":{"previous":"000063f77cfaaecfdd6756b2f5833ec19cc3fc20","timestamp":"2015-09-19T17:44:48","witness":"1.6.1538","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f34c399fc3bc711109d9412adb172f0258f1dd6612eb9c3e24a72bd87d8aecea54915fb219f1de70572b3b1abd13b5a6dc84191806bd98083ba59f42de76354b1","transactions":[]}}
th_a db_block.cpp:195 _push_block
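The repeated "unlinkable block" errors above mean the node received a block whose `previous` id is not in its fork database, so there is nothing to attach it to. A toy illustration of the linking rule (hypothetical data structure, not the real fork_database.cpp code):

```python
def push_block(fork_db, block):
    # A block links only if its 'previous' id is already tracked;
    # otherwise it is unlinkable, exactly as in the log above.
    if block["previous"] not in fork_db:
        raise ValueError("unlinkable block: block does not link to known chain")
    fork_db[block["id"]] = block

fork_db = {"A": {}}               # toy ids, not real block hashes
push_block(fork_db, {"id": "B", "previous": "A"})  # attaches B to A
```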
2689000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
2689111ms th_a application.cpp:388 handle_block ] Got block #25592 with time 2015-09-19T17:44:48 from network with latency of 1118 ms from delegate-clayop
2689111ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link: 000063f80322c81f8b12de9b5993c1c955d54a8c, 25592
2689111ms th_a fork_database.cpp:58 push_block ] Head: 25548, 000063cc082deb3db5bf5face1691e24ba7f8460
2689112ms th_a application.cpp:415 handle_block ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
{}
th_a fork_database.cpp:79 _push_block
{"new_block":{"previous":"000063f77cfaaecfdd6756b2f5833ec19cc3fc20","timestamp":"2015-09-19T17:44:48","witness":"1.6.1538","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f34c399fc3bc711109d9412adb172f0258f1dd6612eb9c3e24a72bd87d8aecea54915fb219f1de70572b3b1abd13b5a6dc84191806bd98083ba59f42de76354b1","transactions":[]}}
th_a db_block.cpp:195 _push_block
2689490ms th_a application.cpp:388 handle_block ] Got block #25592 with time 2015-09-19T17:44:48 from network with latency of 1497 ms from delegate-clayop
2689491ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link: 000063f80322c81f8b12de9b5993c1c955d54a8c, 25592
2689491ms th_a fork_database.cpp:58 push_block ] Head: 25548, 000063cc082deb3db5bf5face1691e24ba7f8460
2689491ms th_a application.cpp:415 handle_block ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
{}
th_a fork_database.cpp:79 _push_block
{"new_block":{"previous":"000063f77cfaaecfdd6756b2f5833ec19cc3fc20","timestamp":"2015-09-19T17:44:48","witness":"1.6.1538","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f34c399fc3bc711109d9412adb172f0258f1dd6612eb9c3e24a72bd87d8aecea54915fb219f1de70572b3b1abd13b5a6dc84191806bd98083ba59f42de76354b1","transactions":[]}}
th_a db_block.cpp:195 _push_block
2690000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
2690098ms th_a application.cpp:388 handle_block ] Got block #25592 with time 2015-09-19T17:44:48 from network with latency of 2104 ms from delegate-clayop
2690098ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link: 000063f80322c81f8b12de9b5993c1c955d54a8c, 25592
2690098ms th_a fork_database.cpp:58 push_block ] Head: 25548, 000063cc082deb3db5bf5face1691e24ba7f8460
2690098ms th_a application.cpp:415 handle_block ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
{}
th_a fork_database.cpp:79 _push_block
{"new_block":{"previous":"000063f77cfaaecfdd6756b2f5833ec19cc3fc20","timestamp":"2015-09-19T17:44:48","witness":"1.6.1538","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f34c399fc3bc711109d9412adb172f0258f1dd6612eb9c3e24a72bd87d8aecea54915fb219f1de70572b3b1abd13b5a6dc84191806bd98083ba59f42de76354b1","transactions":[]}}
th_a db_block.cpp:195 _push_block
2690209ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063c26bf08ebafa5593807c31ba77bcc06b81","000063c8f3216b57009235878d541207b150c29a","000063cb4385c7f6965a107a22006874c09afef9","000063cdd010890862beaa64123e32edbdd73f14"]
2690217ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063c26bf08ebafa5593807c31ba77bcc06b81","000063c8f3216b57009235878d541207b150c29a","000063cb4385c7f6965a107a22006874c09afef9","000063cdd010890862beaa64123e32edbdd73f14"]
2690223ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063c26bf08ebafa5593807c31ba77bcc06b81","000063c8f3216b57009235878d541207b150c29a","000063cb4385c7f6965a107a22006874c09afef9","000063cdd010890862beaa64123e32edbdd73f14"]
2690232ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063c39be97087c9ce5c8b343e059e809ba8ab","000063c9260aa4abe461ecdf1573b3025fcad188","000063cc082deb3db5bf5face1691e24ba7f8460","000063ce70a9e0e47eaaa80a44e9537fc4cdbc85"]
2690290ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063c4c1508ce08b8cd7f183b4dcea85777bb9","000063ca5952fd63f7d343ae9243e48e80863d54","000063cdd010890862beaa64123e32edbdd73f14","000063cf50a023252b8084621ace2cafa3d70886"]
2690291ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063c4c1508ce08b8cd7f183b4dcea85777bb9","000063ca5952fd63f7d343ae9243e48e80863d54","000063cdd010890862beaa64123e32edbdd73f14","000063cf50a023252b8084621ace2cafa3d70886"]
(57 repeating lines snipped)
2690471ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063c5c82a119bcff707293aa756e7615e2840","000063cb4385c7f6965a107a22006874c09afef9","000063ce70a9e0e47eaaa80a44e9537fc4cdbc85","000063d037b5440a9c184de7169f50b4e2e395ac"]
(25 repeating lines snipped)
2691000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
2691769ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063dfa50edb6fee62b24771504d3cb6efaa23"]
2691789ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063dfa50edb6fee62b24771504d3cb6efaa23"]
2691824ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063dfa50edb6fee62b24771504d3cb6efaa23"]
2691824ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063dfa50edb6fee62b24771504d3cb6efaa23"]
2691834ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063dfa50edb6fee62b24771504d3cb6efaa23"]
2691837ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063dfa50edb6fee62b24771504d3cb6efaa23"]
2691956ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063edc2e9e068f5f45ad848fe5f5c0fd82944","000063f37425282d452ecccbeda8ffdb82e1efbb","000063f6db1440c2b25130cd53dfb59ba393476c","000063f80322c81f8b12de9b5993c1c955d54a8c"]
2692000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
2692199ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063edc2e9e068f5f45ad848fe5f5c0fd82944","000063f37425282d452ecccbeda8ffdb82e1efbb","000063f6db1440c2b25130cd53dfb59ba393476c","000063f80322c81f8b12de9b5993c1c955d54a8c"]
2692315ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063edc2e9e068f5f45ad848fe5f5c0fd82944","000063f37425282d452ecccbeda8ffdb82e1efbb","000063f6db1440c2b25130cd53dfb59ba393476c","000063f80322c81f8b12de9b5993c1c955d54a8c"]
2692427ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063edc2e9e068f5f45ad848fe5f5c0fd82944","000063f37425282d452ecccbeda8ffdb82e1efbb","000063f6db1440c2b25130cd53dfb59ba393476c","000063f80322c81f8b12de9b5993c1c955d54a8c"]
2692762ms th_a application.cpp:388 handle_block ] Got block #25593 with time 2015-09-19T17:44:51 from network with latency of 1768 ms from init0
2692864ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["000063ee628e421e0ceb897f5f76ffb14122445e","000063f4e9ee967ffda64febd3ac53182cda02df","000063f77cfaaecfdd6756b2f5833ec19cc3fc20","000063f95be93182f33ef48c608556a4aa163303"]
witness_node: /root/graphene/libraries/net/node.cpp:2488: void graphene::net::detail::node_impl::on_blockchain_item_ids_inventory_message(graphene::net::peer_connection*, const graphene::net::blockchain_item_ids_inventory_message&): Assertion `originating_peer->last_block_number_delegate_has_seen == _delegate->get_block_number(originating_peer->last_block_delegate_has_seen)' failed.
3006446ms th_a application.cpp:514 get_item ] Serving up block #15625
3006446ms th_a application.cpp:514 get_item ] Serving up block #15626
3006446ms th_a application.cpp:514 get_item ] Serving up block #15627
3006447ms th_a application.cpp:514 get_item ] Serving up block #15628
3006447ms th_a application.cpp:514 get_item ] Serving up block #15629
3009000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
3009139ms th_a application.cpp:388 handle_block ] Got block #25699 with time 2015-09-19T17:50:09 from network with latency of 138 ms from fox
3012000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
3012205ms th_a application.cpp:388 handle_block ] Got block #25700 with time 2015-09-19T17:50:12 from network with latency of 205 ms from init4
3015000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
3015329ms th_a application.cpp:388 handle_block ] Got block #25701 with time 2015-09-19T17:50:15 from network with latency of 329 ms from init1
3018000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
3018282ms th_a application.cpp:388 handle_block ] Got block #25702 with time 2015-09-19T17:50:18 from network with latency of 282 ms from delegate-clayop
3021000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
3021155ms th_a application.cpp:388 handle_block ] Got block #25703 with time 2015-09-19T17:50:21 from network with latency of 155 ms from init6
3024000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
3024380ms th_a application.cpp:388 handle_block ] Got block #25704 with time 2015-09-19T17:50:24 from network with latency of 380 ms from in.abit
3027000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
3027137ms th_a application.cpp:388 handle_block ] Got block #25705 with time 2015-09-19T17:50:27 from network with latency of 137 ms from init5
I'm updated to voting for 23 witnesses. Committee members have to be voted for separately:
bitcube
calabiyau
dele-puppy
delegate-1.lafona
delegate-clayop
fox
in.abit
init0
init1
init10
init2
init3
init4
init5
init6
init7
init8
init9
riverhead
roadscape
spartako
wackou
xeldal
I also upgraded dele-puppy to a committee member so we can see if that helps.
I'm updated to voting for 23 witnesses.
I also upgraded dele-puppy to a committee member so we can see if that helps.
No use with current code and that whale. See my post https://bitsharestalk.org/index.php/topic,17962.msg237551.html#msg237551 and this issue https://github.com/cryptonomex/graphene/issues/330.
To clarify:
Should the proxy vote be on "puppies" or "dele-puppy"?
Above it lists both.
Hope that devs will come and explain more.
puppies please
get_witness riverhead
{
"id": "1.6.3968",
"witness_account": "1.2.67253",
"last_aslot": 0,
"signing_key": "GPH6BJYGHftujnbttFFKX6YacnvsMd4sbJrbucg682GiU4vmXHTik",
"vote_id": "1:3967",
"total_votes": 0,
"url": "",
"total_missed": 0
}
unlocked >>>
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 0 total 2153652707602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 1 total 2153652707602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 2 total 2153652707602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 3 total 2153652707602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 4 total 2153652707602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 5 total 2153652707602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 6 total 2153652707602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 7 total 2153652707602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 8 total 2153652707602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 9 total 2153652707602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 10 total 2153652707602
2015-09-19T19:00:04 th_a:invoke handle_block operator() ] 11 total 2074182535280
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 623 total 1296529654343
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 1062 total 1296529654343
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 1526 total 1296529654343
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 1530 total 1296529654343
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 1537 total 1296529654343
2015-09-19T19:00:04 th_a:invoke handle_block operator() ] 1624 total 1217059482021
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 2103 total 1314652386072
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 3967 total 1296529654343
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 4231 total 1296529654343
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 4948 total 1296529654343
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 5267 total 2153652707602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 5268 total 1314652386072
2015-09-19T19:00:02 th_a:invoke handle_block operator() ] 5269 total 8411563602
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 5270 total 92919771170
2015-09-19T19:00:09 th_a:invoke handle_block operator() ] 5271 total 79470172322
2015-09-19T19:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.47247 voting_stake 7309344543864 num_witness 0
db_maint.cpp:437
2015-09-19T19:00:02 th_a:invoke handle_block operator() ] d._witness_count_histogram_buffer[0] = 7309354543864
db_maint.cpp:458
2015-09-19T19:00:02 th_a:invoke handle_block operator() ] d._total_voting_stake = 7309354543864 db_maint.cpp:472
not like this (although same num_witness=0):
2015-09-19T19:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.38993 voting_stake 8411563602 num_witness 0
db_maint.cpp:437
2015-09-19T19:00:02 th_a:invoke handle_block operator() ] stake_account 1.2.38993 voting_stake 8411563602 vote_for 5269 total 8411563602 db_maint.cpp:444
2015-09-19T19:00:02 th_a:invoke handle_block operator() ] d._witness_count_histogram_buffer[0] = 7318105091732
db_maint.cpp:458
2015-09-19T19:00:02 th_a:invoke handle_block operator() ] d._total_voting_stake = 7318105091732 db_maint.cpp:472
Where are you pulling all this vote info from, abit?
https://github.com/abitmore/graphene/tree/test3-patch1
291001ms th_a witness.cpp:179 block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
292001ms th_a witness.cpp:179 block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
293000ms th_a witness.cpp:179 block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
294001ms th_a witness.cpp:179 block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
295000ms th_a witness.cpp:179 block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
296000ms th_a witness.cpp:179 block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
297000ms th_a witness.cpp:179 block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
298001ms th_a witness.cpp:179 block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
299000ms th_a witness.cpp:179 block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
Is the seed node '104.236.118.105:1776' working?
init8 witness node seems to be down.
Yes. I think puppies can vote it out.
Done. Thanks for the heads up.
witness ihashfury "id": "1.6.2562"
and
witness delegate.ihashfury "id": "1.6.1596"
are set up - if anyone would like to vote them in
You may need to import the ACTIVE_KEY too!
seems I don't have my owner key imported into my vps. I won't be able to vote till after work. I'm sure someone else will vote you in.
Are existing 0.9.2 delegates automatically imported as a witness or is it required to upgrade and create a witness object? I'm hoping to skip sorting through the hundreds of balance ids for a proper balance that also existed on the 20th.
Thank you betax, your guide was very helpful. +5%
All existing delegates are imported as witnesses. When you get your node set up let us know and we will vote you in.
I believe I'm up now. ID: 1.6.1624
Code:
0 exception: unspecified
3030001 tx_missing_active_auth: missing required active authority
Missing Active Authority 1.2.63353
Agreed .. do you have a github account and can just do it?
I think it may be a good idea to put a note in the wiki file under the paragraph saying something like "if your active key is different from your owner key, you need to import active_key as well". It will be handy for people following the wiki instructions. @xeroc
I think owner_key is not required. Only need active_key.
Maybe that whale is bm. Isn't he handling some whale's stake? Maybe he wants to run some tests with init delegates. Just speculating though, don't take me too seriously.
Yes, it's BM. And he's not voting. It causes the network to be unable to have more than 11 witnesses.
Web wallet of test network https://graphene.bitshares.org/ is live now.
can you give me some USD for test?
Try market: https://graphene.bitshares.org/#/exchange/trade/ABITUSDA_CORE
I woke up this morning to find the witness crashed with this error:
witness_node: /home/spartako/graphene/libraries/net/node.cpp:2488: void graphene::net::detail::node_impl::on_blockchain_item_ids_inventory_message(graphene::net::peer_connection*, const graphene::net::blockchain_item_ids_inventory_message&): Assertion `originating_peer->last_block_number_delegate_has_seen == _delegate->get_block_number(originating_peer->last_block_delegate_has_seen)' failed.
2878300ms th_a application.cpp:695 get_blockchain_synop ] synopsis: ["0000c1dbab0c56a1818b4b96b7db480dde690c90","0000c1e1b20e19081ed1f7686732706fc60fba9a","0000c1e4adff2782c18a66e13676d040372e2316","0000c1e66d1af92b560c6c632c9caade74c46d2d"]
witness_node: /home/calabiyau/graphene/libraries/net/node.cpp:2488: void graphene::net::detail::node_impl::on_blockchain_item_ids_inventory_message(graphene::net::peer_connection*, const graphene::net::blockchain_item_ids_inventory_message&): Assertion `originating_peer->last_block_number_delegate_has_seen == _delegate->get_block_number(originating_peer->last_block_delegate_has_seen)' failed.
Aborted (core dumped)
Do you have some CORE? Try borrowing some USD from the bond market.
my account is altgo, thanks.
Hi what chain / snapshot are we using? **confused***
This is the chain:
https://github.com/cryptonomex/graphene/releases/tag/test3
If the seed node is not working, use this seed node: 188.165.233.53:1776
Thanks, that is what I thought. ;)
Edit: I had spun up 3 witnesses for testing, hence my cli client was connecting to an old one I had not killed from test_net 2. FYI for anyone in the same situation, although very unlikely.
It looks like there are now 23 active witnesses voted in and 77.3% witness participation.
I have pushed an update to the P2P code that might fix some nodes (such as my init node) that got stuck on an orphan branch.
/home/spartako/graphene/libraries/chain/account_object.cpp:71:54: error: ‘props’ was not declared in this scope
share_type reserveed = cut_fee(network_cut, props.parameters.reserve_percent_of_fee);
^
libraries/chain/CMakeFiles/graphene_chain.dir/build.make:997: recipe for target 'libraries/chain/CMakeFiles/graphene_chain.dir/account_object.cpp.o' failed
make[2]: *** [libraries/chain/CMakeFiles/graphene_chain.dir/account_object.cpp.o] Error 1
CMakeFiles/Makefile2:787: recipe for target 'libraries/chain/CMakeFiles/graphene_chain.dir/all' failed
make[1]: *** [libraries/chain/CMakeFiles/graphene_chain.dir/all] Error 2
Makefile:113: recipe for target 'all' failed
make: *** [all] Error 2
Hi can anybody vote me in? my votes don't seem to appear. Thanks!
Forgot: betaxtrade and 1.6.5252
get_witness xeldal
{
"id": "1.6.4949",
"witness_account": "1.2.86459",
"last_aslot": 0,
"signing_key": "G......M",
"vote_id": "1:4948",
"total_votes": 0,
"url": "",
"total_missed": 0
}
Please vote me in: delegate-clayop 1.6.1538
betaxtrade
boombastic
calabiyau
dele-puppy
delegate-1.lafona
delegate-clayop
delegate-dev3.btsnow
delegate.ihashfury
fox
ihashfury
in.abit
init0
init1
init10
init2
init3
init4
init5
init6
init7
init8
init9
mr.agsexplorer
mrs.agsexplorer
roadscape
spartako
wackou
xeldal
Let me know when your node is up and ready, and I'll throw my vote your way. Last I checked I had about 12M bts proxied through puppies.
76079 '1.6.4' 'init3' 81
76078 '1.6.7' 'init6' 242
76077 '1.6.828' 'boombastic' 116
76076 '1.6.12' 'init11' 98
76075 '1.6.1538' 'delegate-clayop' 11
76074 '1.6.11' 'init10' 47
76073 '1.6.5252' 'betaxtrade' 5
76072 '1.6.4232' 'spartako' 0
76071 '1.6.4949' 'xeldal' 0
76070 '1.6.2' 'init1' 0
76069 '1.6.3360' 'mrs.agsexplorer' 0
76068 '1.6.3356' 'mr.agsexplorer' 12
76067 '1.6.9' 'init8' 147
76066 '1.6.1543' 'delegate-dev3.btsnow' 77
76065 '1.6.3' 'init2' 89
76064 '1.6.5247' 'in.abit' 27
76063 '1.6.10' 'init9' 52
76062 '1.6.5' 'init4' 50
76061 '1.6.6' 'init5' 43
76060 '1.6.8' 'init7' 0
I tried to spam the network and the results are quite impressive! Spamming alone I reached 242 tx per block (80 tx/sec); we can reach bigger numbers in this test net.
Seems I need to tune the max value of the gauge quite soon ..
please also vote for wackou (1.6.5248), witness running and ready to produce blocks! :)
My delegate just crashed :(
Code:
witness_node: /home/spartako/graphene/libraries/net/node.cpp:2319: std::vector<fc::ripemd160> graphene::net::detail::node_impl::create_blockchain_synopsis_for_peer(const graphene::net::peer_connection*): Assertion `synopsis.back() == original_ids_of_items_to_get->back()' failed.
I'm syncing...
If more witnesses needed, please vote me in.
(runs on a VPS with latencies around 150ms)
get_witness jtm1
{
"id": "1.6.5251",
"witness_account": "1.2.92002"
get_witness ihashfury
{
"id": "1.6.2562",
"witness_account": "1.2.38577",
"last_aslot": 0,
"signing_key": "GPH5yzrzYt3VLaN8ksyv6ypXZpZ22k2mJA4xMvv5eznskuMbNQ8Mj",
"vote_id": "1:2561",
"total_votes": 0,
"url": "",
"total_missed": 0
}
unlocked >>> get_witness delegate.ihashfury
get_witness delegate.ihashfury
{
"id": "1.6.1596",
"witness_account": "1.2.22473",
"last_aslot": 0,
"signing_key": "GPH53CQ3wX2jt9bmJTY2cFpcKoouauB16QdqSfe6fVCCezSGRkhvT",
"vote_id": "1:1595",
"total_votes": 0,
"url": "",
"total_missed": 0
}
unlocked >>>
get_witness roadscape
{
"id": "1.6.5249",
"witness_account": "1.2.67429"
Great work +5%
Since there are only brave members in this thread, I'll just drop this here and ask those that run an active witness to publish feeds if possible:
https://github.com/xeroc/python-graphenelib/blob/develop/scripts/pricefeeds.py
Run this script once every hour or 30 minutes .. I will improve it over time ..
If you run into trouble you can post here or in the other thread: https://bitsharestalk.org/index.php/topic,18382.new.html#new
Good luck :)
PS. IIRC it only publishes feeds for USD, EUR and CNY if I am not mistaken .. if you want to pimp it on your own, please send a pull request ..
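For anyone wiring this up: one way to run the feed script "once every hour or 30 minutes" as xeroc suggests is a crontab entry. This is only a sketch — the paths and the Python invocation are assumptions, not part of xeroc's instructions:

```shell
# run the price feed script every 30 minutes (paths are hypothetical)
*/30 * * * * cd /home/witness/python-graphenelib/scripts && python pricefeeds.py >> /var/log/pricefeeds.log 2>&1
```

Redirecting stdout/stderr to a log file makes it easier to see why a feed publish failed after the fact.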
Also seem to be having the same issue as ihashfury.
Code:
get_witness delegate-1.lafona
{
"id": "1.6.1531",
"witness_account": "1.2.22396",
"last_aslot": 0,
"signing_key": "GPH5DCL5nbhL13sXBh1mwQp5pUBSw7rmwjWeiiy5b2Z2UxuYf8spU",
"vote_id": "1:1530",
"total_votes": 0,
"url": "",
"total_missed": 0
}
I even voted for myself, so it should show some votes.
Code:
vote_for_witness delegate-1.lafona delegate-1.lafona true true
{
"ref_block_num": 12218,
"ref_block_prefix": 4139592174,
"expiration": "2015-09-21T16:48:51",
"operations": [[
6,{
"fee": {
"amount": 1002929,
"asset_id": "1.3.0"
},
"account": "1.2.22396",
"new_options": {
"memo_key": "GPH5DCL5nbhL13sXBh1mwQp5pUBSw7rmwjWeiiy5b2Z2UxuYf8spU",
"voting_account": "1.2.5",
"num_witness": 0,
"num_committee": 0,
"votes": [
"1:1530"
],
"extensions": []
},
"extensions": []
}
]
],
"extensions": [],
"signatures": [
"1f7e93d12f8eea0d4844aa4a804fd41b387cecad8344224e7f8d46acda96880f2b372027a57468750770e909eedf7fda3f4af14c59e66cafd38f76589241999396"
]
}
bitcube and riverhead are not producing blocks... any particular reason?
I'm updating to latest master to see if it helps
New witness_node crashed shortly after startup.
Code:
(gdb) run --rpc-endpoint "127.192.168.1.11:8090" -d test_net_3 -s "104.236.118.105:1776" --genesis-json sep-18-testnet-genesis.json --resync-blockchain
Starting program: /home/james/github/graphene/programs/witness_node/witness_node --rpc-endpoint "127.192.168.1.11:8090" -d test_net_3 -s "104.236.118.105:1776" --genesis-json sep-18-testnet-genesis.json --resync-blockchain
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
3104986ms th_a witness.cpp:83 plugin_initialize ] witness plugin: plugin_initialize() begin
3104987ms th_a witness.cpp:93 plugin_initialize ] key_id_to_wif_pair: ["GPH6BJYGH.....","5....."]
3104987ms th_a witness.cpp:111 plugin_initialize ] witness plugin: plugin_initialize() end
3104987ms th_a db_management.cpp:95 wipe ] Wiping database
3104990ms th_a object_database.cpp:82 wipe ] Wiping object_database.
3104991ms th_a application.cpp:301 startup ] Detected unclean shutdown. Replaying blockchain...
3104992ms th_a application.cpp:242 operator() ] Initializing database...
3131559ms th_a db_management.cpp:42 reindex ] reindexing blockchain
3131560ms th_a db_management.cpp:95 wipe ] Wiping database
3131562ms th_a object_database.cpp:82 wipe ] Wiping object_database.
3144646ms th_a db_management.cpp:49 reindex ] !no last block
3144647ms th_a db_management.cpp:50 reindex ] last_block:
3144682ms th_a thread.cpp:95 thread ] name:ntp tid:140737315571456
3144682ms ntp ntp.cpp:77 request_now ] resolving... ["pool.ntp.org",123]
[New Thread 0x7ffff4135700 (LWP 13774)]
[New Thread 0x7ffff4936700 (LWP 13773)]
[New Thread 0x7ffff5137700 (LWP 13772)]
[New Thread 0x7ffff5b38700 (LWP 13771)]
3144691ms th_a thread.cpp:95 thread ] name:p2p tid:140737296688896
3144723ms ntp ntp.cpp:81 request_now ] sending request to 76.191.88.3:123
3144728ms th_a application.cpp:122 reset_p2p_node ] Adding seed node 104.236.118.105:1776
3144732ms th_a application.cpp:134 reset_p2p_node ] Configured p2p node to listen on 0.0.0.0:60399
3144733ms th_a application.cpp:184 reset_websocket_serv ] Configured websocket rpc to listen on 127.192.168.1.11:8090
3144735ms th_a main.cpp:176 main ] Exiting with error:
13 N5boost16exception_detail10clone_implINS0_19error_info_injectorINS_6system12system_errorEEEEE: Invalid argument
Invalid argument: error converting string to IP endpoint
{"what":"Invalid argument"}
th_a ip.cpp:84 from_string
{}
th_a application.cpp:187 reset_websocket_server
{}
th_a application.cpp:337 startup
3144808ms ntp ntp.cpp:147 read_loop ] received ntp reply from 76.191.88.3:123
3144808ms ntp ntp.cpp:161 read_loop ] ntp offset: -2899, round_trip_delay 80013
3144808ms ntp ntp.cpp:177 read_loop ] ntp_delta_time updated to -2899
witness_node: /home/james/github/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Program received signal SIGABRT, Aborted.
0x00007ffff6979267 in __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:55
55 ../sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) trace
Tracepoint 1 at 0x7ffff6979267: file ../sysdeps/unix/sysv/linux/raise.c, line 55.
(gdb)
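The "error converting string to IP endpoint" raised from ip.cpp in the trace above is caused by the --rpc-endpoint value "127.192.168.1.11:8090", which has five octets and so cannot be parsed as an IPv4 address (the poster later confirms an IP address typo). A corrected invocation would look something like this — a sketch only; the loopback endpoint is an assumption, substitute whatever address you actually want the RPC to listen on:

```shell
# note the valid 4-octet address in --rpc-endpoint
./witness_node --rpc-endpoint "127.0.0.1:8090" -d test_net_3 \
    -s "104.236.118.105:1776" --genesis-json sep-18-testnet-genesis.json --resync-blockchain
```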
riverhead, ihashfury, and bitcube are currently misbehaving (missing blocks) on the test network.
get_object 1.13.46
[{
"id": "1.13.46",
"owner": "1.2.22388",
"balance": {
"amount": 255000000,
"asset_id": "1.3.0"
},
"policy": [
1,{
"vesting_seconds": 86400,
"start_claim": "1970-01-01T00:00:00",
"coin_seconds_earned": "3100167000000",
"coin_seconds_earned_last_update": "2015-09-21T19:57:21"
}
]
}
]
but my witness node shows the amount as 311000000. Everything else is the same except for coin seconds earned. Both nodes show as being synced. What would cause this object to show up differently on different boxes?
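For what it's worth, under the coin-seconds (cdd) vesting policy shown in the 1.13.46 object above, the currently withdrawable amount should be roughly coin_seconds_earned / vesting_seconds — that relationship is my understanding of the policy, not something stated in this thread:

```shell
# coin_seconds_earned / vesting_seconds from the 1.13.46 object above
# (integer division; result is in satoshis of asset 1.3.0)
echo $((3100167000000 / 86400))   # → 35881562
```

Neither 255000000 nor 311000000 follows from coin_seconds_earned alone, so the differing balance amounts on the two nodes look like state divergence rather than a display artifact — which would be consistent with the inconsistent vesting-balance state bytemaster reports later in the thread.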
@bytemaster can we test the affiliate system, or is this still in development?
The Fox witness node is operational. Thanks in advance for your votes.
Code:
get_witness fox
{
"id": "1.6.2104",
...
"total_votes": "1306663376502",
...
}
>>get_global_properties
],
"active_witnesses": [
"1.6.1",
...
"1.6.5252"
]
do delegates or witnesses produce feeds?
All witnesses please upgrade to the latest master. We have checked in a few "hard-forking" fixes for publishing price feeds. This hardfork was required due to a misconfigured genesis state for bitassets preventing us from publishing feeds. Please do not attempt to publish price feeds for at least 24 hours to give all testers a chance to upgrade to the latest.
Fox I voted you in. The least approved witness is bitcube which has 13M votes...
in.abit is upgraded to latest commit.
Forgive my ignorance.
How to "upgrade to latest master"?
Is this just "git pull master" and a restart of the witness, or does this require "git checkout master" and a complete rebuild?
I assume "git checkout test3" would just be the same thing I'm already running.
git checkout master
git pull
git submodule update --init --recursive
cmake .
make
Upgraded to the master, during the update I missed 2 blocks.
What is the best practice for not missing any block during an update?
get_witness delegate.baozi
{
"id": "1.6.1569",
"witness_account": "1.2.22439",
What is the minimum hardware requirement to participate in the testnet?
Seed node (stable): Shared CPU, 768MB RAM
634472ms th_a application.cpp:388 handle_block ] Got block #97888 with time 2015-09-22T10:10:33 from network with latency of 25101 ms from init8
636002ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
636526ms th_a application.cpp:388 handle_block ] Got block #97889 with time 2015-09-22T10:10:36 from network with latency of 24155 ms from init6
637996ms th_a application.cpp:518 get_item ] Serving up block #97889
639429ms th_a application.cpp:388 handle_block ] Got block #97890 with time 2015-09-22T10:10:39 from network with latency of 24059 ms from init3
Thanks theoretical! Great work!
After you update the new signing key on the witness account, the old witness node won't produce blocks because it doesn't have the correct key to sign blocks.
What is the best practice for not missing any block during an update?
The update_witness command in CLI wallet allows you to change your witness's block signing key. This architecture allows several different downtime-free update procedures according to your specific hosting situation:
(1) Create new witness on new server with new block signing key (you can use suggest_brain_key in CLI wallet, or the get_dev_key binary, to generate a new key). When the new witness is synced, run update_witness to change your key, shutdown old witness and delete old server. Good method if you use a pay-by-the-hour hosting provider that lets you quickly create and destroy servers (e.g. DigitalOcean).
(2) Create new witness on the same server with new block signing key in different datadir. When new witness is synced, update_witness to change your block signing key and shut down old witness. Good method if you have a large server capable of running two witnesses at once (e.g. dedicated host).
(3) Create temporary witness on your own personal machine. When it is synced, update_witness to change your block signing key and cause the temporary witness to start producing. Then shut down the old witness and spin up the new witness on the same server; it is okay if it takes some time, because your temporary witness is still signing blocks. Then when new witness is ready, run update_witness again to change your block signing key back (or create another new key). Good method if you have a decent personal machine you can occasionally put on witness duty for a few blocks when you're doing an upgrade, and don't want to mess with multiple VPS's as in (1) or pay for a large server as in (2).
Also note that the block signing key can be different from the active and owner keys which control account funds. The only key which needs to live unencrypted on a machine with 24/7 internet connectivity is the block signing key; if an attacker compromises the server, the only thing they can do with the block signing key is sign blocks.
I designed this system, and my goal was to give witnesses better options for dealing with the various IT headaches of signing blocks in DPOS.
I just gave update_witness quite a real-world test on this testnet -- I initially ran all of the init witnesses in a single process on the cloud server that I used to create the testnet, then bytemaster and I migrated a bunch of them to different machines within the first day. In prepping them for the hardfork, I've had to shut down and re-create the witnesses, and also migrated them to better balance them between the multiple machines. I used the update_witness command for all of this and achieved it with minimal downtime. In particular for today's hardfork upgrade, I had no downtime on block signing during the upgrade / migration, even though I had to upgrade 8 witnesses on multiple machines, and I also migrated some of those witnesses to better balance them between machines. The update_witness code and its supporting logic in the block production loop are rock solid!
Some of my init witnesses have been down, but that's mostly due to issues in the p2p layer, the worst of which are resolved in the latest code.
I think I'll write this up in a wiki article sometime this week
This morning I came in to find all 3 of the nodes I run having issues. I am looking into the cause. After the successful run yesterday I am confident we are very close to eliminating all of the edge cases.
Will update things soon.
Very good procedure, one question how do you get the WIF key for the blockchain signing key?
Maybe same issue as my nodes encountered. I did resync. Will try a replay next time (if it happens again).
It looks like all of my nodes went down because they got into an inconsistent blockchain state regarding a vesting balance object. A replay of the blockchain fixed it.
(Anyone notice how much faster replaying the blockchain is with 1 hour maintenance intervals??!! )
You can create one with a cli_wallet command: suggest_brain_key
Import that private key and update your config.ini on the new witness. Fire up the new witness and, once it is synced, run update_witness <name> <public key from above> true
I missed the update to the config.ini which is why I was down for a few minutes.
do you have to enable stale block production to produce blocks?
Don't enable that.
Stupid question: How do I create an account in the GUI? I've tried importing a .json and creating an account with the "Create Account" button. Does it need to be done via the Javascript CLI? I feel like I'm missing something obvious - hopefully not as bad as my IP address typo but....
Update: Seems this is a Firefox issue (at least my copy). Chrome displayed the two password fields as expected but still returns the same error.
(http://i.imgur.com/eLwdqd2.png)
Assuming you are on the welcome page, the problem is that it is configured for use with a faucet backend that isn't running on your computer.
My witnesses got out of sync 2 hours ago. Restart didn't fix it. Resyncing.
Will check the logs.
//Update:
1. resync worked.
2. issue submitted https://github.com/cryptonomex/graphene/issues/336
I just observed some nice flooding of the network with a single block containing 192 transactions and during the flooding there were no missed blocks.
There are currently 33 active witnesses most of which are on unique nodes with 100% participation.
It looks like things have really stabilized with this test network which is a really good sign that 3 weeks from now the upgrade will go smoothly.
Great work everyone, keep up the testing!
I was spamming the network alone with this simple code:
https://github.com/spartako82/node-graphene

node bin/flood.js spartako spartako1 ws://127.0.0.1:8099 200

If more people join the spam I think we can reach great results
I got error messages:
module.js:340
throw err;
^
Error: Cannot find module 'lib/'
at Function.Module._resolveFilename (module.js:338:15)
at Function.Module._load (module.js:280:25)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at Object.<anonymous> (/home/clayop/node-graphene/bin/flood.js:4:8)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Function.Module.runMain (module.js:497:10)
1584002ms th_a witness.cpp:176 block_production_loo ] Generated block #121017 with timestamp 2015-09-23T06:26:24 at time 2015-09-23T06:26:24
Updated the master, should be fixed now
Updated the master, should be fixed now
~/node-graphene# nodejs bin/flood.js bitcube bitcube ws://127.0.0.1:8090 200
******* 0*******
ERROR 10 assert_exception: Assert Exception
itr != _by_name.end(): no method with name 'transfer'
{"name":"transfer","api":[["cancel_all_subscriptions",4],["get_account_balances",21],["get_account_by_name",16],["get_account_count",20],["get_account_references",17],["get_accounts",14],["get_assets",26],["get_balance_objects",23],["get_blinded_balances",52],["get_block",6],["get_block_header",5],["get_call_orders",30],["get_chain_id",11],["get_chain_properties",8],["get_committee_member_by_account",40],["get_committee_members",39],["get_config",10],["get_dynamic_global_properties",12],["get_full_accounts",15],["get_global_properties",9],["get_key_references",13],["get_limit_orders",29],["get_margin_positions",32],["get_named_account_balances",22],["get_objects",0],["get_potential_signatures",46],["get_proposed_transactions",51],["get_required_fees",50],["get_required_signatures",45],["get_settle_orders",31],["get_transaction",7],["get_transaction_hex",44],["get_vested_balances",24],["get_vesting_balances",25],["get_witness_by_account",36],["get_witness_count",38],["get_witnesses",35],["get_workers_by_account",42],["list_assets",27],["lookup_account_names",18],["lookup_accounts",19],["lookup_asset_symbols",28],["lookup_committee_member_accounts",41],["lookup_vote_ids",43],["lookup_witness_accounts",37],["set_block_applied_callback",3],["set_pending_transaction_callback",2],["set_subscribe_callback",1],["subscribe_to_market",33],["unsubscribe_from_market",34],["validate_transaction",49],["verify_account_authority",48],["verify_authority",47]]}
th_a api_connection.hpp:84 call
Any idea?
~/node-graphene# nodejs bin/flood.js bitcube bitcube1 ws://127.0.0.1:8099 10
~/node-graphene# nodejs bin/lastBlocks.js ws://127.0.0.1:8090 30
You have to point to the wallet url and not the witness url. You can do that, for example, like this:
./cli_wallet -w wallet.json --chain-id 0f8b631d7a9dfebf16d6776fab96b629a14429762bf9c3eb95db1e4e4af637a4 -s ws://127.0.0.1:8090 -r 127.0.0.1:8099
Moreover, the accounts must be different, because transfer doesn't work if you transfer to yourself. So create another account (like bitcube1).
Unlock your wallet and use this command:
~/node-graphene# nodejs bin/flood.js bitcube bitcube1 ws://127.0.0.1:8099 200
Can you please post a link to node-graphene.. can't find it on github.
Thanks.
Newbie question - do I need to run a witness in order to run the flood script?
Thank you, I understand this.
The documentation talks about a local testnet. How do I connect my new witness node to the real testnet? Are there seed node IPs I need to specifically connect to?
./witness_node --rpc-endpoint "127.0.0.1:8090" --genesis-json sep-18-testnet-genesis.json -d witness_dir/ -s 104.236.118.105:1776
Nice to see these scripts surfacing :).
I'm getting the following error. I can correct it on my side but was wondering if perhaps a commit is missing?
node bin/flood.js riverhead james ws://192.168.1.11:8090 10
ERROR 10 assert_exception: Assert Exception
itr != _by_name.end(): no method with name 'transfer'
./cli_wallet -w wallet.json --chain-id 0f8b631d7a9dfebf16d6776fab96b629a14429762bf9c3eb95db1e4e4af637a4 -s ws://192.168.1.11:8090 -r 192.168.1.11:8099
node bin/flood.js riverhead james ws://192.168.1.11:8099 10
You have to point to the wallet-url and not the witness url.
/facepalm of course. Still on my first cup of coffee :P.
I am already at the second one :P
Tell me if you are able to spam
+5%
Works great!!
Can it be expanded to do a transaction mix? Maybe transfer, market order, order cancel, etc?
I got the default witness to run and the cli wallet runs too, and I created a password.. however, I'm not able to create/register an account.
Can you please give instructions how to register an account on the blockchain to get the 1000 CORE in the cli wallet?
What I've noticed is that the GUI becomes totally unresponsive when these transaction spamming events happen.
Especially this page:
https://graphene.bitshares.org/#/explorer/blocks
It refers to both Firefox and Chrome (on Windows 7).
Has anyone noticed similar effect?
Me too
What is the best practice for not missing any block during an update?
The update_witness command in CLI wallet allows you to change your witness's block signing key. This architecture allows several different downtime-free update procedures according to your specific hosting situation:
...
Thanks theoretical! Great work!
I have noticed that it drags a bit. I have some ideas on improving performance.
Before I raise an issue, on the UI spamming to myself supertest1 the transactions appear as 1 hour ago. Is this a known feature?
It also appears the last block was produced 1 hour ago.
Note: I have moved from the hotel network this morning to using a mobile hotspot to check graphene. Will this in any way affect the datetime?
Yes, my UI is frozen on explorer blocks (Chrome), I'm currently sending 20 at a time.
Yea I've noticed this too today while you guys were spamming :) I managed to get it a little better but haven't pushed those changes yet, I'll have to look into it in more detail once I get the time, maybe later today or if not tomorrow.
I used the command save_wallet_file test3.json in the cli_wallet and tried to restore it in the UI but it says invalid format. Is there a different command I should be using in the CLI or is restore not implemented yet?
Just want to make sure it's not PEBKAC before I log an issue.
PEBKAC?
The CLI wallet format is not compatible with the GUI import format. There is not currently an easy way to migrate from one wallet to the other.
I cannot register a new account in the GUI, it's always hanging at this screen after entering username+password:
I deleted all cookie and local storage before..
What to do? Is something crashed?
http://imgur.com/HtDO4bA
Also the webserver seems kind of slow, looks overloaded to me..
Double Signing attack test
Since I can't find more information about how double signing will harm the network and what defensive mechanism the network has against it, I am going to perform a 30-minute double-signing attack on the test net. Double signing could happen due to an honest witness misconfiguring a node, a witness server compromised by an evil third party, or a corrupted witness.
I have 3 witness accounts voted in that I can coordinate, I am going to try double signing from 1, 2, 3 witnesses using 2 separate servers (US, Asia).
Here is the plan:
The attack will last 30 minutes, in 3 phases, each lasting 10 minutes.
phase I (0-10min): 1 double signing witness (boombastic)
phase II (10-20min): 2 double signing witness (boombastic and mr.agsexplorer)
phase III (20-30min): 3 double signing witness (boombastic and mr.agsexplorer and mrs.agsexplorer)
end (30min): stop double signing
Double signing will cause forks; I want to see whether, after all this mess, the network can recover. I will do some transactions during the attack and see how it affects normal user operations. If you are here, you can perform normal operations and report back after the attack is finished.
bytemaster, if you see this, please do not vote out these double-signing witnesses just yet. In the real world, if some witness starts double signing, voters might not react that quickly; the network should survive on its own for at least 30 minutes.
The attack will start today at 2015-09-23 18:00 (UTC), which is 20 minutes from now.
I found no difference with my node during the double signing attack :)
100% participation during the attack :)
All right, I stopped the double signing attack just now. First of all, I did do that. :) But it feels that didn't happen. I sent various transactions to account hoping to see some balance missing, etc, but in vain. The network seems totally immune to double signing.
Is it safe to say that 'evil' double signing from the BTS1 era now goes into the witness's toolbox? Can we use two machines as redundancy to prevent accidental server shutdown?
If each witness double signs, the fork never gets resolved, I guess?
delegate-1.lafona what happened to your node?
2813283ms th_a fork_database.cpp:57 push_block ] Pushing block to fork database that failed to link: 00020081f8a1dd2c575919181724f39ca247d8a1, 131201
2813283ms th_a fork_database.cpp:58 push_block ] Head: 131108, 00020024713817f57183d045cdecaf3303acbf0d
2813283ms th_a application.cpp:415 handle_block ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
{}
th_a fork_database.cpp:79 _push_block
{"new_block":{"previous":"00020080759ee5c5fc93983b99af426e9d3fcf36","timestamp":"2015-09-23T15:02:12","witness":"1.6.7","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f65d50fe53a1c376657599d29910bf650086d48a4642b43f5b654d083ef6052a30997ecdac3dab252a2e0e83f9294363a9faacccec6f2852e94643ce22143a689","transactions":[]}}
th_a db_block.cpp:195 _push_block
jtml - what kind of difficulties are you having at the moment, you have missed quite a few blocks recently.
witness_node: /mon/g/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Unable to create directories object_database/0
{"path":"object_database/0","inner":"Dynamic exception type: boost::filesystem::filesystem_error\nstd::exception::what: boost::filesystem::create_directories: Permission denied: \"object_database\"\n"}
For redundancy I think I will run multiple nodes with different signing keys. I will then set up a single node to switch signing keys when my witness misses a block.
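That switching idea can be reduced to a small decision loop. A hedged sketch of it (the total_missed counter on the witness object and the standby key list are my assumptions; the actual polling and the update_witness call are left out):

```python
def should_failover(prev_missed, cur_missed, threshold=1):
    """True when the witness has missed at least `threshold` new blocks
    since the previous poll of its (assumed) total_missed counter."""
    return cur_missed - prev_missed >= threshold

def next_signing_key(keys, current):
    """Rotate to the next standby signing key in a hypothetical key list."""
    return keys[(keys.index(current) + 1) % len(keys)]

# Placeholder keys; each backup node would hold one of these unencrypted.
keys = ["GPH_primary_placeholder", "GPH_backup_placeholder"]
if should_failover(10, 12):
    # here the monitor would issue: update_witness <name> "" <new_key> true
    new_key = next_signing_key(keys, "GPH_primary_placeholder")
    print(new_key)
```

Since only one signing key is active on-chain at a time, this sidesteps the double-signing risk of running hot spares with the same key.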
What is the best practice for not missing any block during an update?
The update_witness command in CLI wallet allows you to change your witness's block signing key. This architecture allows several different downtime-free update procedures according to your specific hosting situation:
...
Thanks theoretical! Great work!
+5% indeed, very useful and sounds like a fool-proof way to do it with no risk of signing double blocks.
One question: how long does it take for the signing key to be updated after you call the update_witness command? As soon as the transaction is processed by the network? After the next maintenance interval? Sth else?
I seem to recall that in BitShares 0.9.x changing the signing key did take a bit of time to go into effect, so that set_block_production true/false was the preferred way. If in graphene the call to changing the signing key takes effect immediately, then that's really nice as it's a much better way of doing it.
bump question @theoretical @bytemaster
Where is the best place to report bugs in the GUI?
For those brave witnesses, I made a patch based on xeroc's price feed script. Enjoy it! (Script updated: fixed a BTC precision issue.)
https://github.com/abitmore/python-graphenelib
https://github.com/abitmore/python-graphenelib/blob/master/scripts/pricefeeds.py
For the last 12 hours I've been unable to access the GUI on https://graphene.bitshares.org. The browser constantly shows the message "waiting for graphene.bitshares.org".
Me too.
On a side note, the delegate seems to be out of the list of active witnesses, but has enough votes to be middle of the pack. Any thoughts?
{
"id": "1.6.1531",
"witness_account": "1.2.22396",
"last_aslot": 138787,
"signing_key": "GPH5DCL5nbhL13sXBh1mwQp5pUBSw7rmwjWeiiy5b2Z2UxuYf8spU",
"pay_vb": "1.13.57",
"vote_id": "1:1530",
"total_votes": "9524676529409",
"url": ""
}
I just told Valentine, he'll look into it.
Could we have a separate thread similar to this one but dedicated to the GUI available on https://graphene.bitshares.org (https://graphene.bitshares.org) ?
I guess there will be more and more strictly GUI-related issues.
Every hour, only votes of (new) active witnesses will be updated.
Is the total_votes object listed above the correct one I should be looking at to assess the state of my witness (active or standby)?
If your witness is currently in the active witnesses list, yes, its total_votes shows a correct number (calculated at the last maintenance point). Otherwise no, the number may or may not be correct: it was calculated the last time your witness was in the active witnesses list, or is 0 if it was never in the list.
I don't know if there is an API to get the correct 'total_votes' of standby witnesses.
get_object 1.13.46
[{
"id": "1.13.46",
"owner": "1.2.22388",
"balance": {
"amount": 2926266575,
"asset_id": "1.3.0"
},
"policy": [
1,{
"vesting_seconds": 86400,
"start_claim": "1970-01-01T00:00:00",
"coin_seconds_earned": "252743032080000",
"coin_seconds_earned_last_update": "2015-09-24T18:21:15"
}
]
}
]
On my laptop (which does have my owner keys imported into the cli_wallet) I get:
get_object 1.13.46
[{
"id": "1.13.46",
"owner": "1.2.22388",
"balance": {
"amount": 2850282029,
"asset_id": "1.3.0"
},
"policy": [
1,{
"vesting_seconds": 86400,
"start_claim": "1970-01-01T00:00:00",
"coin_seconds_earned": "246177967305600",
"coin_seconds_earned_last_update": "2015-09-24T18:21:15"
}
]
}
]
Am I missing something?
2547247ms th_a application.cpp:388 handle_block ] Got block #165363 with time 2015-09-24T19:42:27 from network with latency of 250 ms from dele-puppy
2550252ms th_a application.cpp:388 handle_block ] Got block #165364 with time 2015-09-24T19:42:30 from network with latency of 255 ms from dele-puppy
Are we on git checkout master or test3c?
I'm on master.
Witness updated to master.
Can anyone explain why vesting balance objects would show up differently on different nodes? On my main active witness node (with no keys loaded into the cli_wallet) I get...
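For what it's worth, the two snapshots look mutually consistent if the withdrawable amount is bounded by coin_seconds_earned / vesting_seconds (my reading of the cashback vesting policy, not confirmed by the source). With the numbers posted above, both snapshots put that bound exactly 1,000,000 below the balance, which suggests the two nodes are simply showing the object at different block heights rather than being in inconsistent states:

```python
def max_withdrawable(coin_seconds_earned, vesting_seconds, balance):
    """Assumed vesting-policy bound: accrued coin-seconds divided by the
    required vesting time, capped by the actual balance."""
    return min(coin_seconds_earned // vesting_seconds, balance)

# Witness-node snapshot: balance 2926266575, coin_seconds_earned 252743032080000
print(max_withdrawable(252743032080000, 86400, 2926266575))  # 2925266575
# Laptop snapshot: balance 2850282029, coin_seconds_earned 246177967305600
print(max_withdrawable(246177967305600, 86400, 2850282029))  # 2849282029
```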
We don't store vote totals at this point in time.
Is it best practice to install ntp on the witness node, or does Graphene have a built-in time-syncing protocol?
Thanks. So the witness was voted out, and once it was no longer active, that value was not updated. Makes sense. Would someone be willing to vote my witness back in? :)
delegate-1.lafona
1) On 1st server blocks are being signed.
Missing some blocks - playing around with update_witness command and using past, present, and new keys.
2) On 2nd server I launched the witness_node with new keys.
3) On 2nd server I started a client with a new wallet
4) On 2nd server I import_key "riverhead" "privatekeyhere" and it returns "false"
5) On 2nd server if I type "list_my_accounts" I see riverhead but it has the key from server 1 (the active witness key)
6) On 2nd server if I type "dump_private_keys" I see the pub/priv key pair from step 4
7) On 2nd server type: update_witness "riverhead" "public key from step 4", which returns:
update_witness riverhead "" "GPH5AB42MtMGrcnjtgSjwSp7T6u79Te3FnGKC5gj7vdUKNQ9hU1AL" true
10 assert_exception: Assert Exception
it != _keys.end():
{}
th_a wallet.cpp:602 get_private_key
{"witness_name":"riverhead","url":"","block_signing_key":"GPH5AB42MtMGrcnjtgSjwSp7T6u79Te3FnGKC5gj7vdUKNQ9hU1AL","broadcast":true}
th_a wallet.cpp:1401 update_witness
unlocked >>>
Dumping private keys on the 2nd server returns the pub/priv key for the U1AL pair.
update_witness and changing and updating servers:
### change/update witness server and signing key ###
# on synced node with open wallet
suggest_brain_key
{
"brain_priv_key": "brain priv key brain priv key brain priv key brain priv key brain priv key brain priv key brain priv key brain priv key",
"wif_priv_key": "wif_priv_keywif_priv_keywif_priv_keywif_priv_key",
"pub_key": "pub_keypub_keypub_keypub_keypub_keypub_keypub_key"
}
import_key "delegate.ihashfury" "wif_priv_keywif_priv_keywif_priv_keywif_priv_key" true
# check new keys
dump_private_keys
# build and setup new witness server
# edit config.ini in data folder - add keys
witness_node -d testNet3 --resync-blockchain #--replay-blockchain
# wait until new server is synced
#on synced node with open wallet
# update_witness(string witness_name, string url, string block_signing_key, bool broadcast)
update_witness delegate.ihashfury "http://bit.ly/ihashfury" "pub_keypub_keypub_keypub_keypub_keypub_keypub_key" true
I strip and copy cli_wallet and witness_node to ~/bin (easy to use different data folders).
Is it best practice to install ntp on the witness node, or does Graphene have a built-in time-syncing protocol?
It is best practice to have NTP or PTP installed... to get to 1 second blocks we will need PTP installed on all witness nodes using a single source.
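A toy illustration of why the clock matters: with 3-second blocks, a witness whose clock drifts by even a large fraction of a second risks producing at the wrong slot. The 0.5-second tolerance below is an illustrative choice, not a protocol constant:

```python
def clock_ok(local_unix, reference_unix, tolerance=0.5):
    """True when the local clock is within `tolerance` seconds of a
    trusted reference (e.g. the time ntpd steers the host toward)."""
    return abs(local_unix - reference_unix) <= tolerance

print(clock_ok(1443000000.0, 1443000000.3))  # True: 0.3 s off
print(clock_ok(1443000000.0, 1443000001.0))  # False: a full second off
```

PTP matters for the 1-second-block goal because it can hold machines within microseconds of each other, versus milliseconds for NTP over the public internet.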
What if this source gets screwed up or manipulated in the future?
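In practice ntpd/PTP handles the syncing, but it's cheap to sanity-check a witness host's clock yourself. Below is a minimal SNTP query in stdlib Python; `pool.ntp.org` is just an example server, and the offset estimate deliberately ignores network delay, so treat it as a coarse check only:

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between the NTP (1900) and Unix (1970) epochs

def parse_transmit_time(data):
    """Extract the server transmit timestamp (as Unix time) from a 48-byte SNTP response."""
    secs, frac = struct.unpack("!II", data[40:48])
    return secs - NTP_EPOCH_OFFSET + frac / 2**32

def sntp_offset(server="pool.ntp.org", timeout=2.0):
    """Rough local-clock offset in seconds from a single SNTP query."""
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        t0 = time.time()
        s.sendto(packet, (server, 123))
        data, _ = s.recvfrom(48)
        t1 = time.time()
    # midpoint of send/receive as the local comparison time
    return parse_transmit_time(data) - (t0 + t1) / 2
```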
155 tps! Good job spartako.
https://graphene.bitshares.org/#/block/187317
Thanks your help spamming with me! +5%
Btw, my spamming performance is too low; only about 10 tps. Is it because I'm using a virtual machine on my laptop? Are there any tips for well-performing spamming?
I don't know why; it probably depends on the power of the machine. I'm on Digital Ocean with 16G, 8 cores (anyway, it's a virtual machine).
What I see is that if I try to push too many transactions, they are not broadcasted, so I start a command like this:
node bin/flood.js spartako spartako1 ws://127.0.0.1:8099 400
But after 1000/1200 tx I kill the program (C-c) and usually I have spikes similar when I obtained 155 tps.
If I continue to spam at this rate it seems that transactions are not broadcasted.
The balance goes down and after a while goes up, for example:
list_account_balances spartako
725729.97991 CORE
unlocked >>> list_account_balances spartako
list_account_balances spartako
718831.99894 CORE
unlocked >>> list_account_balances spartako
list_account_balances spartako
715724.23278 CORE
unlocked >>> list_account_balances spartako
list_account_balances spartako
710737.10803 CORE
unlocked >>> list_account_balances spartako
list_account_balances spartako
705602.99434 CORE
unlocked >>> list_account_balances spartako
list_account_balances spartako
724512.07155 CORE
Can it be an issue or considered just as node performance problem?
I suspect what is happening is the other peers are forgetting about the inventory notification and it is never being rebroadcast so it sits in the local node's cache until it expires.
Only an issue during spamming
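The symptom above (transactions sitting unbroadcast when pushed too fast) suggests pacing the flooder instead of firing as fast as possible. flood.js internals aren't shown in the thread; here is a minimal pacing loop sketch, where `send` stands in for whatever broadcast call you use:

```python
import time

def pace(n_tx, tps, send, clock=time.monotonic, sleep=time.sleep):
    """Broadcast n_tx transactions at roughly `tps` per second.

    `send` is the caller's broadcast callable; clock/sleep are injectable
    for testing. Sleeps only as long as needed to stay on schedule.
    """
    interval = 1.0 / tps
    start = clock()
    for i in range(n_tx):
        delay = start + i * interval - clock()
        if delay > 0:
            sleep(delay)
        send(i)
```

Throttling at the source keeps the local node's pending-transaction cache from filling with items the peers never re-request.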
Once boost is built from source and "b2 install" is run on the target VPS can the boost/boost_1_57_0 folder hierarchy be removed?
Does the witness_node and cli_wallet binaries require anything from the boost tree or only from the shared libraries (/usr/local/lib/libboost*.so.1.57.0)? What about the gui out of curiosity?
ldd witness_node shows:
linux-vdso.so.1 => (0x00007ffe627f7000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f00b79ad000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f00b77a8000)
librt.so.1 => /lib64/librt.so.1 (0x00007f00b75a0000)
libstdc++.so.6 => /lib64/libstdc++.so.6 (0x00007f00b7291000)
libm.so.6 => /lib64/libm.so.6 (0x00007f00b6f88000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f00b6d71000)
libc.so.6 => /lib64/libc.so.6 (0x00007f00b69b4000)
/lib64/ld-linux-x86-64.so.2 (0x000055d7ecbf4000)
which shows that witness_node is statically linked against Boost. Shared boost libs are not needed on the production host.
I would assume that for the build you only need the C++ headers and the static *.a
boost libs from the install dir to build a witness.
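One way to verify this on your own build (assuming a Linux host where `ldd` is available) is to scan the binary's dynamic dependencies for Boost entries:

```python
import subprocess

def dynamic_boost_libs(binary):
    """Return the shared Boost libraries `binary` links against (Linux, uses ldd).
    An empty list means Boost was linked statically or not used at all."""
    out = subprocess.run(["ldd", binary], capture_output=True, text=True).stdout
    return [line.split()[0] for line in out.splitlines() if "libboost" in line]
```

Run it on your `witness_node` path; if the list is empty, the shared /usr/local/lib/libboost*.so files can be removed from the production host.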
Thanks for the reply jtme, I'm a bit rusty with my unix dev skills. Is that list definitive in terms of the boost libs required by graphene, and what tool did you use to get that list?
From what I can see boost is so big almost nobody uses it all so it gets chopped up and pulled apart to make use of various elements, if they want a static build. Static builds are great for removing external dependencies, which really makes sense for building blockchain apps & tools, but it can make for some mighty big executable images.
One last but important question, this may only be known by cryptonomex devs: can boost 1.57.0 libs be installed on a host running the 0.9.x code without any collisions? If 0.9.x was also built with static linking I presume the answer is yes, they can coexist without an issue. I'd like a solid confirmation though before I run my graphene VPS setup script on my production delegate VPS.
Hello all,
I've been lurking here since mid-2014 and I've decided to become more active.
Please vote for my new witness node: mindphlux-witness on testnet.
I've already talked to bytemaster, I will learn ReactJS and help out the frontend development team. Web development is my main job, very experienced in plain JS, so it should be an easy task.
Thank you
Best Regards
mindphlux
Awesome, welcome! :)
I set up 51 VPSs from three different regions around the world. If I did correctly, they will spam the network at the same time at 2015/9/28 0:02 AM PST (7:02 UTC)
wow ..
unbelievable, how could you manage so many VPSs?
block 262244: 1715 txs
The p2p thread has 100% CPU usage, can we improve it more?
top - 07:20:31 up 208 days, 10:26, 1 user, load average: 1.23, 1.21, 0.96
Threads: 205 total, 3 running, 198 sleeping, 4 stopped, 0 zombie
%Cpu(s): 76.4 us, 20.2 sy, 0.0 ni, 1.0 id, 1.5 wa, 0.0 hi, 1.0 si, 0.0 st
KiB Mem: 2042528 total, 2022388 used, 20140 free, 1756 buffers
KiB Swap: 4227064 total, 1671884 used, 2555180 free. 1253948 cached Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20771 alt 20 0 988692 190716 6656 S 0.0 9.3 8:44.70 witness_node
20774 alt 20 0 988692 190716 6656 S 0.0 9.3 0:00.35 ntp
20775 alt 20 0 988692 190716 6656 S 0.0 9.3 7:36.21 asio
20776 alt 20 0 988692 190716 6656 S 0.0 9.3 0:00.08 ntp
20777 alt 20 0 988692 190716 6656 R 99.1 9.3 358:36.98 p2p
Test is done
Duration: 30 min
Total Transactions: approximately 40,000 txs
Max TPS: 435.7 (block #262215)
Participation rate before testing: 97%
Participation rate after testing: 79%
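The participation rate quoted above is just the fraction of recently scheduled witness slots that actually produced a block. A one-line sketch of the derivation (the node's exact sliding-window size is an assumption here, not taken from the source):

```python
def participation_rate(produced_flags):
    """Percentage of recent scheduled slots that actually produced a block.

    `produced_flags` holds one boolean per scheduled slot in the window.
    This sketches how the figure can be derived, not the node's exact window.
    """
    return 100.0 * sum(produced_flags) / len(produced_flags)
```

So a drop from 97% to 79% means roughly one in five scheduled blocks went missing during the flood.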
Assuming similar delegate configuration at launch, cost of getting 1/5 of delegates down is ~5.2k $
https://graphene.bitshares.org/#/block/262215
1307 txs = 435.7 tps
Wow!! Great work!!
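The 435.7 figure follows directly from the 3-second block interval discussed in this thread:

```python
BLOCK_INTERVAL_S = 3  # testnet block time discussed in this thread

def tps(tx_count, interval=BLOCK_INTERVAL_S):
    """Transactions per second implied by a single block's transaction count."""
    return tx_count / interval

# block 262215 carried 1307 transactions
```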
Test is done
Duration: 30 min
Total Transactions: approximately 40,000 txs
Max TPS: 435.7 (block #262215)
Participation rate before testing: 94%
Participation rate after testing: 79%
"id": "1.6.5252",
"witness_account": "1.2.7109",
"last_aslot": 214677,
"signing_key": "GPH6saDVJCEaHreyMhp4yumANMyyr73b8cYg3PCyWxrnufop2NwXs",
"pay_vb": "1.13.50",
"vote_id": "1:5275",
"total_votes": "9563612188418",
"url": "",
"total_missed": 1379
}
unlocked >>> get_witness betaxtrade
get_witness betaxtrade
{
"id": "1.6.5252",
"witness_account": "1.2.7109",
"last_aslot": 214677,
"signing_key": "GPH6saDVJCEaHreyMhp4yumANMyyr73b8cYg3PCyWxrnufop2NwXs",
"pay_vb": "1.13.50",
"vote_id": "1:5275",
"total_votes": "9563612188418",
"url": "",
"total_missed": 1388
}
unlocked >>> get_witness betaxtrade
get_witness betaxtrade
{
"id": "1.6.5252",
"witness_account": "1.2.7109",
"last_aslot": 214677,
"signing_key": "GPH6saDVJCEaHreyMhp4yumANMyyr73b8cYg3PCyWxrnufop2NwXs",
"pay_vb": "1.13.50",
"vote_id": "1:5275",
"total_votes": "9563612188418",
"url": "",
"total_missed": 1388
}
unlocked >>> get_witness betaxtrade
get_witness betaxtrade
{
"id": "1.6.5252",
"witness_account": "1.2.7109",
"last_aslot": 214677,
"signing_key": "GPH6saDVJCEaHreyMhp4yumANMyyr73b8cYg3PCyWxrnufop2NwXs",
"pay_vb": "1.13.50",
"vote_id": "1:5275",
"total_votes": "9563612188418",
"url": "",
"total_missed": 1397
}
unlocked >>> get_witness betaxtrade
get_witness betaxtrade
{
"id": "1.6.5252",
"witness_account": "1.2.7109",
"last_aslot": 274302,
"signing_key": "GPH6saDVJCEaHreyMhp4yumANMyyr73b8cYg3PCyWxrnufop2NwXs",
"pay_vb": "1.13.50",
"vote_id": "1:5275",
"total_votes": "9524011927872",
"url": "",
"total_missed": 1806
}
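Polling get_witness like this and diffing `total_missed` between snapshots is an easy way to tell whether a witness is missing blocks right now (here it climbed 1379 → 1806 during the test). A small sketch, assuming you capture the JSON objects from successive get_witness calls:

```python
import json

def missed_deltas(snapshots):
    """Per-interval missed-block counts from successive get_witness JSON outputs.
    A positive delta means the witness missed blocks during that interval."""
    missed = [json.loads(s)["total_missed"] for s in snapshots]
    return [b - a for a, b in zip(missed, missed[1:])]
```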
15:48 from network with latency of 320 ms from init2
951317ms th_a application.cpp:388 handle_block ] Got block #248796 with time 2015-09-27T19:15:51 from network with latency of 318 ms from init3
954311ms th_a application.cpp:388 handle_block ] Got block #248797 with time 2015-09-27T19:15:54 from network with latency of 312 ms from init6
957353ms th_a application.cpp:388 handle_block ] Got block #248798 with time 2015-09-27T19:15:57 from network with latency of 354 ms from maqifrnswa
960544ms th_a application.cpp:388 handle_block ] Got block #248799 with time 2015-09-27T19:16:00 from network with latency of 546 ms from delegate-dev2.btsnow
960869ms th_a application.cpp:518 get_item ] Serving up block #248799
961012ms th_a application.cpp:432 handle_transaction ] Got transaction from network
961232ms th_a application.cpp:432 handle_transaction ] Got transaction from network
witness_node: /home/calabiyau/graphene/libraries/fc/src/thread/thread_d.hpp:370: bool fc::thread_d::start_next_fiber(bool): Assertion `std::current_exception() == std::exception_ptr()' failed.
Aborted (core dumped)
After running stably for days, the witness didn't survive the storm :(
(http://i.imgur.com/2fKiIB6.png)
wow....
Are the 3 second blocks going to be the standard or is there a planned change later with a hard fork?
3 secs initially .. then reducing it if shareholders approve
AFAIK cost per transaction will be 20 BTS at the beginning. So in the real network, transaction cost will be 20 x 40k = 800k BTS
Test is done
Duration: 30 min
Total Transactions: approximately 40,000 txs
Max TPS: 435.7 (block #262215)
Participation rate before testing: 97%
Participation rate after testing: 79%
Assuming similar delegate configuration at launch, cost of getting 1/5 of delegates down is ~5.2k $
I spent 100k CORE. (But will withdraw 75k of them as a vesting)
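The 800k BTS figure is straightforward fee arithmetic (the 20 BTS fee is the poster's assumption about launch parameters, not a confirmed value):

```python
FEE_BTS = 20        # assumed launch-time transfer fee, per the post above
TX_COUNT = 40_000   # approximate transactions in the 30-minute test

total_cost_bts = FEE_BTS * TX_COUNT  # cost to replay this flood on mainnet
```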
Any chance that we could get an estimate of the number of broadcast transactions that expire before being included in the blockchain?
268186 '1.6.4' 'init3' 15
268185 '1.6.11' 'init10' 9
268184 '1.6.1527' 'dele-puppy' 5
268183 '1.6.2' 'init1' 82
268182 '1.6.2104' 'fox' 7
268181 '1.6.3' 'init2' 1
268180 '1.6.5252' 'betaxtrade' 53
268179 '1.6.7' 'init6' 0
268178 '1.6.1' 'init0' 52
268177 '1.6.9' 'init8' 11
268326 '1.6.1' 'init0' 67
268325 '1.6.1531' 'delegate-1.lafona' 52
268324 '1.6.11' 'init10' 11
268323 '1.6.1531' 'delegate-1.lafona' 32
268322 '1.6.12' 'init11' 3
268321 '1.6.3356' 'mr.agsexplorer' 51
268320 '1.6.3360' 'mrs.agsexplorer' 119
268319 '1.6.4949' 'xeldal' 65
268375 '1.6.3360' 'mrs.agsexplorer' 110
268374 '1.6.11' 'init10' 3
268373 '1.6.3356' 'mr.agsexplorer' 201
268372 '1.6.6' 'init5' 36
268371 '1.6.1569' 'delegate.baozi' 5
268370 '1.6.3184' 'maqifrnswa' 75
268369 '1.6.7' 'init6' 0
268368 '1.6.3968' 'riverhead' 121
268367 '1.6.5' 'init4' 24
268366 '1.6.4' 'init3' 10
268365 '1.6.5252' 'betaxtrade' 15
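Listings like the (block, witness id, witness, tx count) rows above are easy to aggregate per witness, which helps spot who included traffic during the flood. A small sketch over a subset of the rows shown:

```python
from collections import defaultdict

# a subset of the (block, witness_id, witness, tx_count) rows listed above
rows = [
    (268186, "1.6.4", "init3", 15),
    (268185, "1.6.11", "init10", 9),
    (268184, "1.6.1527", "dele-puppy", 5),
    (268183, "1.6.2", "init1", 82),
]

def tx_per_witness(rows):
    """Total transactions included per witness over the sampled blocks."""
    totals = defaultdict(int)
    for _block, _wid, witness, n in rows:
        totals[witness] += n
    return dict(totals)
```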
The 3 second blocks going to be the standard or is there a planned change later with a hard fork?
2015-09-28T07:05:33 th_a:invoke handle_block handle_block ] Got block #262123 with time 2015-09-28T07:05:30 from network with latency of 3618 ms from init11 application.cpp:388
2015-09-28T07:05:38 th_a:invoke handle_transaction handle_transaction ] Got transaction from network application.cpp:432
After that the witness node received nothing from the network, but recursively tried to generate blocks by itself.
Selling asset fee is only 2.5 CORE.
So you can basically hang the network with a couple thousand dollars for a couple hours maybe? I am trying to understand if this is a valid attack vector.
Actually the max tps was 571 from block 262244. And the participation rate during the test was about 50%
I'm planning second stress test with 101 VPSs 10 txs per second each.
But should I wait for a network protocol fix (which can handle spam transactions more effectively)?
Probably.
I will flood the testnet at 9/29 0:10 UTC with 101 VPSs. Tips for VPS costs are more than welcome. ;)
BTS ID: clayop
Due to the limit of numbers per region, I can only manage about 60 VPSs.
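Coordinating dozens of VPSs to start at the same UTC instant is mostly a matter of having each box sleep until the agreed time (which is why the NTP discussion earlier matters). A minimal sketch of that wait:

```python
from datetime import datetime, timezone

def seconds_until(utc_iso, now=None):
    """Seconds to sleep before a coordinated start like '2015-09-29T00:10:00' UTC.
    `now` is injectable for testing; defaults to the current UTC time."""
    target = datetime.fromisoformat(utc_iso).replace(tzinfo=timezone.utc)
    if now is None:
        now = datetime.now(timezone.utc)
    return max(0.0, (target - now).total_seconds())
```

Each flooder would call `time.sleep(seconds_until(...))` before launching; accuracy is then bounded by how well the hosts' clocks are synced.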
Sent you 500 BTS to help with your costs ;D
2015-09-28T20:08:30 p2p:message read_loop process_block_during ] received a block from peer 216.252.204.69:54183, passing it to client node.cpp:3232
2015-09-28T20:08:30 p2p:message read_loop process_block_during ] Successfully pushed block 275994 (id:0004361a7b4321512a38bc112c8c8ecd84d3e59c) node.cpp:3254
....................
2015-09-28T20:08:36 p2p:message read_loop process_block_during ] received a block from peer 114.92.254.159:62015, passing it to client node.cpp:3232
.........................
2015-09-28T20:08:36 p2p:message read_loop process_block_during ] Failed to push block 275996 (id:0004361c004527fceb33c4dfe9062a92a6421508), client rejected block sent by peer node.cpp:3346
2015-09-28T20:08:36 p2p:message read_loop process_block_during ] disconnecting client 114.92.254.159:62015 because it offered us the rejected block node.cpp:3368
...............
2015-09-28T20:08:37 p2p:message read_loop process_block_during ] received a block from peer 185.42.242.124:40060, passing it to client node.cpp:3232
2015-09-28T20:08:37 p2p:message read_loop process_block_during ] Failed to push block 275996 (id:0004361c004527fceb33c4dfe9062a92a6421508), client rejected block sent by peer node.cpp:3346
2015-09-28T20:08:37 p2p:message read_loop process_block_during ] disconnecting client 185.42.242.124:40060 because it offered us the rejected block node.cpp:3368
..............
2015-09-28T20:08:39 p2p:message read_loop process_block_during ] received a block from peer 178.62.88.151:46944, passing it to client node.cpp:3232
2015-09-28T20:08:39 p2p:message read_loop process_block_during ] Successfully pushed block 275996 (id:0004361c4c4b8410260663693fd59b7ca1977b86) node.cpp:3254
2015-09-28T20:08:39 p2p:message read_loop process_block_during ] client validated the block, advertising it to other peers node.cpp:3278
..............
2015-09-28T20:08:42 p2p:message read_loop process_block_during ] received a block from peer 178.62.88.151:46944, passing it to client node.cpp:3232
2015-09-28T20:08:42 p2p:message read_loop process_block_during ] Peer 178.62.88.151:46944 sent me a block that didn't link to our blockchain. Restarting sync mode wi th them to get the missing block. Error pushing block was: {"code":90006,"name":"unlinkable_block_exception","message":"unlinkable block","stack":[{"context":{"level" :"error","file":"application.cpp","line":417,"method":"handle_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:42"},"format":"Error when pushin g block:\n${e}","data":{"e":"3080000 unlinkable_block_exception: unlinkable block\nblock does not link to known chain\n {}\n th_a fork_database.cpp:79 _push_bl ock\n\n {\"new_block\":{\"previous\":\"0004361c004527fceb33c4dfe9062a92a6421508\",\"timestamp\":\"2015-09-28T20:08:42\",\"witness\":\"1.6.5248\",\"transaction_merk le_root\":\"0000000000000000000000000000000000000000\",\"extensions\":[],\"witness_signature\":\"1f051e73526e48fd46a4359daaede6e5e5a8b6f451194c24caad9a1287506ef0cd026 c0ecdadb5ea35b9843dc0bd68faee59053278a7ce6e571f838a2d870bebc9\",\"transactions\":[]}}\n th_a db_block.cpp:195 _push_block"}},{"context":{"level":"warn","file":"ap plication.cpp","line":428,"method":"handle_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:42"},"format":"","data":{"blk_msg":{"block":{"previ ous":"0004361c004527fceb33c4dfe9062a92a6421508","timestamp":"2015-09-28T20:08:42","witness":"1.6.5248","transaction_merkle_root":"000000000000000000000000000000000000 0000","extensions":[],"witness_signature":"1f051e73526e48fd46a4359daaede6e5e5a8b6f451194c24caad9a1287506ef0cd026c0ecdadb5ea35b9843dc0bd68faee59053278a7ce6e571f838a2d8 70bebc9","transactions":[]},"block_id":"0004361d1c4d414512a8ae8e99ecf2026627d08e"},"sync_mode":false}}]} node.cpp:3362
d>2015-09-28T20:08:42 p2p:message read_loop fetch_next_batch_of_ ] sync: sending a request for the next items after 0004361c4c4b8410260663693fd59b7ca1977b86 to peer 178 .62.88.151:46944, (full request is ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c"," 0004361c4c4b8410260663693fd59b7ca1977b86"]) node.cpp:2354
............................
2015-09-28T20:08:42 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 178.62.88.151:46944's last block the delegat e has seen is now 0004361a7b4321512a38bc112c8c8ecd84d3e59c (actual block #275994) node.cpp:2511
2015-09-28T20:08:42 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 178.62.88.151:46944's last block the delegat e has seen is now 0004361bd4f6fdf618a74290b09a8a018ab7867b (actual block #275995) node.cpp:2511
2015-09-28T20:08:42 p2p:message read_loop on_blockchain_item_i ] after removing all items we have already seen, item_hashes_received.size() = 2 node.cpp:2515
2015-09-28T20:08:42 p2p:message read_loop trigger_fetch_sync_i ] Triggering fetch sync items loop now node.cpp:1083
2015-09-28T20:08:42 p2p:fetch_sync_items_loop fetch_sync_items_loo ] beginning another iteration of the sync items loop node.cpp:1022
2015-09-28T20:08:42 p2p:fetch_sync_items_loop request_sync_items_f ] requesting 2 item(s) ["0004361c004527fceb33c4dfe9062a92a6421508","0004361d1c4d414512a8ae8e99ecf20 26627d08e"] from peer 178.62.88.151:46944 node.cpp:1006
2015-09-28T20:08:42 p2p:fetch_sync_items_loop fetch_sync_items_loo ] no sync items to fetch right now, going to sleep node.cpp:1072
2015-09-28T20:08:42 p2p:message read_loop on_message ] handling message fetch_blockchain_item_ids_message_type f9797929aee8529c183c2a43787da7a9b3d24e14 size 85 from peer 23.102.65.247:1984 node.cpp:1684
2015-09-28T20:08:42 p2p:message read_loop on_fetch_blockchain_ ] sync: received a request for item ids after 0004361c4c4b8410260663693fd59b7ca1977b86 from peer 23.102 .65.247:1984 (full request: ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c","0004361 c4c4b8410260663693fd59b7ca1977b86"]) node.cpp:2171
2015-09-28T20:08:42 p2p:message read_loop on_fetch_blockchain_ ] reply_message: {"total_remaining_item_count":0,"item_type":1001,"item_hashes_available":["0004361c4c4 b8410260663693fd59b7ca1977b86"]} fetch_blockchain_item_ids_message_received.blockchain_synopsis: ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184 c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c","0004361c4c4b8410260663693fd59b7ca1977b86"] node.cpp:2194
2015-09-28T20:08:42 p2p:message read_loop on_fetch_blockchain_ ] sync: peer is already in sync with us node.cpp:2213
2015-09-28T20:08:43 p2p:message read_loop on_message ] handling message fetch_blockchain_item_ids_message_type f9797929aee8529c183c2a43787da7a9b3d24e14 size 85 from peer 207.46.141.218:1344 node.cpp:1684
2015-09-28T20:08:43 p2p:message read_loop on_fetch_blockchain_ ] sync: received a request for item ids after 0004361c4c4b8410260663693fd59b7ca1977b86 from peer 207.46 .141.218:1344 (full request: ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c","000436 1c4c4b8410260663693fd59b7ca1977b86"]) node.cpp:2171
2015-09-28T20:08:43 p2p:message read_loop on_fetch_blockchain_ ] reply_message: {"total_remaining_item_count":0,"item_type":1001,"item_hashes_available":["0004361c4c4 b8410260663693fd59b7ca1977b86"]} fetch_blockchain_item_ids_message_received.blockchain_synopsis: ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184 c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c","0004361c4c4b8410260663693fd59b7ca1977b86"] node.cpp:2194
2015-09-28T20:08:43 p2p:message read_loop on_fetch_blockchain_ ] sync: peer is already in sync with us node.cpp:2213
2015-09-28T20:08:43 p2p:message read_loop on_message ] handling message block_message_type 1003c4eca3a31fd0e4933c919f37b072876911dc size 133 from peer 71.19 7.2.119:1776 node.cpp:1684
2015-09-28T20:08:43 p2p:message read_loop process_block_during ] received a block from peer 71.197.2.119:1776, passing it to client node.cpp:3232
2015-09-28T20:08:43 p2p:message read_loop process_block_during ] Peer 71.197.2.119:1776 sent me a block that didn't link to our blockchain. Restarting sync mode with them to get the missing block. Error pushing block was: {"code":90006,"name":"unlinkable_block_exception","message":"unlinkable block","stack":[{"context":{"level":" error","file":"application.cpp","line":417,"method":"handle_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:43"},"format":"Error when pushing block:\n${e}","data":{"e":"3080000 unlinkable_block_exception: unlinkable block\nblock does not link to known chain\n {}\n th_a fork_database.cpp:79 _push_bloc k\n\n {\"new_block\":{\"previous\":\"0004361c004527fceb33c4dfe9062a92a6421508\",\"timestamp\":\"2015-09-28T20:08:42\",\"witness\":\"1.6.5248\",\"transaction_merkle _root\":\"0000000000000000000000000000000000000000\",\"extensions\":[],\"witness_signature\":\"1f051e73526e48fd46a4359daaede6e5e5a8b6f451194c24caad9a1287506ef0cd026c0 ecdadb5ea35b9843dc0bd68faee59053278a7ce6e571f838a2d870bebc9\",\"transactions\":[]}}\n th_a db_block.cpp:195 _push_block"}},{"context":{"level":"warn","file":"appl ication.cpp","line":428,"method":"handle_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:43"},"format":"","data":{"blk_msg":{"block":{"previou s":"0004361c004527fceb33c4dfe9062a92a6421508","timestamp":"2015-09-28T20:08:42","witness":"1.6.5248","transaction_merkle_root":"00000000000000000000000000000000000000 00","extensions":[],"witness_signature":"1f051e73526e48fd46a4359daaede6e5e5a8b6f451194c24caad9a1287506ef0cd026c0ecdadb5ea35b9843dc0bd68faee59053278a7ce6e571f838a2d870 bebc9","transactions":[]},"block_id":"0004361d1c4d414512a8ae8e99ecf2026627d08e"},"sync_mode":false}}]} node.cpp:3362
2015-09-28T20:08:43 p2p:message read_loop fetch_next_batch_of_ ] sync: sending a request for the next items after 0004361c4c4b8410260663693fd59b7ca1977b86 to peer 71. 197.2.119:1776, (full request is ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c","00 04361c4c4b8410260663693fd59b7ca1977b86"]) node.cpp:2354
..............
2015-09-28T20:08:43 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Successfully pushed sync block 275996 (id:0004361c004527fceb33c4dfe9062a92a6421508) node.cpp:2935
..............
2015-09-28T20:08:43 p2p:send_sync_block_to_node_delegate fetch_next_batch_of_ ] sync: sending a request for the next items after 0004361c4c4b8410260663693fd59b7ca1977 b86 to peer 71.197.2.119:61371, (full request is ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184c26912df003d6ac2","0004361a7b4321512a38bc112c8c8 ecd84d3e59c","0004361c4c4b8410260663693fd59b7ca1977b86"]) node.cpp:2354
...............
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 109.73.172.144:43494's last block the delega te has seen is now 0004361a7b4321512a38bc112c8c8ecd84d3e59c (actual block #275994) node.cpp:2511
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 109.73.172.144:43494's last block the delega te has seen is now 0004361bd4f6fdf618a74290b09a8a018ab7867b (actual block #275995) node.cpp:2511
Forked at block 275997. Here is a piece from the log when it forked:

2015-09-28T20:08:30 p2p:message read_loop process_block_during ] received a block from peer 216.252.204.69:54183, passing it to client node.cpp:3232
2015-09-28T20:08:30 p2p:message read_loop process_block_during ] Successfully pushed block 275994 (id:0004361a7b4321512a38bc112c8c8ecd84d3e59c) node.cpp:3254
....................
2015-09-28T20:08:36 p2p:message read_loop process_block_during ] received a block from peer 114.92.254.159:62015, passing it to client node.cpp:3232
.........................
2015-09-28T20:08:36 p2p:message read_loop process_block_during ] Failed to push block 275996 (id:0004361c004527fceb33c4dfe9062a92a6421508), client rejected block sent by peer node.cpp:3346
2015-09-28T20:08:36 p2p:message read_loop process_block_during ] disconnecting client 114.92.254.159:62015 because it offered us the rejected block node.cpp:3368
...............
2015-09-28T20:08:37 p2p:message read_loop process_block_during ] received a block from peer 185.42.242.124:40060, passing it to client node.cpp:3232
2015-09-28T20:08:37 p2p:message read_loop process_block_during ] Failed to push block 275996 (id:0004361c004527fceb33c4dfe9062a92a6421508), client rejected block sent by peer node.cpp:3346
2015-09-28T20:08:37 p2p:message read_loop process_block_during ] disconnecting client 185.42.242.124:40060 because it offered us the rejected block node.cpp:3368
..............
2015-09-28T20:08:39 p2p:message read_loop process_block_during ] received a block from peer 178.62.88.151:46944, passing it to client node.cpp:3232
2015-09-28T20:08:39 p2p:message read_loop process_block_during ] Successfully pushed block 275996 (id:0004361c4c4b8410260663693fd59b7ca1977b86) node.cpp:3254
2015-09-28T20:08:39 p2p:message read_loop process_block_during ] client validated the block, advertising it to other peers node.cpp:3278
..............
2015-09-28T20:08:42 p2p:message read_loop process_block_during ] received a block from peer 178.62.88.151:46944, passing it to client node.cpp:3232
2015-09-28T20:08:42 p2p:message read_loop process_block_during ] Peer 178.62.88.151:46944 sent me a block that didn't link to our blockchain. Restarting sync mode wi th them to get the missing block. Error pushing block was: {"code":90006,"name":"unlinkable_block_exception","message":"unlinkable block","stack":[{"context":{"level" :"error","file":"application.cpp","line":417,"method":"handle_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:42"},"format":"Error when pushin g block:\n${e}","data":{"e":"3080000 unlinkable_block_exception: unlinkable block\nblock does not link to known chain\n {}\n th_a fork_database.cpp:79 _push_bl ock\n\n {\"new_block\":{\"previous\":\"0004361c004527fceb33c4dfe9062a92a6421508\",\"timestamp\":\"2015-09-28T20:08:42\",\"witness\":\"1.6.5248\",\"transaction_merk le_root\":\"0000000000000000000000000000000000000000\",\"extensions\":[],\"witness_signature\":\"1f051e73526e48fd46a4359daaede6e5e5a8b6f451194c24caad9a1287506ef0cd026 c0ecdadb5ea35b9843dc0bd68faee59053278a7ce6e571f838a2d870bebc9\",\"transactions\":[]}}\n th_a db_block.cpp:195 _push_block"}},{"context":{"level":"warn","file":"ap plication.cpp","line":428,"method":"handle_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:42"},"format":"","data":{"blk_msg":{"block":{"previ ous":"0004361c004527fceb33c4dfe9062a92a6421508","timestamp":"2015-09-28T20:08:42","witness":"1.6.5248","transaction_merkle_root":"000000000000000000000000000000000000 0000","extensions":[],"witness_signature":"1f051e73526e48fd46a4359daaede6e5e5a8b6f451194c24caad9a1287506ef0cd026c0ecdadb5ea35b9843dc0bd68faee59053278a7ce6e571f838a2d8 70bebc9","transactions":[]},"block_id":"0004361d1c4d414512a8ae8e99ecf2026627d08e"},"sync_mode":false}}]} node.cpp:3362
d>2015-09-28T20:08:42 p2p:message read_loop fetch_next_batch_of_ ] sync: sending a request for the next items after 0004361c4c4b8410260663693fd59b7ca1977b86 to peer 178 .62.88.151:46944, (full request is ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c"," 0004361c4c4b8410260663693fd59b7ca1977b86"]) node.cpp:2354
............................
2015-09-28T20:08:42 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 178.62.88.151:46944's last block the delegat e has seen is now 0004361a7b4321512a38bc112c8c8ecd84d3e59c (actual block #275994) node.cpp:2511
2015-09-28T20:08:42 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 178.62.88.151:46944's last block the delegat e has seen is now 0004361bd4f6fdf618a74290b09a8a018ab7867b (actual block #275995) node.cpp:2511
2015-09-28T20:08:42 p2p:message read_loop on_blockchain_item_i ] after removing all items we have already seen, item_hashes_received.size() = 2 node.cpp:2515
2015-09-28T20:08:42 p2p:message read_loop trigger_fetch_sync_i ] Triggering fetch sync items loop now node.cpp:1083
2015-09-28T20:08:42 p2p:fetch_sync_items_loop fetch_sync_items_loo ] beginning another iteration of the sync items loop node.cpp:1022
2015-09-28T20:08:42 p2p:fetch_sync_items_loop request_sync_items_f ] requesting 2 item(s) ["0004361c004527fceb33c4dfe9062a92a6421508","0004361d1c4d414512a8ae8e99ecf20 26627d08e"] from peer 178.62.88.151:46944 node.cpp:1006
2015-09-28T20:08:42 p2p:fetch_sync_items_loop fetch_sync_items_loo ] no sync items to fetch right now, going to sleep node.cpp:1072
2015-09-28T20:08:42 p2p:message read_loop on_message ] handling message fetch_blockchain_item_ids_message_type f9797929aee8529c183c2a43787da7a9b3d24e14 size 85 from peer 23.102.65.247:1984 node.cpp:1684
2015-09-28T20:08:42 p2p:message read_loop on_fetch_blockchain_ ] sync: received a request for item ids after 0004361c4c4b8410260663693fd59b7ca1977b86 from peer 23.102 .65.247:1984 (full request: ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c","0004361 c4c4b8410260663693fd59b7ca1977b86"]) node.cpp:2171
2015-09-28T20:08:42 p2p:message read_loop on_fetch_blockchain_ ] reply_message: {"total_remaining_item_count":0,"item_type":1001,"item_hashes_available":["0004361c4c4 b8410260663693fd59b7ca1977b86"]} fetch_blockchain_item_ids_message_received.blockchain_synopsis: ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184 c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c","0004361c4c4b8410260663693fd59b7ca1977b86"] node.cpp:2194
2015-09-28T20:08:42 p2p:message read_loop on_fetch_blockchain_ ] sync: peer is already in sync with us node.cpp:2213
2015-09-28T20:08:43 p2p:message read_loop on_message ] handling message fetch_blockchain_item_ids_message_type f9797929aee8529c183c2a43787da7a9b3d24e14 size 85 from peer 207.46.141.218:1344 node.cpp:1684
2015-09-28T20:08:43 p2p:message read_loop on_fetch_blockchain_ ] sync: received a request for item ids after 0004361c4c4b8410260663693fd59b7ca1977b86 from peer 207.46 .141.218:1344 (full request: ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c","000436 1c4c4b8410260663693fd59b7ca1977b86"]) node.cpp:2171
2015-09-28T20:08:43 p2p:message read_loop on_fetch_blockchain_ ] reply_message: {"total_remaining_item_count":0,"item_type":1001,"item_hashes_available":["0004361c4c4 b8410260663693fd59b7ca1977b86"]} fetch_blockchain_item_ids_message_received.blockchain_synopsis: ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184 c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c","0004361c4c4b8410260663693fd59b7ca1977b86"] node.cpp:2194
2015-09-28T20:08:43 p2p:message read_loop on_fetch_blockchain_ ] sync: peer is already in sync with us node.cpp:2213
2015-09-28T20:08:43 p2p:message read_loop on_message ] handling message block_message_type 1003c4eca3a31fd0e4933c919f37b072876911dc size 133 from peer 71.19 7.2.119:1776 node.cpp:1684
2015-09-28T20:08:43 p2p:message read_loop process_block_during ] received a block from peer 71.197.2.119:1776, passing it to client node.cpp:3232
2015-09-28T20:08:43 p2p:message read_loop process_block_during ] Peer 71.197.2.119:1776 sent me a block that didn't link to our blockchain. Restarting sync mode with them to get the missing block. Error pushing block was: {"code":90006,"name":"unlinkable_block_exception","message":"unlinkable block","stack":[{"context":{"level":" error","file":"application.cpp","line":417,"method":"handle_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:43"},"format":"Error when pushing block:\n${e}","data":{"e":"3080000 unlinkable_block_exception: unlinkable block\nblock does not link to known chain\n {}\n th_a fork_database.cpp:79 _push_bloc k\n\n {\"new_block\":{\"previous\":\"0004361c004527fceb33c4dfe9062a92a6421508\",\"timestamp\":\"2015-09-28T20:08:42\",\"witness\":\"1.6.5248\",\"transaction_merkle _root\":\"0000000000000000000000000000000000000000\",\"extensions\":[],\"witness_signature\":\"1f051e73526e48fd46a4359daaede6e5e5a8b6f451194c24caad9a1287506ef0cd026c0 ecdadb5ea35b9843dc0bd68faee59053278a7ce6e571f838a2d870bebc9\",\"transactions\":[]}}\n th_a db_block.cpp:195 _push_block"}},{"context":{"level":"warn","file":"appl ication.cpp","line":428,"method":"handle_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:43"},"format":"","data":{"blk_msg":{"block":{"previou s":"0004361c004527fceb33c4dfe9062a92a6421508","timestamp":"2015-09-28T20:08:42","witness":"1.6.5248","transaction_merkle_root":"00000000000000000000000000000000000000 00","extensions":[],"witness_signature":"1f051e73526e48fd46a4359daaede6e5e5a8b6f451194c24caad9a1287506ef0cd026c0ecdadb5ea35b9843dc0bd68faee59053278a7ce6e571f838a2d870 bebc9","transactions":[]},"block_id":"0004361d1c4d414512a8ae8e99ecf2026627d08e"},"sync_mode":false}}]} node.cpp:3362
2015-09-28T20:08:43 p2p:message read_loop fetch_next_batch_of_ ] sync: sending a request for the next items after 0004361c4c4b8410260663693fd59b7ca1977b86 to peer 71. 197.2.119:1776, (full request is ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184c26912df003d6ac2","0004361a7b4321512a38bc112c8c8ecd84d3e59c","00 04361c4c4b8410260663693fd59b7ca1977b86"]) node.cpp:2354
..............
2015-09-28T20:08:43 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Successfully pushed sync block 275996 (id:0004361c004527fceb33c4dfe9062a92a6421508) node.cpp:2935
..............
2015-09-28T20:08:43 p2p:send_sync_block_to_node_delegate fetch_next_batch_of_ ] sync: sending a request for the next items after 0004361c4c4b8410260663693fd59b7ca1977 b86 to peer 71.197.2.119:61371, (full request is ["0004361158e0898268252af7afa90b1d59ffa354","0004361707141c5119a53184c26912df003d6ac2","0004361a7b4321512a38bc112c8c8 ecd84d3e59c","0004361c4c4b8410260663693fd59b7ca1977b86"]) node.cpp:2354
...............
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 109.73.172.144:43494's last block the delega te has seen is now 0004361a7b4321512a38bc112c8c8ecd84d3e59c (actual block #275994) node.cpp:2511
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 109.73.172.144:43494's last block the delega te has seen is now 0004361bd4f6fdf618a74290b09a8a018ab7867b (actual block #275995) node.cpp:2511
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 109.73.172.144:43494's last block the delega te has seen is now 0004361c004527fceb33c4dfe9062a92a6421508 (actual block #275996) node.cpp:2511
...................
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 188.138.122.20:60328's last block the delega te has seen is now 0004361a7b4321512a38bc112c8c8ecd84d3e59c (actual block #275994) node.cpp:2511
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 188.165.233.53:1776's last block the delegat e has seen is now 0004361a7b4321512a38bc112c8c8ecd84d3e59c (actual block #275994) node.cpp:2511
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 188.138.122.20:60328's last block the delega te has seen is now 0004361bd4f6fdf618a74290b09a8a018ab7867b (actual block #275995) node.cpp:2511
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 188.165.233.53:1776's last block the delegat e has seen is now 0004361bd4f6fdf618a74290b09a8a018ab7867b (actual block #275995) node.cpp:2511
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 188.138.122.20:60328's last block the delega te has seen is now 0004361c004527fceb33c4dfe9062a92a6421508 (actual block #275996) node.cpp:2511
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] after removing all items we have already seen, item_hashes_received.size() = 1 node.cpp:2515
2015-09-28T20:08:43 p2p:message read_loop trigger_fetch_sync_i ] Triggering fetch sync items loop now node.cpp:1083
2015-09-28T20:08:43 p2p:fetch_sync_items_loop fetch_sync_items_loo ] beginning another iteration of the sync items loop node.cpp:1022
2015-09-28T20:08:43 p2p:fetch_sync_items_loop fetch_sync_items_loo ] no sync items to fetch right now, going to sleep node.cpp:1072
2015-09-28T20:08:43 p2p:message read_loop on_blockchain_item_i ] popping item because delegate has already seen it. peer 188.165.233.53:1776's last block the delegat e has seen is now 0004361c004527fceb33c4dfe9062a92a6421508 (actual block #275996) node.cpp:2511
.......................
2015-09-28T20:08:43 p2p:fetch_sync_items_loop fetch_sync_items_loo ] beginning another iteration of the sync items loop node.cpp:1022
2015-09-28T20:08:43 p2p:fetch_sync_items_loop fetch_sync_items_loo ] no sync items to fetch right now, going to sleep node.cpp:1072
f>2015-09-28T20:08:43 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Failed to push sync block 275997 (id:0004361d1c4d414512a8ae8e99ecf2026627d08e): client rejected sync block sent by peer: {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"vesting_balance_eval uator.cpp","line":103,"method":"do_evaluate","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:43"},"format":"vbo.is_withdraw_allowed( now, op.amount ) : ","data":{"now":"2015-09-28T20:08:33","op":{"fee":{"amount":50000,"asset_id":"1.3.0"},"vesting_balance":"1.13.30","owner":"1.2.22404","amount":{"amount":"6400000000 ","asset_id":"1.3.0"}},"vbo":{"id":"1.13.30","owner":"1.2.22404","balance":{"amount":"6392481409","asset_id":"1.3.0"},"policy":[1,{"vesting_seconds":86400,"start_clai m":"1970-01-01T00:00:00","coin_seconds_earned":"552223993737600","coin_seconds_earned_last_update":"2015-09-28T20:07:51"}]}}},{"context":{"level":"warn","file":"vesti ng_balance_evaluator.cpp","line":109,"method":"do_evaluate","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:43"},"format":"","data":{"op":{"fee":{"am ount":50000,"asset_id":"1.3.0"},"vesting_balance":"1.13.30","owner":"1.2.22404","amount":{"amount":"6400000000","asset_id":"1.3.0"}}}},{"context":{"level":"warn","fil e":"evaluator.cpp","line":42,"method":"start_evaluate","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:43"},"format":"","data":{}},{"context":{"level ":"warn","file":"db_block.cpp","line":609,"method":"apply_operation","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:43"},"format":"","data":{}},{"co ntext":{"level":"warn","file":"db_block.cpp","line":592,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:43"},"format":" 
","data":{"trx":{"ref_block_num":13848,"ref_block_prefix":996090894,"expiration":"2015-09-28T20:08:54","operations":[[33,{"fee":{"amount":50000,"asset_id":"1.3.0"},"v esting_balance":"1.13.30","owner":"1.2.22404","amount":{"amount":"6400000000","asset_id":"1.3.0"}}]],"extensions":[],"signatures":["202c842047ea693db88068f8a5cb2e289d 372a8ab1226655695e7db2a2a427c7d41ff6887c12e5769aebc2e4e70953c17b95fd423b44ac03696cd413fd67c55aef"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":495,"met hod":"_apply_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:43"},"format":"","data":{"next_block.block_num()":275996}},{"context":{"level":"w arn","file":"db_block.cpp","line":195,"method":"_push_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T20:08:43"},"format":"","data":{"new_block":{"p revious":"0004361c004527fceb33c4dfe9062a92a6421508","timestamp":"2015-09-28T20:08:42","witness":"1.6.5248","transaction_merkle_root":"00000000000000000000000000000000 00000000","extensions":[],"witness_signature":"1f051e73526e48fd46a4359daaede6e5e5a8b6f451194c24caad9a1287506ef0cd026c0ecdadb5ea35b9843dc0bd68faee59053278a7ce6e571f838 a2d870bebc9","transactions":[]}}},{"context":{"level":"warn","file":"application.cpp","line":428,"method":"handle_block","hostname":"","thread_name":"th_a","timestamp ":"2015-09-28T20:08:43"},"format":"","data":{"blk_msg":{"block":{"previous":"0004361c004527fceb33c4dfe9062a92a6421508","timestamp":"2015-09-28T20:08:42","witness":"1. 6.5248","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f051e73526e48fd46a4359daaede6e5e5a8b6f451194c24caad 9a1287506ef0cd026c0ecdadb5ea35b9843dc0bd68faee59053278a7ce6e571f838a2d870bebc9","transactions":[]},"block_id":"0004361d1c4d414512a8ae8e99ecf2026627d08e"},"sync_mode": true}}]} node.cpp:2959
2015-09-28T20:08:43 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] disconnecting client 104.155.223.175:32832 because it offered us the rejected block node.cpp:3073
....................
2015-09-28T20:08:56 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Successfully pushed sync block 275997 (id:0004361d7bf5aecfd85eac9e45b1d1e84287a640) node.cpp:2935
...................
2015-09-28T20:09:09 p2p:message read_loop process_block_during ] Successfully pushed block 275998 (id:0004361e9fceb5c9825c54df7adb585bc4976602) node.cpp:3254
....................
2015-09-28T20:09:15 p2p:message read_loop process_block_during ] Successfully pushed block 275999 (id:0004361f189606417518259ba750f34aff126187) node.cpp:3254
.....................
.................
...............
b>2015-09-28T20:26:24 p2p:message read_loop process_block_during ] Successfully pushed block 276099 (id:000436831df20b58af1fda3d1439dd6c1423ec24) node.cpp:3254
...............
From the log, the node received the correct block 275997 (0004361d1c4d414512a8ae8e99ecf2026627d08e) but rejected it, then switched to the wrong block: 0004361d7bf5aecfd85eac9e45b1d1e84287a640.
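For context, the assert that rejected block 275997 (`vbo.is_withdraw_allowed( now, op.amount )` in vesting_balance_evaluator.cpp) is a coin-seconds vesting gate. Below is a simplified sketch of such a check, using the numbers from the rejected withdraw operation in the log; the function and its signature are illustrative, not the actual graphene implementation.

```python
def is_withdraw_allowed(balance, vesting_seconds, coin_seconds_earned,
                        last_update, now, amount):
    # Sketch of a coin-seconds (coin-age) vesting rule. Accrue coin-seconds
    # since the last update, capped at the most the current balance can
    # ever earn (balance * vesting_seconds).
    earned = coin_seconds_earned + balance * (now - last_update)
    earned = min(earned, balance * vesting_seconds)
    # Withdrawing `amount` consumes amount * vesting_seconds coin-seconds,
    # and can never exceed the balance itself.
    return amount <= balance and amount * vesting_seconds <= earned

# Values from the rejected operation on vesting balance 1.13.30
# (last update 2015-09-28T20:07:51, "now" 2015-09-28T20:08:33 = 42 s later):
allowed = is_withdraw_allowed(
    balance=6392481409,
    vesting_seconds=86400,
    coin_seconds_earned=552223993737600,
    last_update=0,
    now=42,
    amount=6400000000)
print(allowed)  # prints False
```

With these numbers the withdrawal fails twice over: the requested 6400000000 exceeds the 6392481409 balance, and 6400000000 * 86400 coin-seconds is more than the balance could have earned. A node whose clock put `now` elsewhere could evaluate the accrual differently, which is one way such a block can be valid on some nodes and rejected on others.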
I will flood the testnet on 9/29 at 0:10 UTC with 101 VPSs.
2015-09-28T07:02:09 p2p:message read_loop process_ordinary_mes ] client rejected message sent by peer 127.0.0.1:62015, {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":534,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:02:09"},
"format":"(skip & skip_transaction_dupe_check) || trx_idx.indices().get<by_trx_id>().find(trx_id) == trx_idx.indices().get<by_trx_id>().end(): ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":592,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:02:09"},"format":"","data":{"trx":{"ref_block_num":65467,"ref_block_prefix":2922805723,"expiration":"2015-09-28T07:02:48","operations":[[1,{"fee":{"amount":250000,"asset_id":"1.3.0"},"seller":"1.2.17357","amount_to_sell":{"amount":1,"asset_id":"1.3.664"},"min_to_receive":{"amount":100000,"asset_id":"1.3.0"},"expiration":"2015-09-28T07:03:43","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["1f17bdb09411ca7eeaf6c7e28b81222ae240c68dddd71d4ce6a68049c38b9429420e5ab73e01ca705fa52ced97a1b723069eac4c8fb3683df09ae6e168d8ce5a87"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":214,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:02:09"},"format":"","data":{"trx":{"ref_block_num":65467,"ref_block_prefix":2922805723,"expiration":"2015-09-28T07:02:48","operations":[[1,{"fee":{"amount":250000,"asset_id":"1.3.0"},"seller":"1.2.17357","amount_to_sell":{"amount":1,"asset_id":"1.3.664"},"min_to_receive":{"amount":100000,"asset_id":"1.3.0"},"expiration":"2015-09-28T07:03:43","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["1f17bdb09411ca7eeaf6c7e28b81222ae240c68dddd71d4ce6a68049c38b9429420e5ab73e01ca705fa52ced97a1b723069eac4c8fb3683df09ae6e168d8ce5a87"]}}},{"context":{"level":"warn","file":"application.cpp","line":434,"method":"handle_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:02:09"},"format":"","data":{"transaction_message":{"trx":{"ref_block_num":65467,"ref_block_prefix":2922805723,"expiration":"2015-09-28T07:02:48","operations":[[1,{"fee":{"amount":250000,"asset_id":"1.3.0"},"seller":"1.2.17357","amount_to_sell":{"amount":1,"asset_id":"1.3.664"},"min_to
_receive":{"amount":100000,"asset_id":"1.3.0"},"expiration":"2015-09-28T07:03:43","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["1f17bdb09411ca7eeaf6c7e28b81222ae240c68dddd71d4ce6a68049c38b9429420e5ab73e01ca705fa52ced97a1b723069eac4c8fb3683df09ae6e168d8ce5a87"]}}}}]} node.cpp:3759
2015-09-28T07:03:13 p2p:message read_loop process_ordinary_mes ] client rejected message sent by peer 104.236.11.171:48991, {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":562,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:03:13"},
"format":"now <= trx.expiration: ","data":{"now":"2015-09-28T07:03:09","trx.exp":"2015-09-28T07:03:00"}},{"context":{"level":"warn","file":"db_block.cpp","line":592,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:03:13"},"format":"","data":{"trx":{"ref_block_num":65475,"ref_block_prefix":3967048843,"expiration":"2015-09-28T07:03:00","operations":[[1,{"fee":{"amount":250000,"asset_id":"1.3.0"},"seller":"1.2.17357","amount_to_sell":{"amount":1,"asset_id":"1.3.664"},"min_to_receive":{"amount":100000,"asset_id":"1.3.0"},"expiration":"2015-09-28T07:04:19","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["1f51cc04c73f2236caa7da4ce80eb8134b5065c632fae74f8f16e839a819f511822a5d375a3457be4fe8028686b6766bcd7c3d77fce19ef35c609c0a15f80fb76f"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":214,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:03:13"},"format":"","data":{"trx":{"ref_block_num":65475,"ref_block_prefix":3967048843,"expiration":"2015-09-28T07:03:00","operations":[[1,{"fee":{"amount":250000,"asset_id":"1.3.0"},"seller":"1.2.17357","amount_to_sell":{"amount":1,"asset_id":"1.3.664"},"min_to_receive":{"amount":100000,"asset_id":"1.3.0"},"expiration":"2015-09-28T07:04:19","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["1f51cc04c73f2236caa7da4ce80eb8134b5065c632fae74f8f16e839a819f511822a5d375a3457be4fe8028686b6766bcd7c3d77fce19ef35c609c0a15f80fb76f"]}}},{"context":{"level":"warn","file":"application.cpp","line":434,"method":"handle_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:03:13"},"format":"","data":{"transaction_message":{"trx":{"ref_block_num":65475,"ref_block_prefix":3967048843,"expiration":"2015-09-28T07:03:00","operations":[[1,{"fee":{"amount":250000,"asset_id":"1.3.0"},"seller":"1.2.17357","amount_to_sell":{"amount":1,"asset_id":"1.3.664"},"min_to_receive":{"amount":100000,"asset_id":"1.3.0"},"exp
iration":"2015-09-28T07:04:19","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["1f51cc04c73f2236caa7da4ce80eb8134b5065c632fae74f8f16e839a819f511822a5d375a3457be4fe8028686b6766bcd7c3d77fce19ef35c609c0a15f80fb76f"]}}}}]} node.cpp:3759
2015-09-28T07:04:23 p2p:message read_loop process_ordinary_mes ] client rejected message sent by peer 127.0.0.1:62015, {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":555,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:04:23"},
"format":"trx.ref_block_prefix == tapos_block_summary.block_id._hash[1]: ","data":{}},{"context":{"level":"warn","file":"db_block.cpp","line":592,"method":"_apply_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:04:23"},"format":"","data":{"trx":{"ref_block_num":65498,"ref_block_prefix":3583879970,"expiration":"2015-09-28T07:04:43","operations":[[1,{"fee":{"amount":250000,"asset_id":"1.3.0"},"seller":"1.2.17357","amount_to_sell":{"amount":1,"asset_id":"1.3.664"},"min_to_receive":{"amount":100000,"asset_id":"1.3.0"},"expiration":"2015-09-28T07:05:58","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["20332b3cbc6e337b00ae7b5ebc2e4cf2903fdbd4046558fb51496aee33e45ece6032386c5b3979c85ff1035e6561bb823198fb47f52e2dec224ba48452dd572c53"]}}},{"context":{"level":"warn","file":"db_block.cpp","line":214,"method":"push_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:04:23"},"format":"","data":{"trx":{"ref_block_num":65498,"ref_block_prefix":3583879970,"expiration":"2015-09-28T07:04:43","operations":[[1,{"fee":{"amount":250000,"asset_id":"1.3.0"},"seller":"1.2.17357","amount_to_sell":{"amount":1,"asset_id":"1.3.664"},"min_to_receive":{"amount":100000,"asset_id":"1.3.0"},"expiration":"2015-09-28T07:05:58","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["20332b3cbc6e337b00ae7b5ebc2e4cf2903fdbd4046558fb51496aee33e45ece6032386c5b3979c85ff1035e6561bb823198fb47f52e2dec224ba48452dd572c53"]}}},{"context":{"level":"warn","file":"application.cpp","line":434,"method":"handle_transaction","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:04:23"},"format":"","data":{"transaction_message":{"trx":{"ref_block_num":65498,"ref_block_prefix":3583879970,"expiration":"2015-09-28T07:04:43","operations":[[1,{"fee":{"amount":250000,"asset_id":"1.3.0"},"seller":"1.2.17357","amount_to_sell":{"amount":1,"asset_id":"1.3.664"},"min_to_receive":{"amount":100000,"asset_id":"1.3.0"},"expiration":"2015-09-2
8T07:05:58","fill_or_kill":false,"extensions":[]}]],"extensions":[],"signatures":["20332b3cbc6e337b00ae7b5ebc2e4cf2903fdbd4046558fb51496aee33e45ece6032386c5b3979c85ff1035e6561bb823198fb47f52e2dec224ba48452dd572c53"]}}}}]} node.cpp:3759
2015-09-28T07:22:20 p2p:send_sync_block_to_node_delegate send_sync_block_to_n ] Failed to push sync block 262132 (id:0003fff4f020495812404736ce0e866d4801c01b): client rejected sync block sent by peer: {"code":10,"name":"assert_exception","message":"Assert Exception","stack":[{"context":{"level":"error","file":"db_block.cpp","line":613,"method":"validate_block_header","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:22:20"},
"format":"head_block_id() == next_block.previous: ","data":{"head_block_id":"0003fff3004e3e0a5eed50770023bf74e4c246f1","next.prev":"0003fff36a9b0c61dd3922f19fe1f4a040ff1ae2"}},{"context":{"level":"warn","file":"db_block.cpp","line":495,"method":"_apply_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:22:20"},"format":"","data":{"next_block.block_num()":262132}},{"context":{"level":"warn","file":"db_block.cpp","line":195,"method":"_push_block","hostname":"","thread_name":"th_a","timestamp":"2015-09-28T07:22:20"},"format":"","data":{"new_block":{"previous":"0003fff36a9b0c61dd3922f19fe1f4a040ff1ae2","timestamp":"2015-09-28T07:06:12","witness":"1.6.2","transaction_merkle_root":"93a42a8d8c1db93d55ce90defea956544834b5e0","extensions":[],"witness_signature":"20045fae9169b63f0d2f764cfbacf9be2ab7d7044192158b6cf9eca676079b3e302c0f04fa6dd99913c3ef53f7f8423d9809932d3457747dd96fcf920cc920ebbf","transactions":[.....]}
During the spam I observed high latency. My block-producing node is dead:
1410962ms th_a application.cpp:388 handle_block ] Got block #281678 with time 2015-09-29T01:23:27 from network with latency of 3975 ms from spartako
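For reference, that latency figure is just the gap between the block's timestamp and when the node received it; with 3-second blocks, 3975 ms means the block arrived after its successor was already due. A sketch (the receipt time here is back-computed from the reported latency, not taken from the log):

```python
from datetime import datetime, timezone

# handle_block's "latency" = local receipt time - block timestamp.
block_time = datetime(2015, 9, 29, 1, 23, 27, tzinfo=timezone.utc)
received = datetime(2015, 9, 29, 1, 23, 30, 975000, tzinfo=timezone.utc)  # assumed
latency_ms = (received - block_time).total_seconds() * 1000
print(int(latency_ms))  # 3975 -- more than one 3-second block interval behind
```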
Forked, have to resync. Make a backup of the blockchain directory when you're in sync.
can we add the block number for parameter --resync-blockchain?
so we only need to resync from the given block number.
I will flood the testnet at 9/29 0:10 UTC with 101 VPSs. Tips for VPS costs are more than welcome. ;)
BTS ID: clayop
Due to per-region instance limits, I can only manage about 60 VPSs.
Sent you 500 BTS to help with your costs ;D
Matching funds sent.
I will match up to 50,000 BTS thru 23:59 UTC 29 SEP 2015 to support Clayop with his VPS needs. Please do not spam this thread.
Please PM me your Transaction ID and I will match it.
Forked, have to resync. Make a backup of the blockchain directory when you're in sync.
can we add the block number for parameter --resync-blockchain?
so we only need to resync from the given block number.
After a fork, restore from the backup.
It took 75 seconds to replay the 280k-blocks chain.
Maybe on a SLOW VPS, but on my machine it takes a mere 118.79 seconds.
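The two timings work out to quite different replay rates (block counts approximated from the ~280k-block chain mentioned above):

```python
# Rough replay throughput implied by the two quoted timings
# (block count is approximate; both replays were ~280k blocks).
blocks = 280_000
fast = round(blocks / 75)      # the "75 seconds" replay
slow = round(blocks / 118.79)  # the "118.79 seconds" replay
print(fast, slow)  # roughly 3733 vs 2357 blocks per second
```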
@clayop just sent u some bts for VPS donation
Thanks my friend!
Np. Thanks for helping. I'll send u some more in the next day or two.
propose_parameter_change init2 "2015-09-29T19:45:00" { "maintenance_skip_slots": 4, "maintenance_interval" : 1800, "maximum_transaction_size": 65536, "cashback_vesting_period_seconds": 7776000, "witness_pay_per_block": 500000, "committee_proposal_review_period" : 300 } true
approve_proposal init2 1.10.4 {"active_approvals_to_add" : ["init2", "init3", "init4", "init5", "init6", "init7", "init8", "init9", "init10"]} true
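For anyone following along: the 1.10.4 passed to approve_proposal is a Graphene object ID in space.type.instance form. On this testnet 1.10.x are proposal objects and 1.2.x are accounts (the type numbers have shifted between versions, so treat the mapping as era-specific). A trivial parser:

```python
# Graphene object IDs are "space.type.instance" triples.
def parse_object_id(oid: str) -> tuple:
    space, obj_type, instance = (int(part) for part in oid.split("."))
    return space, obj_type, instance

print(parse_object_id("1.10.4"))   # (1, 10, 4): the proposal being approved
print(parse_object_id("1.2.102"))  # (1, 2, 102): an account object
```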
get_block 299579
get_block 299596
riverhead@dedi3890:~/github/graphene/programs/witness_node$ ./witness_node --rpc-endpoint "127.0.0.1:8090" -d test_net_3 -s "104.236.118.105:1776" --genesis-json sep-18-testnet-genesis.json
3284321ms th_a witness.cpp:83 plugin_initialize ] witness plugin: plugin_initialize() begin
3284321ms th_a witness.cpp:93 plugin_initialize ] key_id_to_wif_pair: ["G-------","5-------"]
3284321ms th_a witness.cpp:111 plugin_initialize ] witness plugin: plugin_initialize() end
3284321ms th_a db_management.cpp:131 open ] Old database version detected, reindex is required
3284321ms th_a db_management.cpp:98 wipe ] Wiping database
3284328ms th_a object_database.cpp:81 wipe ] Wiping object_database.
3284375ms th_a application.cpp:242 operator() ] Initializing database...
3320089ms th_a db_management.cpp:147 open ] last_block->id(): 0004a4f61b6d4898f6311bae5fb8a960ffc0a3de last_block->block_num(): 304374
3320120ms th_a thread.cpp:95 thread ] name:ntp tid:140230048286464
3320122ms th_a thread.cpp:95 thread ] name:p2p tid:140230029403904
3320127ms th_a application.cpp:122 reset_p2p_node ] Adding seed node 104.236.118.105:1776
3320128ms th_a application.cpp:134 reset_p2p_node ] Configured p2p node to listen on 0.0.0.0:51599
3320129ms ntp ntp.cpp:177 read_loop ] ntp_delta_time updated to 646 us
3320130ms th_a application.cpp:184 reset_websocket_serv ] Configured websocket rpc to listen on 127.0.0.1:8090
3320130ms th_a witness.cpp:116 plugin_startup ] witness plugin: plugin_startup() begin
3320130ms th_a witness.cpp:123 plugin_startup ] Launching block production for 1 witnesses.
3320130ms th_a witness.cpp:134 plugin_startup ] witness plugin: plugin_startup() end
3320130ms th_a main.cpp:167 main ] Started witness node on a chain with 0 blocks.
3320130ms th_a main.cpp:168 main ] Chain ID is 0f8b631d7a9dfebf16d6776fab96b629a14429762bf9c3eb95db1e4e4af637a4
3321000ms th_a witness.cpp:179 block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
witness_node: /home/riverhead/github/graphene/libraries/chain/block_database.cpp:128: graphene::chain::block_id_type graphene::chain::block_database::fetch_block_id(uint32_t) const: Assertion `block_num != 0' failed.
Aborted
riverhead@dedi3890:~/github/graphene/programs/witness_node$
First try crashed, with the startup log shown above. The second is replaying the blockchain and seems OK.
Blocks are producing again. Updated.
(gdb) bt
#0 0x00007ffff6b88cc9 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#1 0x00007ffff6b8c0d8 in __GI_abort () at abort.c:89
#2 0x00007ffff6b81b86 in __assert_fail_base (fmt=0x7ffff6cd2830 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x2cf35f4 "block_num != 0",
file=file@entry=0x2cf3518 "/home/alt/workspace/dac/graphene/libraries/chain/block_database.cpp", line=line@entry=128,
function=function@entry=0x2cf4e00 <graphene::chain::block_database::fetch_block_id(unsigned int) const::__PRETTY_FUNCTION__> "graphene::chain::block_id_type graphene::chain::block_database::fetch_block_id(uint32_t) const") at assert.c:92
#3 0x00007ffff6b81c32 in __GI___assert_fail (assertion=0x2cf35f4 "block_num != 0",
file=0x2cf3518 "/home/alt/workspace/dac/graphene/libraries/chain/block_database.cpp", line=128,
function=0x2cf4e00 <graphene::chain::block_database::fetch_block_id(unsigned int) const::__PRETTY_FUNCTION__> "graphene::chain::block_id_type graphene::chain::block_database::fetch_block_id(uint32_t) const") at assert.c:101
#4 0x00000000026eeaa2 in graphene::chain::block_database::fetch_block_id (this=0x34fe120, block_num=0)
at /home/alt/workspace/dac/graphene/libraries/chain/block_database.cpp:128
#5 0x00000000024356dd in graphene::chain::database::get_block_id_for_num (this=0x34fde60, block_num=0)
at /home/alt/workspace/dac/graphene/libraries/chain/db_block.cpp:50
#6 0x00000000020b5798 in graphene::app::detail::application_impl::get_blockchain_synopsis (this=0x34fdd30, reference_point=...,
number_of_blocks_after_reference_point=0) at /home/alt/workspace/dac/graphene/libraries/app/application.cpp:692
#7 0x00000000029754fb in graphene::net::detail::statistics_gathering_node_delegate_wrapper::<lambda()>::operator()(void) const (__closure=0x7fffdc0660a8)
at /home/alt/workspace/dac/graphene/libraries/net/node.cpp:5394
#8 0x0000000002986118 in fc::detail::functor_run<graphene::net::detail::statistics_gathering_node_delegate_wrapper::get_blockchain_synopsis(const item_hash_t&, uint32_t)::<lambda()> >::run(void *, void *) (functor=0x7fffdc0660a8, prom=0x7fffdc066190) at /home/alt/workspace/dac/graphene/libraries/fc/include/fc/thread/task.hpp:77
#9 0x000000000274ae45 in fc::task_base::run_impl (this=0x7fffdc0660c8) at /home/alt/workspace/dac/graphene/libraries/fc/src/thread/task.cpp:43
#10 0x000000000274add6 in fc::task_base::run (this=0x7fffdc0660c8) at /home/alt/workspace/dac/graphene/libraries/fc/src/thread/task.cpp:32
#11 0x000000000273ec50 in fc::thread_d::run_next_task (this=0x3538ff0) at /home/alt/workspace/dac/graphene/libraries/fc/src/thread/thread_d.hpp:498
#12 0x000000000273f0f4 in fc::thread_d::process_tasks (this=0x3538ff0) at /home/alt/workspace/dac/graphene/libraries/fc/src/thread/thread_d.hpp:547
#13 0x000000000273e765 in fc::thread_d::start_process_tasks (my=55807984) at /home/alt/workspace/dac/graphene/libraries/fc/src/thread/thread_d.hpp:475
#14 0x0000000002aa7351 in make_fcontext ()
#15 0x0000000000000000 in ?? ()
Special thanks to the five (5) community members taking part in the fund matching offer to support Clayop's efforts with spamming the network from a global VPS entourage. These individuals combined to contribute 9,500 BTS. I kicked in an extra 500 BTS, making it a 5-digit match.
Again, thanks to the community and Clayop.
Best,
Fox
93.3428% 284000 of 304255
1818515ms th_a undo_database.hpp:62 ~session ] 10 assert_exception: Assert Exception
!_disabled:
{}
th_a undo_database.cpp:88 undo
{}
th_a undo_database.cpp:116 undo
terminate called after throwing an instance of 'fc::assert_exception'
Aborted (core dumped)
witness_node: /home/spartako/graphene/libraries/chain/db_market.cpp:290: bool graphene::chain::database::fill_order(const graphene::chain::call_order_object&, const graphene::chain::asset&, const graphene::chain::asset&): Assertion `order.get_collateral() >= pays' failed.
Aborted (core dumped)
witness_node: /home/calabiyau/graphene/libraries/chain/db_market.cpp:290: bool graphene::chain::database::fill_order(const graphene::chain::call_order_object&, const graphene::chain::asset&, const graphene::chain::asset&): Assertion `order.get_collateral() >= pays' failed.
Aborted (core dumped)
witness_node: /home/calabiyau/graphene/libraries/chain/db_market.cpp:290: bool graphene::chain::database::fill_order(const graphene::chain::call_order_object&, const graphene::chain::asset&, const graphene::chain::asset&): Assertion `order.get_collateral() >= pays' failed.
Aborted (core dumped)
witness_node: /home/riverhead/github/graphene/libraries/chain/db_market.cpp:290: bool graphene::chain::database::fill_order(const graphene::chain::call_order_object&, const graphene::chain::asset&, const graphene::chain::asset&): Assertion `order.get_collateral() >= pays' failed.
Aborted
1294000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
1295000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
witness_node: /home/admin/.BitShares2_build/libraries/chain/db_market.cpp:290: bool graphene::chain::database::fill_order(const graphene::chain::call_order_object&, const graphene::chain::asset&, const graphene::chain::asset&): Assertion `order.get_collateral() >= pays' failed.
Program received signal SIGABRT, Aborted.
0x00007ffff6516107 in raise () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt
#0 0x00007ffff6516107 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1 0x00007ffff65174e8 in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2 0x00007ffff650f226 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#3 0x00007ffff650f2d2 in __assert_fail () from /lib/x86_64-linux-gnu/libc.so.6
#4 0x00000000024ec8a3 in graphene::chain::database::fill_order (this=0x360f160, order=..., pays=..., receives=...)
at /home/admin/.BitShares2_build/libraries/chain/db_market.cpp:290
#5 0x00000000024ec173 in graphene::chain::database::match (this=0x360f160, call=..., settle=..., match_price=..., max_settlement=...)
at /home/admin/.BitShares2_build/libraries/chain/db_market.cpp:239
#6 0x00000000024f1f82 in graphene::chain::database::clear_expired_orders (this=0x360f160) at /home/admin/.BitShares2_build/libraries/chain/db_update.cpp:237
#7 0x00000000024d4b90 in graphene::chain::database::_apply_block (this=0x360f160, next_block=...)
at /home/admin/.BitShares2_build/libraries/chain/db_block.cpp:478
#8 0x00000000024d4088 in graphene::chain::database::<lambda()>::operator()(void) const (__closure=0x7fffe7df1330)
at /home/admin/.BitShares2_build/libraries/chain/db_block.cpp:436
#9 0x00000000024f2f55 in graphene::chain::detail::with_skip_flags<graphene::chain::database::apply_block(const graphene::chain::signed_block&, uint32_t)::<lambda()> >(graphene::chain::database &, uint32_t, graphene::chain::database::<lambda()>) (db=..., skip_flags=0, callback=...)
at /home/admin/.BitShares2_build/libraries/chain/include/graphene/chain/db_with.hpp:123
#10 0x00000000024d44ad in graphene::chain::database::apply_block (this=0x360f160, next_block=..., skip=0)
at /home/admin/.BitShares2_build/libraries/chain/db_block.cpp:437
#11 0x00000000024ce648 in graphene::chain::database::_push_block (this=0x360f160, new_block=...)
at /home/admin/.BitShares2_build/libraries/chain/db_block.cpp:185
#12 0x00000000024cdc21 in graphene::chain::database::<lambda()>::<lambda()>::operator()(void) const (__closure=0x7fffe7df1e20)
at /home/admin/.BitShares2_build/libraries/chain/db_block.cpp:112
#13 0x00000000024f2d5e in graphene::chain::detail::without_pending_transactions<graphene::chain::database::push_block(const graphene::chain::signed_block&, uint32_t)::<lambda()>::<lambda()> >(graphene::chain::database &, <unknown type in /home/admin/.BitShares2_bin/witness_node_2015-09-24_test3c-1-gd01fc0a, CU 0xfa3feb, DIE 0x11f4221>, graphene::chain::database::<lambda()>::<lambda()>) (db=...,
pending_transactions=<unknown type in /home/admin/.BitShares2_bin/witness_node_2015-09-24_test3c-1-gd01fc0a, CU 0xfa3feb, DIE 0x11f4221>, callback=...)
at /home/admin/.BitShares2_build/libraries/chain/include/graphene/chain/db_with.hpp:140
#14 0x00000000024cdc92 in graphene::chain::database::<lambda()>::operator()(void) const (__closure=0x7fffe7df1ed0)
at /home/admin/.BitShares2_build/libraries/chain/db_block.cpp:113
#15 0x00000000024f2dd9 in graphene::chain::detail::with_skip_flags<graphene::chain::database::push_block(const graphene::chain::signed_block&, uint32_t)::<lambda()> >(graphene::chain::database &, uint32_t, graphene::chain::database::<lambda()>) (db=..., skip_flags=0, callback=...)
at /home/admin/.BitShares2_build/libraries/chain/include/graphene/chain/db_with.hpp:123
#16 0x00000000024cdce3 in graphene::chain::database::push_block (this=0x360f160, new_block=..., skip=0)
at /home/admin/.BitShares2_build/libraries/chain/db_block.cpp:114
#17 0x00000000024d158b in graphene::chain::database::_generate_block (this=0x360f160, when=..., witness_id=..., block_signing_private_key=..., retry_on_failure=true) at /home/admin/.BitShares2_build/libraries/chain/db_block.cpp:368
#18 0x00000000024d07d3 in graphene::chain::database::<lambda()>::operator()(void) const (__closure=0x7fffe7df2c40) at /home/admin/.BitShares2_build/libraries/chain/db_block.cpp:276
#19 0x00000000024f2ecf in graphene::chain::detail::with_skip_flags<graphene::chain::database::generate_block(fc::time_point_sec, graphene::chain::witness_id_type, const fc::ecc::private_key&, uint32_t)::<lambda()> >(graphene::chain::database &, uint32_t, graphene::chain::database::<lambda()>) (db=..., skip_flags=0, callback=...) at /home/admin/.BitShares2_build/libraries/chain/include/graphene/chain/db_with.hpp:123
#20 0x00000000024d089f in graphene::chain::database::generate_block (this=0x360f160, when=..., witness_id=..., block_signing_private_key=..., skip=0) at /home/admin/.BitShares2_build/libraries/chain/db_block.cpp:277
#21 0x00000000024c218f in graphene::witness_plugin::witness_plugin::maybe_produce_block (this=0x3618410, capture=...) at /home/admin/.BitShares2_build/libraries/plugins/witness/witness.cpp:276
#22 0x00000000024bff51 in graphene::witness_plugin::witness_plugin::block_production_loop (this=0x3618410) at /home/admin/.BitShares2_build/libraries/plugins/witness/witness.cpp:160
#23 0x00000000024bfe19 in graphene::witness_plugin::witness_plugin::<lambda()>::operator()(void) const (__closure=0x1639d188) at /home/admin/.BitShares2_build/libraries/plugins/witness/witness.cpp:150
#24 0x00000000024c3288 in fc::detail::void_functor_run<graphene::witness_plugin::witness_plugin::schedule_production_loop()::<lambda()> >::run(void *, void *) (functor=0x1639d188, prom=0x1639d180)
at /home/admin/.BitShares2_build/libraries/fc/include/fc/thread/task.hpp:83
#25 0x00000000027e209d in fc::task_base::run_impl (this=0x1639d190) at /home/admin/.BitShares2_build/libraries/fc/src/thread/task.cpp:43
#26 0x00000000027e202c in fc::task_base::run (this=0x1639d190) at /home/admin/.BitShares2_build/libraries/fc/src/thread/task.cpp:32
#27 0x00000000027d608a in fc::thread_d::run_next_task (this=0x364d2f0) at /home/admin/.BitShares2_build/libraries/fc/src/thread/thread_d.hpp:498
#28 0x00000000027d6554 in fc::thread_d::process_tasks (this=0x364d2f0) at /home/admin/.BitShares2_build/libraries/fc/src/thread/thread_d.hpp:547
#29 0x00000000027d5b7e in fc::thread_d::start_process_tasks (my=56939248) at /home/admin/.BitShares2_build/libraries/fc/src/thread/thread_d.hpp:475
#30 0x0000000002b50711 in make_fcontext ()
#31 0x00010102464c457f in ?? ()
#32 0x0000000000000000 in ?? ()
(gdb)
{<graphene::chain::block_header> = {previous = {_hash = {3201106944, 1397224070, 591270816, 3178160838, 2663318844}}, timestamp = {utc_seconds = 1443601296} [...]
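The raw utc_seconds in that header dump decodes to a timestamp consistent with the crash window in the surrounding logs:

```python
from datetime import datetime, timezone

# Block headers store the timestamp as raw seconds since the Unix epoch.
utc_seconds = 1443601296  # from the gdb header dump above
iso = datetime.fromtimestamp(utc_seconds, tz=timezone.utc).isoformat()
print(iso)  # 2015-09-30T08:21:36+00:00
```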
1244000ms th_a witness.cpp:182 block_production_loo ] Not producing block because it isn't my turn
witness_node: /home/ihashfury/tmp/graphene/libraries/chain/db_market.cpp:290: bool graphene::chain::database::fill_order(const graphene::chain::call_order_object&, const graphene::chain::asset&, const graphene::chain::asset&): Assertion `order.get_collateral() >= pays' failed.
Aborted (core dumped)
1272262ms th_a application.cpp:388 handle_block ] Got block #314815 with time 2015-09-30T08:21:12 from network with latency of 263 ms from init0
witness_node: /home/ihashfury/tmp/graphene/libraries/chain/db_market.cpp:290: bool graphene::chain::database::fill_order(const graphene::chain::call_order_object&, const graphene::chain::asset&, const graphene::chain::asset&): Assertion `order.get_collateral() >= pays' failed.
Aborted
I woke up this morning and found my witness had been unable to sync to the correct fork all night.
I have a while [ True ] loop in my bash script, so if it crashes it starts again, but every time it gives this error and starts trying to resync again (without success):
93.3428% 284000 of 304255
1818515ms th_a undo_database.hpp:62 ~session ] 10 assert_exception: Assert Exception
!_disabled:
{}
th_a undo_database.cpp:88 undo
{}
th_a undo_database.cpp:116 undo
terminate called after throwing an instance of 'fc::assert_exception'
Aborted (core dumped)
Now the witness is updated to the new master.
All of my updated nodes also crashed. I am now unable to --resync-blockchain. I only have a single seed node listed in my startup script, but it is not responding.
Please post your peers so that I may sync.
"head_block_num": 0,
"head_block_id": "0000000000000000000000000000000000000000",
"head_block_age": "12 days old",
"next_maintenance_time": "45 years ago",
"chain_id": "0f8b631d7a9dfebf16d6776fab96b629a14429762bf9c3eb95db1e4e4af637a4",
"participation": "100.00000000000000000",
1179254ms th_a application.cpp:388 handle_block ] Got block #314806 with time 2015-09-30T08:19:39 from network with latency of 255 ms from delegate-dev3.btsnow
1179486ms th_a application.cpp:518 get_item ] Serving up block #314806
1182227ms th_a application.cpp:388 handle_block ] Got block #314807 with time 2015-09-30T08:19:42 from network with latency of 227 ms from init6
1182462ms th_a application.cpp:518 get_item ] Serving up block #314807
1185107ms th_a application.cpp:388 handle_block ] Got block #314808 with time 2015-09-30T08:19:45 from network with latency of 108 ms from init11
1185338ms th_a application.cpp:518 get_item ] Serving up block #314808
1188176ms th_a application.cpp:388 handle_block ] Got block #314809 with time 2015-09-30T08:19:48 from network with latency of 177 ms from init1
1188411ms th_a application.cpp:518 get_item ] Serving up block #314809
1191249ms th_a application.cpp:388 handle_block ] Got block #314810 with time 2015-09-30T08:19:51 from network with latency of 250 ms from init2
1194378ms th_a application.cpp:388 handle_block ] Got block #314811 with time 2015-09-30T08:19:54 from network with latency of 379 ms from delegate-clayop
1197207ms th_a application.cpp:388 handle_block ] Got block #314812 with time 2015-09-30T08:19:57 from network with latency of 207 ms from wackou
1200221ms th_a application.cpp:388 handle_block ] Got block #314813 with time 2015-09-30T08:20:00 from network with latency of 222 ms from init4
1200461ms th_a application.cpp:518 get_item ] Serving up block #314813
1203250ms th_a application.cpp:388 handle_block ] Got block #314814 with time 2015-09-30T08:20:03 from network with latency of 251 ms from init3
1207182ms th_a application.cpp:388 handle_block ] Got block #314814 with time 2015-09-30T08:20:06 from network with latency of 1182 ms from mr.agsexplorer
2209930ms th_a application.cpp:699 get_blockchain_synop ] synopsis: ["0004cdb3d2d7eb80189f83633c4cb9edfa428ca8","0004cdb90bdf86f48dd2d6695fe8f5edb3322e67","0004cdbcecb0901e9e26dabd0c5610f648a14108","0004cdbe86f24753a0133e23c6e26ebd3c05bf9e"]
...890491ms th_a application.cpp:518 get_item ] Serving up block #314811
890491ms th_a application.cpp:518 get_item ] Serving up block #314812
890491ms th_a application.cpp:518 get_item ] Serving up block #314813
890491ms th_a application.cpp:518 get_item ] Serving up block #314814
898941ms th_a application.cpp:699 get_blockchain_synop ] synopsis: ["0004cdb4a094bb65e4ddf13138fca35a9c5b4141","0004cdbab1c582f8ccc1e6396f0d62cb9c047323","0004cdbde932bd1fbe5e160b4dd22b1a9b1f8135","0004cdbe86f24753a0133e23c6e26ebd3c05bf9e"]
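Decoding the heights embedded in those synopsis entries shows the expected shape: entries cluster near the head and spread out going back, which is how peers narrow down a common ancestor. Reusing the first-8-hex-chars trick:

```python
# Synopsis entries are block IDs; the first 8 hex chars are the height.
synopsis = [
    "0004cdb4a094bb65e4ddf13138fca35a9c5b4141",
    "0004cdbab1c582f8ccc1e6396f0d62cb9c047323",
    "0004cdbde932bd1fbe5e160b4dd22b1a9b1f8135",
    "0004cdbe86f24753a0133e23c6e26ebd3c05bf9e",
]
heights = [int(block_id[:8], 16) for block_id in synopsis]
print(heights)  # [314804, 314810, 314813, 314814]
```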
witness_node: /home/daniel/Crypto/graphene/libraries/chain/db_market.cpp:290: bool graphene::chain::database::fill_order(const graphene::chain::call_order_object&, const graphene::chain::asset&, const graphene::chain::asset&): Assertion `order.get_collateral() >= pays' failed.
./run_node: line 1: 1063 Aborted (core dumped)
last_block->id(): 0004cdbde932bd1fbe5e160b4dd22b1a9b1f8135 last_block->block_num(): 314813
"head_block_num": 314836,
"head_block_id": "0004cdd471f6d83da5ddcfc8d93b65262538643a",
"head_block_age": "4 hours old"
"head_block_num": 314814,
"head_block_id": "0004cdbe86f24753a0133e23c6e26ebd3c05bf9e",
"head_block_age": "4 hours old",
All of my nodes were hung when I came in this morning. Looks like a major issue was discovered and we are looking into the cause.
Based upon the crash reports it looks like someone attempted a force settle.
"head_block_num": 313187,
"head_block_id": "0004c763d993633488acb5fdd6ec362112aae9b9",
"head_block_age": "6 hours old",
Server has disconnected us.
9 canceled_exception: Canceled
cancellation reason: [none given]
{"reason":"[none given]"}
th_a thread_d.hpp:463 start_next_fiber
Sounds like progress..
You seem to comfort me.
We are getting closer by the minute
90.212% 284000 of 314814
1858849ms th_a db_block.cpp:263 push_proposal ] e
1858850ms th_a db_update.cpp:140 clear_expired_propos ] Failed to apply proposed transaction on its expiration. Deleting it.
{"id":"1.10.1","expiration_time":"2015-09-29T05:15:00","review_period_time":"2015-09-29T04:15:00","proposed_transaction":{"ref_block_num":0,"ref_block_prefix":0,"expiration":"
2015-09-29T05:15:00","operations":[[31,{"fee":{"amount":1000000,"asset_id":"1.3.0"},"new_parameters":{"current_fees":{"parameters":[[0,{"fee":2000000,"price_per_kbyte":1000000
}],[1,{"fee":500000}],[2,{"fee":0}],[3,{"fee":2000000}],[4,{}],[5,{"basic_fee":0,"premium_fee":200000000,"price_per_kbyte":100000}],[6,{"fee":2000000,"price_per_kbyte":100000}
],[7,{"fee":300000}],[8,{"membership_annual_fee":200000000,"membership_lifetime_fee":1000000000}],[9,{"fee":50000000}],[10,{"symbol3":"50000000000","symbol4":"30000000000","lo
ng_symbol":500000000,"price_per_kbyte":10}],[11,{"fee":50000000,"price_per_kbyte":10}],[12,{"fee":50000000}],[13,{"fee":50000000}],[14,{"fee":2000000,"price_per_kbyte":100000}
],[15,{"fee":2000000}],[16,{"fee":100000}],[17,{"fee":10000000}],[18,{"fee":50000000}],[19,{"fee":100000}],[20,{"fee":500000000}],[21,{"fee":2000000}],[22,{"fee":2000000,"pric
e_per_kbyte":10}],[23,{"fee":100000,"price_per_kbyte":10}],[24,{"fee":100000}],[25,{"fee":100000}],[26,{"fee":2000000}],[27,{"fee":0,"price_per_kbyte":10}],[28,{"fee":50000000
0}],[29,{"fee":100000}],[30,{"fee":100000}],[31,{"fee":2000000}],[32,{"fee":500000000}],[33,{"fee":100000}],[34,{"fee":100000}],[35,{"fee":100000,"price_per_kbyte":10}],[36,{"
fee":2000000}],[37,{}],[38,{"fee":500000,"price_per_kbyte":10}],[39,{"fee":500000,"price_per_output":500000}]],"scale":5000},"block_interval":3,"maintenance_interval":5400,"ma
intenance_skip_slots":4,"committee_proposal_review_period":300,"maximum_transaction_size":65536,"maximum_block_size":10485760,"maximum_time_until_expiration":86400,"maximum_pr
oposal_lifetime":2419200,"maximum_asset_whitelist_authorities":10,"maximum_asset_feed_publishers":10,"maximum_witness_count":1001,"maximum_committee_count":1001,"maximum_autho
rity_membership":10,"reserve_percent_of_fee":2000,"network_percent_of_fee":2000,"lifetime_referrer_percent_of_fee":3000,"cashback_vesting_period_seconds":7776000,"cashback_ves
ting_threshold":10000000,"count_non_member_votes":true,"allow_non_member_whitelists":false,"witness_pay_per_block":500000,"worker_budget_per_day":"50000000000","max_predicate_
opcode":1,"fee_liquidation_threshold":10000000,"accounts_per_fee_scale":1000,"account_fee_scale_bitshifts":4,"max_authority_depth":2,"extensions":[]}}]],"extensions":[]},"requ
ired_active_approvals":["1.2.0"],"available_active_approvals":["1.2.102","1.2.103","1.2.104","1.2.105","1.2.106","1.2.107","1.2.108","1.2.109","1.2.110"],"required_owner_appro
vals":[],"available_owner_approvals":[],"available_key_approvals":[]}
10 assert_exception: Assert Exception
itr->get_balance() >= -delta: Insufficient Balance: committee-account's balance of 0 CORE is less than required 10 CORE
{"a":"committee-account","b":"0 CORE","r":"10 CORE"}
th_a db_balance.cpp:67 adjust_balance
{"account":"1.2.0","delta":{"amount":-1000000,"asset_id":"1.3.0"}}
th_a db_balance.cpp:73 adjust_balance
{}
th_a evaluator.cpp:42 start_evaluate
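The "10 CORE" in that assert is consistent with the schedule being proposed: the parameter-change fee is 2000000 raw units, the schedule's scale of 5000 halves it (scale appears to be in units of 1/10000, if I read fee_schedule correctly), and CORE has 5 decimal places. A quick check of the arithmetic, with both the scale semantics and the precision treated as assumptions:

```python
# Reconstructing the 10 CORE fee from the proposed schedule above.
raw_fee = 2_000_000   # [31, {"fee": 2000000}] in the proposed fee schedule
scale = 5_000         # "scale": 5000 -> fees multiplied by 5000/10000 (assumed)
precision = 5         # CORE/BTS decimal places (assumed)
charged = raw_fee * scale // 10_000
print(charged)                    # 1000000, matching "amount": -1000000 above
print(charged / 10 ** precision)  # 10.0 CORE, matching the failed assert
```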
intenance_skip_slots":4,"committee_proposal_review_period":300,"maximum_transaction_size":65536,"maximum_block_size":10485760,"maximum_time_until_expiration":86400,"[540/1909]
oposal_lifetime":2419200,"maximum_asset_whitelist_authorities":10,"maximum_asset_feed_publishers":10,"maximum_witness_count":1001,"maximum_committee_count":1001,"maximum_autho
rity_membership":10,"reserve_percent_of_fee":2000,"network_percent_of_fee":2000,"lifetime_referrer_percent_of_fee":3000,"cashback_vesting_period_seconds":7776000,"cashback_ves
ting_threshold":10000000,"count_non_member_votes":true,"allow_non_member_whitelists":false,"witness_pay_per_block":500000,"worker_budget_per_day":"50000000000","max_predicate_
opcode":1,"fee_liquidation_threshold":10000000,"accounts_per_fee_scale":1000,"account_fee_scale_bitshifts":4,"max_authority_depth":2,"extensions":[]}}]],"extensions":[]},"requ
ired_active_approvals":["1.2.0"],"available_active_approvals":["1.2.102","1.2.103","1.2.104","1.2.105","1.2.106","1.2.107","1.2.108","1.2.109","1.2.110"],"required_owner_appro
vals":[],"available_owner_approvals":[],"available_key_approvals":[]}
10 assert_exception: Assert Exception
itr->get_balance() >= -delta: Insufficient Balance: committee-account's balance of 0 CORE is less than required 10 CORE
{"a":"committee-account","b":"0 CORE","r":"10 CORE"}
th_a db_balance.cpp:67 adjust_balance
{"account":"1.2.0","delta":{"amount":-1000000,"asset_id":"1.3.0"}}
th_a db_balance.cpp:73 adjust_balance
{}
th_a evaluator.cpp:42 start_evaluate
{}
th_a db_block.cpp:621 apply_operation
{"proposal":{"id":"1.10.1","expiration_time":"2015-09-29T05:15:00","review_period_time":"2015-09-29T04:15:00","proposed_transaction":{"ref_block_num":0,"ref_block_prefix":
0,"expiration":"2015-09-29T05:15:00","operations":[[31,{"fee":{"amount":1000000,"asset_id":"1.3.0"},"new_parameters":{"current_fees":{"parameters":[[0,{"fee":2000000,"price_pe
r_kbyte":1000000}],[1,{"fee":500000}],[2,{"fee":0}],[3,{"fee":2000000}],[4,{}],[5,{"basic_fee":0,"premium_fee":200000000,"price_per_kbyte":100000}],[6,{"fee":2000000,"price_pe
r_kbyte":100000}],[7,{"fee":300000}],[8,{"membership_annual_fee":200000000,"membership_lifetime_fee":1000000000}],[9,{"fee":50000000}],[10,{"symbol3":"50000000000","symbol4":"
30000000000","long_symbol":500000000,"price_per_kbyte":10}],[11,{"fee":50000000,"price_per_kbyte":10}],[12,{"fee":50000000}],[13,{"fee":50000000}],[14,{"fee":2000000,"price_pe
r_kbyte":100000}],[15,{"fee":2000000}],[16,{"fee":100000}],[17,{"fee":10000000}],[18,{"fee":50000000}],[19,{"fee":100000}],[20,{"fee":500000000}],[21,{"fee":2000000}],[22,{"fe
e":2000000,"price_per_kbyte":10}],[23,{"fee":100000,"price_per_kbyte":10}],[24,{"fee":100000}],[25,{"fee":100000}],[26,{"fee":2000000}],[27,{"fee":0,"price_per_kbyte":10}],[28
,{"fee":500000000}],[29,{"fee":100000}],[30,{"fee":100000}],[31,{"fee":2000000}],[32,{"fee":500000000}],[33,{"fee":100000}],[34,{"fee":100000}],[35,{"fee":100000,"price_per_kb
yte":10}],[36,{"fee":2000000}],[37,{}],[38,{"fee":500000,"price_per_kbyte":10}],[39,{"fee":500000,"price_per_output":500000}]],"scale":5000},"block_interval":3,"maintenance_in
terval":5400,"maintenance_skip_slots":4,"committee_proposal_review_period":300,"maximum_transaction_size":65536,"maximum_block_size":10485760,"maximum_time_until_expiration":8
6400,"maximum_proposal_lifetime":2419200,"maximum_asset_whitelist_authorities":10,"maximum_asset_feed_publishers":10,"maximum_witness_count":1001,"maximum_committee_count":100
1,"maximum_authority_membership":10,"reserve_percent_of_fee":2000,"network_percent_of_fee":2000,"lifetime_referrer_percent_of_fee":3000,"cashback_vesting_period_seconds":77760
00,"cashback_vesting_threshold":10000000,"count_non_member_votes":true,"allow_non_member_whitelists":false,"witness_pay_per_block":500000,"worker_budget_per_day":"50000000000"
,"max_predicate_opcode":1,"fee_liquidation_threshold":10000000,"accounts_per_fee_scale":1000,"account_fee_scale_bitshifts":4,"max_authority_depth":2,"extensions":[]}}]],"exten
sions":[]},"required_active_approvals":["1.2.0"],"available_active_approvals":["1.2.102","1.2.103","1.2.104","1.2.105","1.2.106","1.2.107","1.2.108","1.2.109","1.2.110"],"requ
ired_owner_approvals":[],"available_owner_approvals":[],"available_key_approvals":[]}}
th_a db_block.cpp:269 push_proposal
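The assert above is simple arithmetic: the proposed operation carries a fee of 1,000,000 base units of asset 1.3.0, and committee-account (1.2.0) holds 0 CORE. Graphene stores amounts as integers in base units; assuming CORE uses precision 5 (consistent with the "required 10 CORE" in the log, though the precision is not shown in this dump), 1,000,000 base units is exactly 10 CORE. A minimal sketch of that conversion:

```python
# Sketch: interpreting the "Insufficient Balance" assert above.
# An asset with precision p has 10**p base units per displayed unit.
# CORE_PRECISION = 5 is an assumption consistent with this log.
CORE_PRECISION = 5

def to_display(base_units: int, precision: int = CORE_PRECISION) -> float:
    """Convert integer base units to human-readable asset units."""
    return base_units / 10 ** precision

fee_delta = -1000000   # "delta" from the adjust_balance log entry
balance = 0            # committee-account's balance, in base units

print(f"required: {to_display(-fee_delta)} CORE, "
      f"available: {to_display(balance)} CORE")
# The check itr->get_balance() >= -delta fails: 0 < 1,000,000 base units.
assert balance < -fee_delta
```

This is why the proposal is deleted on expiration: the paying account cannot cover the 10 CORE fee at apply time.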
"head_block_num": 314814,
"head_block_id": "0004cdbe86f24753a0133e23c6e26ebd3c05bf9e",
"head_block_age": "6 hours old",
"next_maintenance_time": "6 hours ago",
"chain_id": "0f8b631d7a9dfebf16d6776fab96b629a14429762bf9c3eb95db1e4e4af637a4",
While replaying the blockchain:
90.212% 284000 of 314814
1858849ms th_a db_block.cpp:263 push_proposal ] e
1858850ms th_a db_update.cpp:140 clear_expired_propos ] Failed to apply proposed transaction on its expiration. Deleting it.
{"id":"1.10.1","expiration_time":"2015-09-29T05:15:00","review_period_time":"2015-09-29T04:15:00","proposed_transaction":{"ref_block_num":0,"ref_block_prefix":0,"expiration":"
2015-09-29T05:15:00","operations":[[31,{"fee":{"amount":1000000,"asset_id":"1.3.0"},"new_parameters":{"current_fees":{"parameters":[[0,{"fee":2000000,"price_per_kbyte":1000000
}],[1,{"fee":500000}],[2,{"fee":0}],[3,{"fee":2000000}],[4,{}],[5,{"basic_fee":0,"premium_fee":200000000,"price_per_kbyte":100000}],[6,{"fee":2000000,"price_per_kbyte":100000}
],[7,{"fee":300000}],[8,{"membership_annual_fee":200000000,"membership_lifetime_fee":1000000000}],[9,{"fee":50000000}],[10,{"symbol3":"50000000000","symbol4":"30000000000","lo
ng_symbol":500000000,"price_per_kbyte":10}],[11,{"fee":50000000,"price_per_kbyte":10}],[12,{"fee":50000000}],[13,{"fee":50000000}],[14,{"fee":2000000,"price_per_kbyte":100000}
],[15,{"fee":2000000}],[16,{"fee":100000}],[17,{"fee":10000000}],[18,{"fee":50000000}],[19,{"fee":100000}],[20,{"fee":500000000}],[21,{"fee":2000000}],[22,{"fee":2000000,"pric
e_per_kbyte":10}],[23,{"fee":100000,"price_per_kbyte":10}],[24,{"fee":100000}],[25,{"fee":100000}],[26,{"fee":2000000}],[27,{"fee":0,"price_per_kbyte":10}],[28,{"fee":50000000
0}],[29,{"fee":100000}],[30,{"fee":100000}],[31,{"fee":2000000}],[32,{"fee":500000000}],[33,{"fee":100000}],[34,{"fee":100000}],[35,{"fee":100000,"price_per_kbyte":10}],[36,{"
fee":2000000}],[37,{}],[38,{"fee":500000,"price_per_kbyte":10}],[39,{"fee":500000,"price_per_output":500000}]],"scale":5000},"block_interval":3,"maintenance_interval":5400,"ma
intenance_skip_slots":4,"committee_proposal_review_period":300,"maximum_transaction_size":65536,"maximum_block_size":10485760,"maximum_time_until_expiration":86400,"maximum_pr
oposal_lifetime":2419200,"maximum_asset_whitelist_authorities":10,"maximum_asset_feed_publishers":10,"maximum_witness_count":1001,"maximum_committee_count":1001,"maximum_autho
rity_membership":10,"reserve_percent_of_fee":2000,"network_percent_of_fee":2000,"lifetime_referrer_percent_of_fee":3000,"cashback_vesting_period_seconds":7776000,"cashback_ves
ting_threshold":10000000,"count_non_member_votes":true,"allow_non_member_whitelists":false,"witness_pay_per_block":500000,"worker_budget_per_day":"50000000000","max_predicate_
opcode":1,"fee_liquidation_threshold":10000000,"accounts_per_fee_scale":1000,"account_fee_scale_bitshifts":4,"max_authority_depth":2,"extensions":[]}}]],"extensions":[]},"requ
ired_active_approvals":["1.2.0"],"available_active_approvals":["1.2.102","1.2.103","1.2.104","1.2.105","1.2.106","1.2.107","1.2.108","1.2.109","1.2.110"],"required_owner_appro
vals":[],"available_owner_approvals":[],"available_key_approvals":[]}
10 assert_exception: Assert Exception
itr->get_balance() >= -delta: Insufficient Balance: committee-account's balance of 0 CORE is less than required 10 CORE
{"a":"committee-account","b":"0 CORE","r":"10 CORE"}
th_a db_balance.cpp:67 adjust_balance
{"account":"1.2.0","delta":{"amount":-1000000,"asset_id":"1.3.0"}}
th_a db_balance.cpp:73 adjust_balance
{}
th_a evaluator.cpp:42 start_evaluate
4. Attempt to force settle resulted in a condition that was previously believed to be impossible (a black swan)
+5% Very good that this happened, so this case can be properly reviewed and re-tested.
Yep, I am glad it occurred on the test network too!
I have just checked in a fix for this issue, but it resulted in a hard fork several days ago because BitUSD technically had a black swan event back then.
Theoretical is working on unit tests for the edge cases we have identified so that we can verify the fix works in all cases.
We will launch a new test network tomorrow based upon this fix and the party can continue.
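For context, a "black swan" here is the condition the force-settle hit: the least-collateralized short position can no longer cover its debt at the feed price, so settling it at that price is impossible without socializing the loss. An illustrative check of that condition (names and numbers are hypothetical, not BitShares source code):

```python
# Illustrative sketch (not BitShares source): a margin position has
# black-swanned when its collateral, valued at the feed price, no
# longer covers its debt -- i.e. the collateral ratio falls below 1.
def is_black_swan(collateral_core: float, debt_bitusd: float,
                  feed_price_usd_per_core: float) -> bool:
    """True if collateral valued at the feed cannot cover the debt."""
    return collateral_core * feed_price_usd_per_core < debt_bitusd

# Hypothetical numbers: 1000 CORE backing 50 BitUSD.
assert not is_black_swan(1000, 50, feed_price_usd_per_core=0.10)  # CR = 2.0
assert is_black_swan(1000, 50, feed_price_usd_per_core=0.04)      # CR = 0.8
```

The unit tests mentioned above presumably exercise exactly these edge cases, where the feed moves the worst position across that boundary.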
Yes, I'm running with a debug build. I remember the log said replay took about 60 seconds, but the CPU stayed at 100% for 75 seconds. Maybe the extra 15 seconds are spent before or after the replay.
It took 75 seconds to replay the 280k-block chain.
Maybe on a SLOW VPS, but on my machine it takes a mere 8.79 seconds.
In Debug build it took me 59 seconds to reindex.
In Release build it took me 8.79 seconds to reindex.
There were a total of 444758 operations in the blockchain which means we are averaging over 50K TPS.
The average maintenance calculation time is: 7 ms
1.5 seconds were spent doing maintenance calculations (on the hour, instead of every day) leaving us at 62K TPS average
The vast majority of blocks have been empty which means our TPS average is being diluted by block processing overhead.
It looks like in Debug mode we process closer to 10K TPS.
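The throughput figures quoted above follow from simple division over the numbers given (444,758 operations, 8.79 s Release-build reindex, ~1.5 s of maintenance). A sketch reproducing them:

```python
# Reproducing the throughput estimates quoted above.
ops = 444758            # total operations in the blockchain
replay_s = 8.79         # Release-build reindex time, seconds
maintenance_s = 1.5     # time spent in maintenance calculations

tps_raw = ops / replay_s
tps_ex_maintenance = ops / (replay_s - maintenance_s)

print(f"raw: {tps_raw:,.0f} TPS")                    # ~50.6K -> "over 50K TPS"
print(f"excl. maintenance: {tps_ex_maintenance:,.0f} TPS")  # ~61K, roughly the quoted 62K

assert tps_raw > 50_000
assert tps_ex_maintenance > 60_000
```

The division gives about 61K TPS excluding maintenance, slightly under the quoted 62K; the quoted figure may round the inputs differently. Either way it supports the point that mostly-empty blocks dilute the average with per-block overhead.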
I have just checked in a fix for this issue, but it resulted in a hard fork several days ago because BitUSD technically had a black swan event back then.
I'm unable to access my server right now. Wish I could be back when the new test network is ready.
... it resulted in a hard fork several days ago because BitUSD technically had a black swan event back then.
@abit: Congratulations ... it seems both of us were the very reason for the first ever blockchain-based black swan event :-P
... it resulted in a hard fork several days ago because BitUSD technically had a black swan event back then.
And why did we not identify the event right after it occurred, rather than only a couple of days later?
Because the market was thin