BitShares Forum
Main => General Discussion => Topic started by: illya13 on May 07, 2016, 04:25:26 pm
-
Hi,
I'm having trouble starting a private testnet across 3 nodes with the default genesis.
I start the 1st node with --enable-stale-production and it begins generating blocks
(using 3 witness IDs on it, 1.6.1 - 1.6.3).
Then I start the 2nd node with another 4 witness IDs (1.6.4 - 1.6.7) and WITHOUT --enable-stale-production.
After some delay, this 2nd node crashes with:
witness_node: /graphene/libraries/chain/block_database.cpp:134: graphene::chain::block_id_type graphene::chain::block_database::fetch_block_id(uint32_t) const: Assertion `block_num != 0' failed.
Any mistakes in my flow?
Thank you
-
Try starting with --resync-blockchain or --replay-blockchain?
By the way, I believe you have specified "-s ip.of.first.node:port" on the second node?
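A sketch of those two suggestions (the binary path and data dir are placeholders for your own setup):

```shell
# Rebuild local chain state by replaying the locally stored blocks:
./programs/witness_node/witness_node --data-dir <your-data-dir> --replay-blockchain

# Or discard local blocks and state entirely, and resync from the network:
./programs/witness_node/witness_node --data-dir <your-data-dir> --resync-blockchain
```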
-
Yes.
I was using 'master' from 'cryptonomex/graphene' compiled with gcc-4.8.
Now I'm trying to build again, but from 'bitshares/bitshares-2' with gcc-4.9.
-
In p2p.log:
2016-05-08T09:29:21 p2p:accept_loop accept_loop ] accepted inbound connection from 127.0.0.1:11010 node.cpp:4178
2016-05-08T09:29:21 p2p:accept_connection_task scope_logger ] entering peer_connection::accept_connection() peer_connection.cpp:190
2016-05-08T09:29:21 p2p:accept_connection_task accept_connection ] established inbound connection from 127.0.0.1:11010, sending hello peer_connection.cpp:213
2016-05-08T09:29:21 p2p:accept_connection_task ~scope_logger ] leaving peer_connection::accept_connection() peer_connection.cpp:191
2016-05-08T09:29:21 p2p:message read_loop on_message ] handling message hello_message_type fa81c4e7e49c7c4a7cdcb4782c5d78e8e926adf0 size 529 from peer 127.0.0.1:11010 node.cpp:1757
2016-05-08T09:29:21 p2p:message read_loop on_hello_message ] Received a hello_message from peer 127.0.0.1:11010, sending reply to accept connection node.cpp:2073
2016-05-08T09:29:21 p2p:message read_loop on_message ] handling message connection_accepted_message_type 9c1185a5c5e9fc54612808977ee8f548b2258d31 size 0 from peer 127.0.0.1:11010 node.cpp:1757
2016-05-08T09:29:21 p2p:message read_loop on_connection_accept ] Received a connection_accepted in response to my "hello" from 127.0.0.1:11010 node.cpp:2098
2016-05-08T09:29:21 p2p:message read_loop on_message ] handling message address_request_message_type 9c1185a5c5e9fc54612808977ee8f548b2258d31 size 0 from peer 127.0.0.1:11010 node.cpp:1757
2016-05-08T09:29:21 p2p:message read_loop on_address_request_m ] Received an address request message node.cpp:2155
2016-05-08T09:29:21 p2p:message read_loop on_message ] handling message address_message_type c81b94933420221a7ac004a90242d8b1d3e5070d size 1 from peer 127.0.0.1:11010 node.cpp:1757
2016-05-08T09:29:21 p2p:message read_loop on_address_message ] Received an address message containing 0 addresses node.cpp:2184
2016-05-08T09:29:21 p2p:message read_loop read_loop ] message transmission failed 10 assert_exception: Assert Exception
e.block_id != block_id_type(): Empty block_id in block_database (maybe corrupt on disk?)
{}
th_a block_database.cpp:144 fetch_block_id
{"block_num":0}
th_a db_block.cpp:61 get_block_id_for_num
{}
th_a application.cpp:844 get_blockchain_synopsis message_oriented_connection.cpp:188
2016-05-08T09:29:21 p2p:message read_loop read_loop ] disconnected 10 assert_exception: Assert Exception
e.block_id != block_id_type(): Empty block_id in block_database (maybe corrupt on disk?)
{}
th_a block_database.cpp:144 fetch_block_id
{"block_num":0}
th_a db_block.cpp:61 get_block_id_for_num
{}
th_a application.cpp:844 get_blockchain_synopsis message_oriented_connection.cpp:205
2016-05-08T09:29:21 p2p:message read_loop on_connection_closed ] Remote peer 127.0.0.1:11010 closed their connection to us node.cpp:2960
2016-05-08T09:29:21 p2p:message read_loop display_current_conn ] Currently have 0 of [20/200] connections node.cpp:1731
2016-05-08T09:29:21 p2p:message read_loop display_current_conn ] my id is 51fc7c0537feccf9a92390c22941393a5645e60c2b6632d589748dce6d3e28a219 node.cpp:1732
2016-05-08T09:29:21 p2p:message read_loop trigger_p2p_network_ ] Triggering connect loop now node.cpp:988
2016-05-08T09:29:21 p2p:message read_loop schedule_peer_for_de ] scheduling peer for deletion: 127.0.0.1:11010 (this will not block) node.cpp:1640
2016-05-08T09:29:21 p2p:message read_loop schedule_peer_for_de ] asyncing delayed_peer_deletion_task to delete 1 peers node.cpp:1645
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task delayed_peer_deletio ] beginning an iteration of delayed_peer_deletion_task with 1 in queue node.cpp:1604
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task destroy ] calling close_connection() peer_connection.cpp:127
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task destroy ] close_connection completed normally peer_connection.cpp:129
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task destroy ] canceling _send_queued_messages task peer_connection.cpp:142
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task destroy ] cancel_and_wait completed normally peer_connection.cpp:144
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task destroy ] canceling accept_or_connect_task peer_connection.cpp:157
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task destroy ] accept_or_connect_task completed normally peer_connection.cpp:159
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task destroy_connection ] in destroy_connection() for message_oriented_connection.cpp:286
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":207,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"2016-05-08T09:29:21"},"format":"disconnected: ${e}","data":{"e":"10 assert_exception: Assert Exception\ne.block_id != block_id_type(): Empty block_id in block_database (maybe corrupt on disk?)\n {}\n th_a block_database.cpp:144 fetch_block_id\n\n {\"block_num\":0}\n th_a db_block.cpp:61 get_block_id_for_num\n\n {}\n th_a application.cpp:844 get_blockchain_synopsis"}}]} message_oriented_connection.cpp:299
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task destroy_connection ] in destroy_connection() for message_oriented_connection.cpp:286
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task destroy_connection ] Exception thrown while canceling message_oriented_connection's read_loop, ignoring: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":207,"method":"read_loop","hostname":"","thread_name":"p2p","timestamp":"2016-05-08T09:29:21"},"format":"disconnected: ${e}","data":{"e":"10 assert_exception: Assert Exception\ne.block_id != block_id_type(): Empty block_id in block_database (maybe corrupt on disk?)\n {}\n th_a block_database.cpp:144 fetch_block_id\n\n {\"block_num\":0}\n th_a db_block.cpp:61 get_block_id_for_num\n\n {}\n th_a application.cpp:844 get_blockchain_synopsis"}}]} message_oriented_connection.cpp:299
2016-05-08T09:29:21 p2p:delayed_peer_deletion_task delayed_peer_deletio ] leaving delayed_peer_deletion_task node.cpp:1607
-
Pinging @xeroc @cube.
-
Corrupted chain, or did you compile & run in 64-bit?
-
Hi, it seems I found the issue.
I was deleting 'object_database' in / after initialization rounds,
thinking it would be moved or placed into the 'data/blockchain/object_database' dir.
But it looks like the one in the root is still used somehow.
PS. The / is because I'm running it from /, like:
/bitshares-2/programs/witness_node/witness_node --data-dir /data -w '"1.6.1"' -w '"1.6.2"' -w '"1.6.3"' --enable-stale-production
-
Still not working!
Another issue - when I restart the 1st node, it starts generating blocks from the beginning.
Is that expected?
3284954ms th_a object_database.cpp:94 open ] Opening object database from /data/blockchain ...
3284964ms th_a object_database.cpp:100 open ] Done opening object database.
3284964ms th_a db_management.cpp:128 open ] last_block->id(): 0000000156888b2349d469385656dae4b09c6f39 last_block->block_num(): 1
3284964ms th_a db_management.cpp:129 open ] head_block_id(): 0000000156888b2349d469385656dae4b09c6f39 head_block_num(): 1
3284964ms th_a thread.cpp:95 thread ] name:ntp tid:140498814899968
3284964ms th_a thread.cpp:95 thread ] name:p2p tid:140498798114560
3284966ms th_a application.cpp:189 reset_p2p_node ] Configured p2p node to listen on 0.0.0.0:11010
3284967ms th_a application.cpp:241 reset_websocket_serv ] Configured websocket rpc to listen on 0.0.0.0:11011
3284967ms th_a witness.cpp:122 plugin_startup ] witness plugin: plugin_startup() begin
3284967ms th_a witness.cpp:129 plugin_startup ] Launching block production for 3 witnesses.
3284967ms th_a witness.cpp:140 plugin_startup ] witness plugin: plugin_startup() end
3284967ms th_a main.cpp:179 main ] Started witness node on a chain with 1 blocks.
-
For bootstrapping you need one 'master node' that serves as a seed node for the p2p network and produces the blocks.
--enable-stale-production forces the master node to produce blocks (assuming the correct witness keys have been installed).
--p2p-endpoint IP:PORT lets you open the P2P port of the master node so that it can act as a 'seed' node for entering the p2p network (the IP should be 0.0.0.0).
Then you can connect new nodes from outside by providing:
--seed-node <IP-of-seed>:PORT ...
Make sure that you use the same repository for the seed and the client nodes, so that they use the same genesis file and therefore the same chain ID.
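Put together, a two-node bootstrap might look like this (the IPs, ports, and data dirs here are made-up placeholders; the flags are the ones described above):

```shell
# Node 1 - seed/master: produces blocks and accepts p2p connections.
witness_node --data-dir /data/seed \
  --p2p-endpoint 0.0.0.0:11010 \
  -w '"1.6.1"' -w '"1.6.2"' -w '"1.6.3"' \
  --enable-stale-production

# Node 2 - follower: connects to the seed, no stale production.
witness_node --data-dir /data/node2 \
  --seed-node 10.0.0.1:11010 \
  -w '"1.6.4"' -w '"1.6.5"' -w '"1.6.6"' -w '"1.6.7"'
```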
-
Quote from above (the post about deleting 'object_database' in /):
There is a bug which causes 'object_database' to be created in the current working directory. Just leave it there, or cd to your desired directory and then run `/your_path/witness_node`.
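Given that bug, one way to keep the stray directory next to the real data (a sketch using the paths from the post above):

```shell
# cd into the data dir first, so the stray 'object_database' created in
# the current working directory ends up under /data instead of /.
cd /data
/bitshares-2/programs/witness_node/witness_node --data-dir /data \
  -w '"1.6.1"' -w '"1.6.2"' -w '"1.6.3"' --enable-stale-production
```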
Quote from above (the "still not working" post about the 1st node regenerating blocks from the beginning):
Something is wrong... More logs, please?
Quote from above (the bootstrapping / seed-node explanation):
I think the OP has already done these..
//Edit: perhaps it would be better to move this thread to the 'Help and technical support' sub-forum.
-
Did all that.
Quote from above (the bootstrapping / seed-node explanation):
-
Can we concentrate on the simplest scenario, please?
One node ONLY. Start, Ctrl+C, start again.
Should it continue generating blocks, or should it start from 1?
Logs attached
root@c478e47bfe87:/bitshares# /bitshares-2/programs/witness_node/witness_node --data-dir /data -w '"1.6.1"' -w '"1.6.2"' -w '"1.6.3"' --enable-stale-production
2939217ms th_a witness.cpp:89 plugin_initialize ] witness plugin: plugin_initialize() begin
2939218ms th_a witness.cpp:99 plugin_initialize ] key_id_to_wif_pair: ["BTS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV","5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3"]
2939218ms th_a witness.cpp:117 plugin_initialize ] witness plugin: plugin_initialize() end
2939218ms th_a object_database.cpp:94 open ] Opening object database from /data/blockchain ...
2939227ms th_a object_database.cpp:100 open ] Done opening object database.
2939228ms th_a thread.cpp:95 thread ] name:ntp tid:140323165701888
2939228ms th_a thread.cpp:95 thread ] name:p2p tid:140323148916480
2939230ms th_a application.cpp:189 reset_p2p_node ] Configured p2p node to listen on 0.0.0.0:11010
2939231ms th_a application.cpp:241 reset_websocket_serv ] Configured websocket rpc to listen on 0.0.0.0:11011
2939231ms th_a witness.cpp:122 plugin_startup ] witness plugin: plugin_startup() begin
2939231ms th_a witness.cpp:129 plugin_startup ] Launching block production for 3 witnesses.
********************************
* *
* ------- NEW CHAIN ------ *
* - Welcome to Graphene! - *
* ------------------------ *
* *
********************************
Your genesis seems to have an old timestamp
Please consider using the --genesis-timestamp option to give your genesis a recent timestamp
2939232ms th_a witness.cpp:140 plugin_startup ] witness plugin: plugin_startup() end
2939232ms th_a main.cpp:179 main ] Started witness node on a chain with 0 blocks.
2939232ms th_a main.cpp:180 main ] Chain ID is a2b1fc46abd38dde78382a03e023681630a4bd5816486f8d92b2a740ab591083
2939241ms ntp ntp.cpp:177 read_loop ] ntp_delta_time updated to -3934 us
2940000ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2941004ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2942004ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2943004ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2944004ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2945004ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2946004ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2947007ms th_a witness.cpp:188 block_production_loo ] Generated block #1 with timestamp 2016-05-09T09:49:07 at time 2016-05-09T09:49:07
2948004ms th_a witness.cpp:197 block_production_loo ] Not producing block because slot has not yet arrived
2949004ms th_a witness.cpp:197 block_production_loo ] Not producing block because slot has not yet arrived
2950004ms th_a witness.cpp:197 block_production_loo ] Not producing block because slot has not yet arrived
2951004ms th_a witness.cpp:188 block_production_loo ] Generated block #2 with timestamp 2016-05-09T09:49:11 at time 2016-05-09T09:49:11
2952004ms th_a witness.cpp:188 block_production_loo ] Generated block #3 with timestamp 2016-05-09T09:49:12 at time 2016-05-09T09:49:12
2953004ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2954004ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2955004ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2955441ms asio main.cpp:170 operator() ] Caught SIGINT attempting to exit cleanly
2955442ms th_a main.cpp:183 main ] Exiting from signal 2
root@c478e47bfe87:/bitshares# /bitshares-2/programs/witness_node/witness_node --data-dir /data -w '"1.6.1"' -w '"1.6.2"' -w '"1.6.3"' --enable-stale-production
2958020ms th_a witness.cpp:89 plugin_initialize ] witness plugin: plugin_initialize() begin
2958021ms th_a witness.cpp:99 plugin_initialize ] key_id_to_wif_pair: ["BTS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV","5KQwrPbwdL6PhXujxW37FSSQZ1JiwsST4cqQzDeyXtP79zkvFD3"]
2958021ms th_a witness.cpp:117 plugin_initialize ] witness plugin: plugin_initialize() end
2958021ms th_a object_database.cpp:94 open ] Opening object database from /data/blockchain ...
2958031ms th_a object_database.cpp:100 open ] Done opening object database.
2958031ms th_a db_management.cpp:128 open ] last_block->id(): 00000001f21bcc33b71c1cb283244086c35d28cb last_block->block_num(): 1
2958032ms th_a db_management.cpp:129 open ] head_block_id(): 00000001f21bcc33b71c1cb283244086c35d28cb head_block_num(): 1
2958032ms th_a thread.cpp:95 thread ] name:ntp tid:140327193937664
2958032ms ntp ntp.cpp:202 read_loop ] exiting ntp read_loop
2958033ms th_a thread.cpp:95 thread ] name:p2p tid:140327109256960
2958034ms th_a application.cpp:189 reset_p2p_node ] Configured p2p node to listen on 0.0.0.0:11010
2958034ms th_a application.cpp:241 reset_websocket_serv ] Configured websocket rpc to listen on 0.0.0.0:11011
2958034ms th_a witness.cpp:122 plugin_startup ] witness plugin: plugin_startup() begin
2958034ms th_a witness.cpp:129 plugin_startup ] Launching block production for 3 witnesses.
2958034ms th_a witness.cpp:140 plugin_startup ] witness plugin: plugin_startup() end
2958034ms th_a main.cpp:179 main ] Started witness node on a chain with 1 blocks.
2958034ms th_a main.cpp:180 main ] Chain ID is a2b1fc46abd38dde78382a03e023681630a4bd5816486f8d92b2a740ab591083
2959000ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2960000ms th_a witness.cpp:194 block_production_loo ] Not producing block because it isn't my turn
2961000ms th_a witness.cpp:188 block_production_loo ] Generated block #2 with timestamp 2016-05-09T09:49:21 at time 2016-05-09T09:49:21
2962000ms th_a witness.cpp:188 block_production_loo ] Generated block #3 with timestamp 2016-05-09T09:49:22 at time 2016-05-09T09:49:22
2962717ms asio main.cpp:170 operator() ] Caught SIGINT attempting to exit cleanly
2962717ms th_a main.cpp:183 main ] Exiting from signal 2
-
If I recall correctly, the genesis block has 11(!) initial witnesses. So you need to have
-w '"1.6.0"' -w '"1.6.1"' -w '"1.6.2"' -w '"1.6.3"' -w '"1.6.4"' -w '"1.6.5"' -w '"1.6.6"' -w '"1.6.7"' -w '"1.6.8"' -w '"1.6.9"' -w '"1.6.10"'
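If that's the case, typing out all eleven -w flags is error-prone; a small shell loop can build them (a convenience sketch; the ID range follows the recollection above and should be adjusted to match your genesis):

```shell
# Build the -w arguments for the 11 initial witnesses 1.6.0 .. 1.6.10.
args=""
for i in $(seq 0 10); do
  args="$args -w '\"1.6.$i\"'"
done
echo "$args"
# Then something like: eval "witness_node --data-dir /data $args --enable-stale-production"
```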
-
Quote from above (the one-node start / Ctrl+C / restart question):
Can you please wait longer before pressing Ctrl+C? For example, 20 blocks? I believe it has something to do with the "irreversible block".
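For reference, Graphene derives the last irreversible block from the block numbers the active witnesses have most recently confirmed. A simplified shell model of that rule, assuming the default 70% threshold (this is a sketch of the idea, not the actual C++ implementation):

```shell
# Given each witness's last confirmed block number, return the block
# number confirmed by at least ~70% of witnesses: sort the numbers
# and skip the lowest (100 - 70)% of entries.
last_irreversible_block() {
  offset=$(( (100 - 70) * $# / 100 ))
  printf '%s\n' "$@" | sort -n | sed -n "$(( offset + 1 ))p"
}

# 11 witnesses, but only three have confirmed a recent block: the
# irreversible block stays at 0, so a quick Ctrl+C loses the freshly
# produced blocks on restart.
last_irreversible_block 0 0 0 0 0 0 0 0 1 2 3   # prints 0
```

That would explain why the node restarted at block 1 after only a few blocks were produced: blocks past the irreversible point are not considered final.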
-
Managed to solve both issues:
1) Added cross-references with -s from each of the 3 nodes to the others
2) Waited longer
3) Did a set of restarts and cleanups
That's it. Very strange, but it works now.
Thank you very much.