General Discussion / Re: Test Net for Advanced Users
« on: August 22, 2015, 09:52:58 am »
Ran out of disk space.
p2p.log too large.
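If anyone else hits this, truncating the file in place should reclaim the space right away (a sketch; the path is a guess for your data directory, and the space only frees immediately if the logger opened the file in append mode):
Code: [Select]
truncate -s 0 test5/logs/p2p/p2p.log   # path is a guess, adjust to your setup

# or cap future growth with logrotate (also a sketch, adjust paths):
cat > /etc/logrotate.d/witness_node <<'EOF'
/home/user/test5/logs/p2p/p2p.log {
    size 100M
    rotate 3
    copytruncate
    compress
}
EOF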

2015-08-21T09:07:54 p2p:p2p_network_connect_loop display_current_conn ] my id is 2a9dd11186ee39434fc36139417a1a83ab671f79cc9a3d112f6c49f8302ccf02a1 node.cpp:1626
2015-08-21T09:07:54 p2p:p2p_network_connect_loop display_current_conn ] active: 176.221.43.130:33323 with 0a69dc25fca6f9e85284deda98c92a51bc0f659adfa976e017ebf61620ab32d2c1 [outbound] node.cpp:1633
...
2015-08-21T09:07:57 p2p:message read_loop on_message ] handling message current_time_reply_message_type 6d669ff6f384895b812201eb0187f850f0bd8224 size 24 from peer 176.221.43.130:33323 node.cpp:1651
...
2015-08-21T09:08:00 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Disconnecting peer 176.221.43.130:33323 because they didn't respond to my request for sync item ids after 0000000000000000000000000000000000000000 node.cpp:1317
2015-08-21T09:08:04 p2p:p2p_network_connect_loop display_current_conn ] handshaking: 176.221.43.130:33323 with 000000000000000000000000000000000000000000000000000000000000000000 [unknown] node.cpp:1640
2015-08-21T09:08:04 p2p:connect_to_task connect_to ] fatal: error connecting to peer 176.221.43.130:33323: 0 exception: unspecified
Cannot assign requested address
{"message":"Cannot assign requested address"}
asio asio.cpp:59 error_handler peer_connection.cpp:255
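Before clearing the log, it can be worth checking which call sites produce most of the volume; something like this (a rough sketch) ranks them:
Code: [Select]
# count log lines per logging call site (the token right before the ']')
grep -oE '[a-z_]+ \]' p2p.log | sort | uniq -c | sort -rn | head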
I have two nodes running, both with a fixed p2p port: one is 62015 and the other is 60002.
Quote: Try 176.221.43.130:33323 as seed... if not already doing so. Will have a look if I see a reference to your IP somewhere.. Mine is 114.92.254.159:62015 or other ports.
Quote: Yes, it's alive nonstop. Which is your node? I see some coming and going in netstat; maybe yours is somehow blacklisted or so...
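For reference, this is roughly how I pin the p2p port at startup (the --p2p-endpoint option name is from memory; check ./witness_node --help in your build):
Code: [Select]
./witness_node -d test5 --p2p-endpoint "0.0.0.0:62015" -s "176.221.43.130:33323"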
I see things like this in reference to your IP:
Code: [Select]
2015-08-21T08:46:31 p2p:message read_loop forward_firewall_che ] forwarding firewall check for node 114.92.254.159:63542 to peer 176.9.234.167:58896 node.cpp:3301
2015-08-21T08:46:32 p2p:message read_loop on_check_firewall_re ] Peer 176.9.234.167:58896 reports firewall check status unable_to_connect for 114.92.254.159:63542 node.cpp:3394
2015-08-21T08:46:32 p2p:message read_loop send_message ] peer_connection::send_message() enqueueing message of type 5015 for peer 114.92.254.159:63542 peer_connection.cpp:365
2015-08-21T08:46:32 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:32 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:32 p2p:p2p_network_connect_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:32 p2p:p2p_network_connect_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:32 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:33 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:33 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:33 p2p:message read_loop display_current_conn ] active: 114.92.254.159:63542 with 9732257b913c2a994ae08ee742ec20c25848a8cb9e88c2aa963fc7ee8e2b40fef9 [inbound] node.cpp:1633
2015-08-21T08:46:34 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:34 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:36 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:36 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:37 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:37 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:38 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:38 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:39 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:39 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:40 p2p:message read_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:40 p2p:advertise_inventory_loop clear_old_inventory ] Expiring old inventory for peer 114.92.254.159:63542: removing 0 items advertised to peer (0 left), and 0 advertised to us (0 left) peer_connection.cpp:479
2015-08-21T08:46:41 p2p:message read_loop on_connection_closed ] Remote peer 114.92.254.159:63542 closed their connection to us node.cpp:2724
2015-08-21T08:46:41 p2p:message read_loop schedule_peer_for_de ] scheduling peer for deletion: 114.92.254.159:63542 (this will not block)
Those lines with the firewall check sound like there is some potential for being blacklisted. On the other hand, it clearly states you are not blocked.
Do you know whether it is on purpose that the port changes every time one fires up the node? Have you already tried moving to a fixed port?
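One way to rule out an actual firewall problem is to probe the p2p port from a machine outside your network (assuming netcat is installed):
Code: [Select]
nc -vz -w 5 114.92.254.159 62015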
Already set it as seed. Maybe it is just because of a slow/unstable connection? The node is in China.
Quote: Try 176.221.43.130:33323 as seed... if not already doing so. Will have a look if I see a reference to your IP somewhere.. Mine is 114.92.254.159:62015 or other ports.
Quote: Yes, it's alive nonstop. Which is your node? I see some coming and going in netstat; maybe yours is somehow blacklisted or so...
Yes, I think I have.
My node crashed while generating a block. No interesting info found in the logs.
Will launch it in gdb this time.
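In case anyone wants to do the same, this is roughly the invocation (a sketch; substitute your own options after --args):
Code: [Select]
# run under gdb so a crash stops at the faulting frame instead of exiting
gdb -ex run --args ./witness_node -d test5 -s "176.221.43.130:33323"
# after a crash, type `bt` at the (gdb) prompt for a backtrace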
Did you have
-DCMAKE_BUILD_TYPE=Debug
while building? Mine comments in quite some detail on everything in the console...
cmake -DBOOST_ROOT="/app/boost_1_57_0.bin" -DCMAKE_BUILD_TYPE=Debug .
But the 'crash' last time looks like a clean exit. No info at all.
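If it really was a clean exit rather than a crash, the exit status will say so; enabling core dumps first also distinguishes the two (a signal-killed process reports 128+signal and leaves a core file):
Code: [Select]
ulimit -c unlimited          # allow core dumps in this shell
./witness_node -d test5 ...  # your usual options
echo "exit status: $?"       # 0 = clean exit, 139 = SIGSEGV, 134 = SIGABRT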
Hey Everyone,
Quote: Hi, is the chain still alive? I'm stuck at about an hour ago and unable to connect right now.
Just got back "Online". 10 AM in the morning. Seems most of you are in the western hemisphere....
However, I also don't see the original seed node "104.236.51.238:1776" anymore (but I think it already left when I went to bed). My node on port 33323 doesn't seem to have major issues (at least not like the last times) and has now been running ~10h nonstop. Here are some errors grepped from my p2p log right now:
Code: [Select]
2015-08-21T08:22:23 p2p:send_queued_messages_task send_queued_messages ] Error sending message: {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":37,"method":"operator()","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:22:23"},"format":"${message} ","data":{"message":"Bad file descriptor"}},{"context":{"level":"warn","file":"stcp_socket.cpp","line":153,"method":"writesome","hostname":"","thread_name":"p2p","timestamp":"2015-08-21T08:22:23"},"format":"","data":{"len":16}},{"context":{"level":"warn","file":"message_oriented_connection.cpp","line":264,"method":"send_message","hostname":"","thread_name":"p2p","timestamp":"2015-08-21T08:22:23"},"format":"unable to send message","data":{}}]}. Closing connection. peer_connection.cpp:303
and then
Code: [Select]
2015-08-21T08:26:38 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.156.226.183:58884: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:38 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:38"},"format":"${message} ","data":{"message":"Operation canceled"}}]} peer_connection.cpp:158
2015-08-21T08:26:42 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.236.255.53:36548: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:42 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:42"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:158
2015-08-21T08:26:42 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.236.51.238:1776: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:42 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:42"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:158
2015-08-21T08:26:42 p2p:connect_to_task connect_to ] fatal: error connecting to peer 104.131.205.149:44815: 0 exception: unspecified
asio asio.cpp:59 error_handler peer_connection.cpp:255
2015-08-21T08:26:42 p2p:delayed_peer_deletion_task destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":59,"method":"error_handler","hostname":"","thread_name":"asio","timestamp":"2015-08-21T08:26:42"},"format":"${message} ","data":{"message":"Connection refused"}}]} peer_connection.cpp:158
So it seems it is complaining about some peer nodes that are not online anymore...
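Side note: the exception blobs in these lines are plain JSON after the prefix, so they pretty-print nicely if you have jq installed (rough sketch):
Code: [Select]
grep -o '{"code".*}' p2p.log | head -n 1 | jq .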
To be more precise, these nodes seem to go online and offline. I can see them in netstat, but they seem to have changed their port, presumably because they were restarted without a fixed p2p port in the config? Maybe it's a good idea to fix that port, or not? Is the networking part supposed to sort this out, or is it even on purpose that the ports change every time you restart your node?
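To watch this live, I just poll the node's sockets (the process name is an assumption; adjust to whatever your binary is called):
Code: [Select]
watch -n 5 'netstat -tnp 2>/dev/null | grep witness_node'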
./witness_node -d test5 --genesis-json ...
Did the chain ID change or am I on a fork?
test5 is a brand new data directory.
Code: [Select]
./witness_node -d test5 --genesis-json /home/james/Downloads/aug-20-test-genesis.json --resync-blockchain --witness-id '"1.6.5250"' --private-key '["GPH7kNZtp64ZR1R4yC2w9bDLFNHkM8L2AFZSj1E", "5Jnwg...JC7Ur"]' -s "104.236.51.238:1776"
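A quick way to tell "new chain" from "fork": the chain ID is derived from the genesis state, so if the genesis file changed since the last sync, the chain was relaunched. Comparing file digests between nodes is a cheap proxy for that (not the actual chain_id computation):
Code: [Select]
sha256sum /home/james/Downloads/aug-20-test-genesis.json
# same genesis but diverging head_block_id in `info` => fork
# different genesis digest => relaunched chain with a new chain ID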
Quote: Try connecting to 176.221.43.130:33323, that's where all the blocks I get seem to be coming from, so I'm guessing that's my peer with the lowest latency to the rest of the network.
Yes, that's one.
Sounds good. May I have your p2p IP/port so that I can re-sync?
Code: [Select]
info
{
"head_block_num": 21949,
"head_block_id": "000055bd60276952c37246d38eee3cf9780c71d6",
"head_block_age": "0 second old",
"next_maintenance_time": "18 seconds in the future",
"chain_id": "d011922587473757011118587f93afcc314fbaea094fc1055574721b27975083",
"active_witnesses": [
"1.6.0",
"1.6.1",
"1.6.2",
"1.6.3",
"1.6.4",
"1.6.5",
"1.6.6",
"1.6.7",
"1.6.8",
"1.6.9",
"1.6.10",
"1.6.11",
"1.6.12",
"1.6.13",
"1.6.14",
"1.6.15",
"1.6.16",
"1.6.17",
"1.6.18",
"1.6.19",
"1.6.20",
"1.6.21",
"1.6.22",
"1.6.23",
"1.6.24",
"1.6.25",
"1.6.26",
"1.6.27",
"1.6.28",
"1.6.29",
"1.6.30",
"1.6.31",
"1.6.32",
"1.6.33",
"1.6.34",
"1.6.35",
"1.6.36",
"1.6.37",
"1.6.38",
"1.6.39",
"1.6.40",
"1.6.41",
"1.6.42",
"1.6.43",
"1.6.44",
"1.6.45",
"1.6.46",
"1.6.47",
"1.6.48",
"1.6.49",
"1.6.50",
"1.6.51",
"1.6.52",
"1.6.53",
"1.6.54",
"1.6.55",
"1.6.56",
"1.6.57",
"1.6.58",
"1.6.59",
"1.6.60",
"1.6.61",
"1.6.62",
"1.6.63",
"1.6.64",
"1.6.65",
"1.6.66",
"1.6.67",
"1.6.68",
"1.6.69",
"1.6.70",
"1.6.71",
"1.6.72",
"1.6.73",
"1.6.74",
"1.6.75",
"1.6.76",
"1.6.77",
"1.6.78",
"1.6.79",
"1.6.80",
"1.6.81",
"1.6.82",
"1.6.83",
"1.6.84",
"1.6.85",
"1.6.86",
"1.6.87",
"1.6.88",
"1.6.89",
"1.6.90",
"1.6.91",
"1.6.92",
"1.6.93",
"1.6.94",
"1.6.95",
"1.6.96",
"1.6.1537",
"1.6.5246",
"1.6.5247",
"1.6.5248"
],
"active_committee_members": [
"1.5.0",
"1.5.1",
"1.5.2",
"1.5.3",
"1.5.4",
"1.5.5",
"1.5.6",
"1.5.7",
"1.5.8",
"1.5.9",
"1.5.10"
],
"entropy": "a04cd87714d3013b2ed9b14599ae2b2ad7b7e8e1"
}
I'm seeing a few brief pauses occasionally that are most likely missed blocks, but no sign of a serious failure yet. I'll dig into the logs and see what I can find. Is there a command I'm missing to get delegate participation, or recent missed blocks, or list forks?
$ telnet 104.236.51.238 1776
Trying 104.236.51.238...
telnet: Unable to connect to remote host: Connection refused
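If netcat or telnet isn't handy, bash's /dev/tcp pseudo-device makes the same reachability check scriptable:
Code: [Select]
timeout 5 bash -c 'cat < /dev/null > /dev/tcp/104.236.51.238/1776' \
  && echo reachable || echo unreachable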