Author Topic: October 5 Test Network  (Read 128512 times)


Offline spartako

  • Sr. Member
  • Posts: 401
Many delegates ended up on a big minority fork (~35% participation), including spartako and spartako_bot.

I removed the blockchain and resynced with the main chain using this seed node:
188.165.233.53:1777
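For anyone else resyncing, this is roughly what the relevant config.ini entry looks like (a sketch; `seed-node` is the standard Graphene option name, but verify against the config.ini your build generates):

```ini
# p2p seed to bootstrap the resync from (the node used above)
seed-node = 188.165.233.53:1777
```

The same endpoint can usually also be passed on the witness_node command line instead of editing the config.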

This is the error I then found:
Code:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
    {}
    th_a  fork_database.cpp:78 _push_block

    {"new_block":{"previous":"00018aeac52ac578eead894ddaefb832bab051d0","timestamp":"2015-10-09T10:10:21","witness":"1.6.19","transaction_merkle_root":"0000000000000000000000000000000000000000","extensions":[],"witness_signature":"1f6a191cc3c9f8ea6d06f1e3546931a35873961b0b0defe5689a35de504c448ddc3818793a52abbe65da08faa70b5700df2c13088629dd9c1c43a13a2f3ac9fc8c","transactions":[]}}
    th_a  db_block.cpp:197 _push_block
901261ms th_a       fork_database.cpp:57          push_block           ] Pushing block to fork database that failed to link: 00018aec0d81d4f086eaffcecc9a028749483c49, 101100
901262ms th_a       fork_database.cpp:58          push_block           ] Head: 101115, 00018afb2ccf99a7986cf678b67613061ba56d0f
901262ms th_a       application.cpp:429           handle_block         ] Error when pushing block:
3080000 unlinkable_block_exception: unlinkable block
block does not link to known chain
    {}
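A note for anyone puzzling over these ids: Graphene block ids embed the block number in their first 4 bytes, so the hashes in the log can be decoded directly. A small Python sketch, with the values taken from the log above:

```python
def block_num_from_id(block_id: str) -> int:
    # The first 8 hex chars of a Graphene block id are the big-endian block number.
    return int(block_id[:8], 16)

# The rejected block and this node's head, from the log lines above:
print(block_num_from_id("00018aec0d81d4f086eaffcecc9a028749483c49"))  # 101100
print(block_num_from_id("00018afb2ccf99a7986cf678b67613061ba56d0f"))  # 101115
# The "previous" id the new block points at - a parent this node never saw:
print(block_num_from_id("00018aeac52ac578eead894ddaefb832bab051d0"))  # 101098
```

So the incoming block 101100 claims a parent at 101098 that is not in the fork database, while this node's head is already at 101115 - exactly the "does not link to known chain" situation.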
« Last Edit: October 09, 2015, 10:43:11 am by spartako »
wallet_account_set_approval spartako

Offline spartako

  • Sr. Member
  • Posts: 401
Could someone send a large amount of CORE to dummy9?
I guess this account is used to send the initial 1000 CORE to newly registered users, and it has now stopped doing that because it has no CORE.
I just sent 100K CORE to dummy9.
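For anyone else wanting to top up the faucet account, a transfer from the cli_wallet looks roughly like this (a sketch using the account names and amount from this post; the memo text is made up, and the command shape is the standard Graphene wallet `transfer` - check `help transfer` in your build):

```text
transfer spartako dummy9 100000 CORE "testnet faucet top-up" true
```

The final `true` broadcasts the signed transaction immediately.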

jakub

  • Guest
Could someone send a large amount of CORE to dummy9?
I guess this account is used to send the initial 1000 CORE to newly registered users, and it has now stopped doing that because it has no CORE.

Offline CalabiYau



Try resyncing blocks without block production. After it is fully synced, restart the witness node with the block production option.

Tried that too, no blocks are coming my way.

same here

Use the seed node 188.165.233.53:1777

Thanks - this one is OK.

Offline wackou

When I try to connect the cli_wallet to my local witness node, it crashes with the following segfault:
Code:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff4f86700 (LWP 7101)]
0x0000000000ed7d98 in websocketpp::processor::hybi13<fc::http::detail::asio_with_stub_log>::consume(unsigned char*, unsigned long, std::error_code&) ()
(gdb) backtrace
#0  0x0000000000ed7d98 in websocketpp::processor::hybi13<fc::http::detail::asio_with_stub_log>::consume(unsigned char*, unsigned long, std::error_code&) ()
#1  0x0000000000eb92a9 in websocketpp::connection<fc::http::detail::asio_with_stub_log>::handle_read_frame(std::error_code const&, unsigned long) ()
#2  0x0000000000e8af24 in websocketpp::transport::asio::connection<fc::http::detail::asio_with_stub_log::transport_config>::handle_async_read(boost::system::error_code const&, unsigned long) ()
#3  0x0000000000ea1825 in boost::asio::detail::completion_handler<boost::asio::detail::binder2<std::_Bind<std::_Mem_fn<void (websocketpp::transport::asio::connection<fc::http::detail::asio_with_stub_log::transport_config>::*)(boost::system::error_code const&, unsigned long)> (std::shared_ptr<websocketpp::transport::asio::connection<fc::http::detail::asio_with_stub_log::transport_config> >, std::_Placeholder<1>, std::_Placeholder<2>)>, boost::system::error_code, unsigned long> >::do_complete(boost::asio::detail::task_io_service*, boost::asio::detail::task_io_service_operation*, boost::system::error_code const&, unsigned long) ()
#4  0x0000000000ea1a96 in void boost::asio::detail::strand_service::dispatch<boost::asio::detail::binder2<std::_Bind<std::_Mem_fn<void (websocketpp::transport::asio::connection<fc::http::detail::asio_with_stub_log::transport_config>::*)(boost::system::error_code const&, unsigned long)> (std::shared_ptr<websocketpp::transport::asio::connection<fc::http::detail::asio_with_stub_log::transport_config> >, std::_Placeholder<1>, std::_Placeholder<2>)>, boost::system::error_code, unsigned long> >(boost::asio::detail::strand_service::strand_impl*&, boost::asio::detail::binder2<std::_Bind<std::_Mem_fn<void (websocketpp::transport::asio::connection<fc::http::detail::asio_with_stub_log::transport_config>::*)(boost::system::error_code const&, unsigned long)> (std::shared_ptr<websocketpp::transport::asio::connection<fc::http::detail::asio_with_stub_log::transport_config> >, std::_Placeholder<1>, std::_Placeholder<2>)>, boost::system::error_code, unsigned long>&) ()
#5  0x0000000000ea1bc1 in std::_Function_handler<void (boost::system::error_code const&, unsigned long), boost::asio::detail::wrapped_handler<boost::asio::io_service::strand, std::_Bind<std::_Mem_fn<void (websocketpp::transport::asio::connection<fc::http::detail::asio_with_stub_log::transport_config>::*)(boost::system::error_code const&, unsigned long)> (std::shared_ptr<websocketpp::transport::asio::connection<fc::http::detail::asio_with_stub_log::transport_config> >, std::_Placeholder<1>, std::_Placeholder<2>)>, boost::asio::detail::is_continuation_if_running> >::_M_invoke(std::_Any_data const&, boost::system::error_code const&, unsigned long&&) ()
#6  0x0000000000ea2e8d in boost::asio::detail::read_op<boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::stream_socket_service<boost::asio::ip::tcp> >, boost::asio::mutable_buffers_1, boost::asio::detail::transfer_at_least_t, websocketpp::transport::asio::custom_alloc_handler<std::function<void (boost::system::error_code const&, unsigned long)> > >::operator()(boost::system::error_code const&, unsigned long, int) ()
#7  0x0000000000ea344d in boost::asio::detail::reactive_socket_recv_op<boost::asio::mutable_buffers_1, boost::asio::detail::read_op<boost::asio::basic_stream_socket<boost::asio::ip::tcp, boost::asio::stream_socket_service<boost::asio::ip::tcp> >, boost::asio::mutable_buffers_1, boost::asio::detail::transfer_at_least_t, websocketpp::transport::asio::custom_alloc_handler<std::function<void (boost::system::error_code const&, unsigned long)> > > >::do_complete(boost::asio::detail::task_io_service*, boost::asio::detail::task_io_service_operation*, boost::system::error_code const&, unsigned long) ()
#8  0x0000000000e2d475 in boost::asio::detail::epoll_reactor::descriptor_state::do_complete(boost::asio::detail::task_io_service*, boost::asio::detail::task_io_service_operation*, boost::system::error_code const&, unsigned long) ()
#9  0x0000000000ee72f6 in fc::asio::default_io_service_scope::default_io_service_scope()::{lambda()#1}::operator()() const ()
#10 0x000000000102fa35 in thread_proxy ()
#11 0x00007ffff774a4a4 in start_thread () from /usr/lib/libpthread.so.0
#12 0x00007ffff5e4912d in clone () from /usr/lib/libc.so.6

All it takes for me is running
Code:
./programs/cli_wallet/cli_wallet -s ws://127.0.0.1:8090
(the wallet gets stuck after printing wdata.ws_user:  wdata.ws_password:)

Did you find a solution to that problem? It started happening to me last night, and now it won't connect at all (fails with the exact same error, same stack trace every single time).
Please vote for witness wackou! More info at http://digitalgaia.io

Offline mindphlux

  • Sr. Member
  • Posts: 232


Try resyncing blocks without block production. After it is fully synced, restart the witness node with the block production option.

Tried that too, no blocks are coming my way.

same here

Use the seed node 188.165.233.53:1777
Please consider voting for my witness mindphlux.witness and my committee user mindphlux. I will not vote for changes that affect witness pay.

Offline CalabiYau



Try resyncing blocks without block production. After it is fully synced, restart the witness node with the block production option.

Tried that too, no blocks are coming my way.

same here

Offline rnglab

  • Full Member
  • Posts: 171
  • BitShares: rnglab
I have my main node getting blocks from the seed node, and a backup node being rejected by the same seed. Both nodes are updated to the latest master. This is the log from the one that is out of sync:

Code:
2015-10-09T07:09:26 p2p:dump_node_status_task     dump_node_status ] ----------------- PEER STATUS UPDATE --------------------     node.cpp:4644
2015-10-09T07:09:26 p2p:dump_node_status_task     dump_node_status ]  number of peers: 0 active, 1, 0 closing.  attempting to maintain 20 - 200 peers     node.cpp:4647
2015-10-09T07:09:26 p2p:dump_node_status_task     dump_node_status ]   handshaking peer 104.236.51.238:2005 in state ours(disconnected) theirs(disconnected)     node.cpp:4662
2015-10-09T07:09:26 p2p:dump_node_status_task     dump_node_status ] --------- MEMORY USAGE ------------     node.cpp:4665
2015-10-09T07:09:26 p2p:dump_node_status_task     dump_node_status ] node._active_sync_requests size: 0     node.cpp:4666
2015-10-09T07:09:26 p2p:dump_node_status_task     dump_node_status ] node._received_sync_items size: 0     node.cpp:4667
2015-10-09T07:09:26 p2p:dump_node_status_task     dump_node_status ] node._new_received_sync_items size: 0     node.cpp:4668
2015-10-09T07:09:26 p2p:dump_node_status_task     dump_node_status ] node._items_to_fetch size: 0     node.cpp:4669
2015-10-09T07:09:26 p2p:dump_node_status_task     dump_node_status ] node._new_inventory size: 0     node.cpp:4670
2015-10-09T07:09:26 p2p:dump_node_status_task     dump_node_status ] node._message_cache size: 0     node.cpp:4671
2015-10-09T07:09:26 p2p:dump_node_status_task     dump_node_status ] --------- END MEMORY USAGE ------------     node.cpp:4681
2015-10-09T07:09:32 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Forcibly disconnecting from handshaking peer 104.236.51.238:2005 due to inactivity of at least 5 seconds     node.cpp:1339
2015-10-09T07:09:32 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Peer's negotiating status: connecting, bytes sent: 0, bytes received: 0     node.cpp:1343
2015-10-09T07:09:32   p2p:connect_to_task           connect_to ] fatal: error connecting to peer 104.236.51.238:2005: 0 exception: unspecified
Operation canceled
 {"message":"Operation canceled"}
asio  asio.cpp:38 operator()     peer_connection.cpp:254
2015-10-09T07:09:32   p2p:connect_to_task display_current_conn ] Currently have 0 of [20/200] connections     node.cpp:1725
2015-10-09T07:09:32   p2p:connect_to_task display_current_conn ]    my id is 5ae3d9e6271d1901b807a0d0c7e09aaf18d2b21710cc3d6a7b1a30d4b2560f3cd3     node.cpp:1726
2015-10-09T07:09:32   p2p:connect_to_task trigger_p2p_network_ ] Triggering connect loop now     node.cpp:982
2015-10-09T07:09:32   p2p:connect_to_task schedule_peer_for_de ] scheduling peer for deletion: 104.236.51.238:2005 (this will not block)     node.cpp:1634
2015-10-09T07:09:32   p2p:connect_to_task schedule_peer_for_de ] asyncing delayed_peer_deletion_task to delete 1 peers     node.cpp:1639
2015-10-09T07:09:32 p2p:delayed_peer_deletion_task delayed_peer_deletio ] beginning an iteration of delayed_peer_deletion_task with 1 in queue     node.cpp:1598
2015-10-09T07:09:32 p2p:delayed_peer_deletion_task              destroy ] calling close_connection()     peer_connection.cpp:121
2015-10-09T07:09:32 p2p:delayed_peer_deletion_task              destroy ] close_connection completed normally     peer_connection.cpp:123
2015-10-09T07:09:32 p2p:delayed_peer_deletion_task              destroy ] canceling _send_queued_messages task     peer_connection.cpp:136
2015-10-09T07:09:32 p2p:delayed_peer_deletion_task              destroy ] cancel_and_wait completed normally     peer_connection.cpp:138
2015-10-09T07:09:32 p2p:delayed_peer_deletion_task              destroy ] canceling accept_or_connect_task     peer_connection.cpp:151
2015-10-09T07:09:32 p2p:delayed_peer_deletion_task              destroy ] Unexpected exception from peer_connection's accept_or_connect_task : {"code":0,"name":"exception","message":"unspecified","stack":[{"context":{"level":"error","file":"asio.cpp","line":38,"method":"operator()","hostname":"","thread_name":"asio","timestamp":"2015-10-09T07:09:32"},"format":"${message} ","data":{"message":"Operation canceled"}}]}     peer_connection.cpp:157
2015-10-09T07:09:32 p2p:delayed_peer_deletion_task   destroy_connection ] in destroy_connection() for     message_oriented_connection.cpp:280
2015-10-09T07:09:32 p2p:delayed_peer_deletion_task   destroy_connection ] in destroy_connection() for     message_oriented_connection.cpp:280
2015-10-09T07:09:32 p2p:delayed_peer_deletion_task delayed_peer_deletio ] leaving delayed_peer_deletion_task     node.cp


Offline spartako

  • Sr. Member
  • Posts: 401
It didn't work for me. I had to choose another peer node to sync from; I guess the seed node is not on the latest master yet.

mindphlux.witness is back up & on the latest master.

I helped mindphlux by searching my logs for a new seed node. I tried the network_get_connected_peers command, but it gives me a permission error, probably because it is a restricted API (I will try specifying an api-access file in config.ini).
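For reference, a sketch of what such an api-access file can look like - the shape follows the Graphene `permission_map` example that ships with witness_node, but verify the exact field names against your build before relying on it. The wildcard entry below grants every API (including network_node_api, which network_get_connected_peers needs) without a password, so it is only suitable for a local test node:

```json
{
  "permission_map": [
    [
      "*",
      {
        "password_hash_b64": "*",
        "password_salt_b64": "*",
        "allowed_apis": ["database_api", "network_broadcast_api", "history_api", "network_node_api"]
      }
    ]
  ]
}
```

It is then referenced from config.ini, e.g. `api-access = "api-access.json"`.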

Offline mindphlux

  • Sr. Member
  • Posts: 232
It didn't work for me. I had to choose another peer node to sync from; I guess the seed node is not on the latest master yet.

mindphlux.witness is back up & on the latest master.

Offline jtme

After upgrading, I'm unable to download blocks. I deleted all data dirs to be sure and used --resync-blockchain too.

It's stuck at
482000ms th_a       witness.cpp:179               block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)

Is the seednode not sending blocks? Is there another seed node I can try?

The seed node is fine; I just synced from it quite quickly. But I'm not yet on the latest master.

Offline mindphlux

  • Sr. Member
  • Posts: 232


Try resyncing blocks without block production. After it is fully synced, restart the witness node with the block production option.

Tried that too, no blocks are coming my way.

Offline ElMato

  • Sr. Member
  • Posts: 288
elmato is updated to the latest master.

Offline clayop

  • Hero Member
  • Posts: 2033
  • Bitshares Korea
  • BitShares: clayop
After upgrading, I'm unable to download blocks. I deleted all data dirs to be sure and used --resync-blockchain too.

It's stuck at
482000ms th_a       witness.cpp:179               block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)

Is the seednode not sending blocks? Is there another seed node I can try?

Try resyncing blocks without block production. After it is fully synced, restart the witness node with the block production option.
Bitshares Korea - http://www.bitshares.kr
Vote for me and see the Korean BitShares community grow
delegate-clayop
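Concretely, that procedure amounts to syncing once with the witness entries disabled and then re-enabling them. As a config.ini sketch - the `seed-node`, `witness-id`, and `private-key` key names are the standard Graphene ones, and the key values are placeholders, so adapt both to your own setup:

```ini
# Phase 1 - resync with production off: leave the witness entries commented out
seed-node = 188.165.233.53:1777
# witness-id = "1.6.19"
# private-key = ["<public-key>", "<wif-private-key>"]

# Phase 2 - once fully synced, uncomment witness-id and private-key
# above and restart witness_node.
```

With no witness-id configured, the node only follows the chain, which avoids producing blocks on a stale fork while catching up.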

Offline mindphlux

  • Sr. Member
  • Posts: 232
After upgrading, I'm unable to download blocks. I deleted all data dirs to be sure and used --resync-blockchain too.

It's stuck at
482000ms th_a       witness.cpp:179               block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)

Is the seednode not sending blocks? Is there another seed node I can try?