Author Topic: is the witness thing running under win?  (Read 11454 times)


Offline Miner9r

  • Newbie
  • *
  • Posts: 10
    • View Profile
  • BitShares: miner9r
As of July 12 (v2.0.160702) this bug is still present.  I'm building from source under Linux.

My blockchain sync is jammed on block 2821722, presumably because, as roadscape found above, the next block (2821723) contains a huge text blob (the Bible, a typical piece of graffiti-style blockchain vandalism).

This needs to be fixed *immediately*, or a workaround needs to be provided in the top-level documentation.

This seems like a CRITICAL bug that is throttling the entire BitShares ecosystem: *No New Users*!!  Everything seems to be working for the old-timers who already have the chain, while new users are throwing up their hands and silently leaving (possibly in droves, over the last several months, SINCE JANUARY!!).  It's incredibly frustrating to follow the official documentation (which is already scattered, incomplete, and outdated) and run into inexplicable jams like this.

I've been hammering on this for over a week, trying to bring up BitShares as a new user, and have only tunneled this far through relatively high expertise and perseverance.

In a sense, we don't actually have working software or a working blockchain at the moment.  It's a closed system: no new users!

Please fix it!!

===

As a further comment on the bug: how did that 1MB text get in there?  The protocol should have built-in limits for all of these fields, and the rest of the code needs to enforce those limits and handle values right up to the known edges.  As the user base expands, you can expect plenty of accidental and malicious error injection like this.
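
For illustration only, here is a sketch of the kind of check I mean; it is not the actual graphene validation code, and the 4 KB limit and the function name are assumptions.  The idea is simply that every variable-length field carries an explicit cap enforced when an operation is validated:

Code: [Select]
// Sketch of a per-field size cap enforced at operation validation time.
// The real code would use the project's own assertion macros and its own limits.
#include <cstddef>
#include <stdexcept>
#include <string>

constexpr std::size_t MAX_ASSET_DESCRIPTION_SIZE = 4096; // assumed limit, for illustration

void validate_asset_description( const std::string& description )
{
   if( description.size() > MAX_ASSET_DESCRIPTION_SIZE )
      throw std::invalid_argument( "asset description exceeds maximum allowed size" );
}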

Also, the release process should include a complete source build and blockchain sync from scratch on "typical" hardware.  If that's too onerous to do for every platform, at least cycle through one platform each release.

It would also help to update the official build/sync documentation as part of the release process.

Thanks to everyone for this excellent, advanced cryptocoin, exchange, and ecosystem!


BTS id:  miner9r   
(computing expert, new BTS user; please send me donations for my informative or helpful postings)

==================
STATS FROM cli_wallet:

about
{
  "client_version": "v2.0.160702",
  "graphene_revision": "3f7bcddd2546b1a054c8d46193db4efa19eab3e3",
  "graphene_revision_age": "10 days ago",
  "fc_revision": "31adee49d91275cc63aa3a47b3a9e3c826baccca",
  "fc_revision_age": "16 weeks ago",
  "compile_date": "compiled on Jul 12 2016 at 11:35:42",
  "boost_version": "1.58",
  "openssl_version": "OpenSSL 1.0.2h  3 May 2016",
  "build": "linux 64-bit"
}


get_dynamic_global_properties
{
  "id": "2.1.0",
  "head_block_number": 2821722,
  "head_block_id": "002b0e5a206413b5f17ee209705b51b3cb90fa4c",
  "time": "2016-01-20T09:28:42",
  "current_witness": "1.6.38",
  "next_maintenance_time": "2016-01-20T10:00:00",
  "last_budget_time": "2016-01-20T09:00:00",
  "witness_budget": 94350000,
  "accounts_registered_this_interval": 0,
  "recently_missed_count": 0,
  "current_aslot": 2839861,
  "recent_slots_filled": "340282366920938463463374607431768211455",
  "dynamic_flags": 0,
  "last_irreversible_block_num": 2821705
}

info
{
  "head_block_num": 2821722,
  "head_block_id": "002b0e5a206413b5f17ee209705b51b3cb90fa4c",
  "head_block_age": "25 weeks old",
  "next_maintenance_time": "25 weeks ago",
  "chain_id": "4018d7844c78f6a6c41c6a552b898022310fc5dec06da467ee7905a8dad512c8",
  "participation": "100.00000000000000000",
 ...
}

Offline abit

  • Committee member
  • Hero Member
  • *
  • Posts: 4664
    • View Profile
    • Abit's Hive Blog
  • BitShares: abit
  • GitHub: abitmore
Quote from: roadscape

The block after this one contains a huge operation: 1MB of text in the asset description field. Very likely the source of the problem.
http://cryptofresh.com/b/2821723
Good catch! Very likely.
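
To put a rough number on it (purely illustrative; the 1 Mbit/s figure below is an assumption, not a measurement), a single 1 MB operation on a slow connection already overruns the 1-second ignored-request timeout quoted further down the thread:

Code: [Select]
1 MB block payload            ~ 8,000,000 bits
assumed usable peer bandwidth =  1 Mbit/s      (assumption, for illustration only)
transfer time                 ~ 8,000,000 / 1,000,000 = 8 s  >>  1 s ignored-request timeout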
BitShares committee member: abit
BitShares witness: in.abit

Offline roadscape

Quote from: abit
Further thinking: why does it magically happen on block 2821722?

The block after this one contains a huge operation: 1MB of text in the asset description field. Very likely the source of the problem.
http://cryptofresh.com/b/2821723
http://cryptofresh.com  |  witness: roadscape

Offline abit

  • Committee member
  • Hero Member
  • *
  • Posts: 4664
    • View Profile
    • Abit's Hive Blog
  • BitShares: abit
  • GitHub: abitmore
Quote from: cube
I tested it and confirm the issue has to do with low resources, i.e. low CPU power or bandwidth.
It's strange that it occurs on exactly the same block: 2821722.

I replaced the blockchain folder on one node with a full backup, and now it runs well. The other node is catching up as well.

Trying to reproduce.

//Update:
Reproduced.
Most probably you're correct. Lots of entries like this in the log:
Code: [Select]
2016-04-21T12:03:29 p2p:message read_loop fetch_next_batch_of_ ] sync: sending a request for the next items after 002b0e5a206413b5f17ee209705b51b3cb90fa4c to peer 188.165.233.53:54674, (full request is ["002b0e48227e963b3fa7ebede652334b696e59d8","002b0e528f2ba15ab707c3646bb8b07a0007615f","002b0e571c3f3d9a9c7713ddacd69531c615ebe3","002b0e5966422ef475e20ef2c96acdaec7d51127","002b0e5a206413b5f17ee209705b51b3cb90fa4c"])                     node.cpp:2427

2016-04-21T12:03:30 p2p:terminate_inactive_connections_loop terminate_inactive_c ] Disconnecting peer 188.165.233.53:54674 because they didn't respond to my request for sync item ids after ["002b0e48227e963b3fa7ebede652334b696e59d8","002b0e528f2ba15ab707c3646bb8b07a0007615f","002b0e571c3f3d9a9c7713ddacd69531c615ebe3","002b0e5966422ef475e20ef2c96acdaec7d51127","002b0e5a206413b5f17ee209705b51b3cb90fa4c"]                   node.cpp:1403


I think it can be optimized. In the code:
Code: [Select]
        // set the ignored request time out to 1 second.  When we request a block
        // or transaction from a peer, this timeout determines how long we wait for them
        // to reply before we give up and ask another peer for the item.
        // Ideally this should be significantly shorter than the block interval, because
        // we'd like to realize the block isn't coming and fetch it from a different
        // peer before the next block comes in.  At the current target of 3 second blocks,
        // 1 second seems reasonable.  When we get closer to our eventual target of 1 second
        // blocks, this will need to be re-evaluated (i.e., can we set the timeout to 500ms
        // and still handle normal network & processing delays without excessive disconnects)
        fc::microseconds active_ignored_request_timeout = fc::seconds(1);
The timeout applies even when syncing (batch fetching), but 1 second is perhaps too short for that?
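As a rough illustration of that idea only (a sketch, not the actual node.cpp change; the 6-second value, the function name, and the is_syncing flag are assumptions), the timeout could be chosen based on whether the node is still batch-syncing:
Code: [Select]
// Sketch only: keep the 1-second timeout while in sync with the network,
// but allow a longer (assumed) timeout while batch-fetching old blocks,
// since a single large block can take well over 1 second to arrive.
#include <fc/time.hpp>

const fc::microseconds active_ignored_request_timeout = fc::seconds(1); // existing value
const fc::microseconds sync_ignored_request_timeout   = fc::seconds(6); // assumed value

fc::microseconds ignored_request_timeout_for( bool is_syncing )
{
   return is_syncing ? sync_ignored_request_timeout
                     : active_ignored_request_timeout;
}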
Further thinking: why does it magically happen on block 2821722?
@cube @arhag @pc @theoretical @bytemaster
« Last Edit: April 21, 2016, 12:38:47 pm by abit »
BitShares committee member: abit
BitShares witness: in.abit

Offline cube

  • Hero Member
  • *****
  • Posts: 1404
  • Bit by bit, we will get there!
    • View Profile
  • BitShares: bitcube
I tested it and confirm the issue has to do with low resources, i.e. low CPU power or bandwidth.
ID: bitcube
bitcube is a dedicated witness and committee member. Please vote for bitcube.

Offline abit

  • Committee member
  • Hero Member
  • *
  • Posts: 4664
    • View Profile
    • Abit's Hive Blog
  • BitShares: abit
  • GitHub: abitmore
@cube I set up 2 new nodes today and both ran into the same issue.
Ubuntu 14.04, 64-bit.

Most of the time the CPU is at 100%.

gdb stack dump below.

Code: [Select]
(gdb) thread apply all bt

Thread 5 (Thread 0x7ffff56dc700 (LWP 5918)):
#0  __memcmp_sse4_1 () at ../sysdeps/x86_64/multiarch/memcmp-sse4.S:1572
#1  0x0000000000fa192e in fc::operator==(fc::ripemd160 const&, fc::ripemd160 const&) ()
#2  0x00000000010a72c7 in graphene::net::detail::node_impl::have_already_received_sync_item(fc::ripemd160 const&) ()
#3  0x00000000010d31ef in graphene::net::detail::node_impl::fetch_sync_items_loop() ()
#4  0x00000000010d3c2c in fc::detail::void_functor_run<graphene::net::detail::node_impl::connect_to_p2p_network()::{lambda()#3}>::run(void*, fc::detail::void_functor_run<graphene::net::detail::node_impl::connect_to_p2p_network()::{lambda()#3}>) ()
#5  0x0000000000eff0e4 in fc::task_base::run_impl() ()
#6  0x0000000000efcc3f in fc::thread_d::process_tasks() ()
#7  0x0000000000efcea1 in fc::thread_d::start_process_tasks(long) ()
#8  0x00000000011d0b01 in make_fcontext ()
#9  0x0000000000000000 in ?? ()

Thread 4 (Thread 0x7ffff4edb700 (LWP 5919)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x0000000000fc5580 in boost::asio::detail::task_io_service::run(boost::system::error_code&) ()
#2  0x0000000000fc80c9 in boost_asio_detail_posix_thread_function ()
#3  0x00007ffff7bc4182 in start_thread (arg=0x7ffff4edb700) at pthread_create.c:312
#4  0x00007ffff6aa247d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 3 (Thread 0x7ffff5edd700 (LWP 5917)):
#0  0x00007ffff6aa2b13 in epoll_wait () at ../sysdeps/unix/syscall-template.S:81
#1  0x0000000000fbf867 in boost::asio::detail::epoll_reactor::run(bool, boost::asio::detail::op_queue<boost::asio::detail::task_io_service_operation>&) ()
#2  0x0000000000fc547f in boost::asio::detail::task_io_service::run(boost::system::error_code&) ()
#3  0x000000000108275c in fc::asio::default_io_service_scope::default_io_service_scope()::{lambda()#1}::operator()() const ()
#4  0x0000000001189dba in thread_proxy ()
#5  0x00007ffff7bc4182 in start_thread (arg=0x7ffff5edd700) at pthread_create.c:312
#6  0x00007ffff6aa247d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 2 (Thread 0x7ffff66de700 (LWP 5916)):
---Type <return> to continue, or q <return> to quit---
#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238
#1  0x0000000000efaf4b in boost::cv_status boost::condition_variable::wait_until<boost::chrono::steady_clock, boost::chrono::duration<long, boost::ratio<1l, 1000000000l> > >(boost::unique_lock<boost::mutex>&, boost::chrono::time_point<boost::chrono::steady_clock, boost::chrono::duration<long, boost::ratio<1l, 1000000000l> > > const&) ()
#2  0x0000000000efce4e in fc::thread_d::process_tasks() ()
#3  0x0000000000efcea1 in fc::thread_d::start_process_tasks(long) ()
#4  0x00000000011d0b01 in make_fcontext ()
#5  0x0000000000000000 in ?? ()

Thread 1 (Thread 0x7ffff7fe9780 (LWP 5912)):
#0  pthread_cond_timedwait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_timedwait.S:238
#1  0x0000000000efaf4b in boost::cv_status boost::condition_variable::wait_until<boost::chrono::steady_clock, boost::chrono::duration<long, boost::ratio<1l, 1000000000l> > >(boost::unique_lock<boost::mutex>&, boost::chrono::time_point<boost::chrono::steady_clock, boost::chrono::duration<long, boost::ratio<1l, 1000000000l> > > const&) ()
#2  0x0000000000efce4e in fc::thread_d::process_tasks() ()
#3  0x0000000000efcea1 in fc::thread_d::start_process_tasks(long) ()
#4  0x00000000011d0b01 in make_fcontext ()
#5  0x0000000000000000 in ?? ()

BitShares committee member: abit
BitShares witness: in.abit

Offline cube

  • Hero Member
  • *****
  • Posts: 1404
  • Bit by bit, we will get there!
    • View Profile
  • BitShares: bitcube
Did you remove the object and blockchain data folders before running the witness node?

Edit: The new logging function reduces log output to the console significantly, so what you are seeing at the console seems normal.  The way to check whether you are progressing is from the cli_wallet, e.g.:

Code: [Select]
{
  "head_block_num": 4033345,

Is the head block number increasing?
« Last Edit: March 02, 2016, 10:18:18 pm by cube »
ID: bitcube
bitcube is a dedicated witness and committee member. Please vote for bitcube.

Offline tonyk

  • Hero Member
  • *****
  • Posts: 3308
    • View Profile
Quote from: cube
You can try this (latest version compile) - https://github.com/btscube/bitshares-2/releases/tag/2.0.160223

I got it to work under similar conditions to yours.  If this worked for you, let me know.

same  :(

Lack of arbitrage is the problem, isn't it? And this 'should' solve it.

Offline cube

  • Hero Member
  • *****
  • Posts: 1404
  • Bit by bit, we will get there!
    • View Profile
  • BitShares: bitcube
You can try this (latest version compile) - https://github.com/btscube/bitshares-2/releases/tag/2.0.160223

I got it to work under similar conditions to yours.  If this worked for you, let me know.
« Last Edit: March 01, 2016, 10:06:50 am by cube »
ID: bitcube
bitcube is a dedicated witness and committee member. Please vote for bitcube.

Offline tonyk

  • Hero Member
  • *****
  • Posts: 3308
    • View Profile
Lack of arbitrage is the problem, isn't it? And this 'should' solve it.

Offline cube

  • Hero Member
  • *****
  • Posts: 1404
  • Bit by bit, we will get there!
    • View Profile
  • BitShares: bitcube

Quote from: tonyk
well, mine is indeed stuck...the head block number stays the same and matches the one from the pic above "head_block_num": 2821722,

info
{
  "head_block_num": 2821722,
  "head_block_id": "002b0e5a206413b5f17ee209705b51b3cb90fa4c",
  "head_block_age": "13 days old",
  "next_maintenance_time": "13 days ago",
  "chain_id": "4018d7844c78f6a6c41c6a552b898022310fc5dec06da467ee7905a8dad512c8"
,
  "participation": "100.00000000000000000",
  "active_witnesses": [

Well, we are coming close to an answer now.  You did manage to sync up to a certain point: "head_block_num": 2821722.

Now a few things to do:

1) What is your Internet bandwidth?
2) Is your system clock set to the correct date/time?
3) Compress C:\Program Files\BitShares 2\bin\witness_node_data_dir\logs\p2p\p2p.log and share out the compressed file. PM me the link.  We will need to send the relevant info to the devs via a GitHub issue report.
ID: bitcube
bitcube is a dedicated witness and committee member. Please vote for bitcube.

Offline tonyk

  • Hero Member
  • *****
  • Posts: 3308
    • View Profile
Quote from: cube
When the wallet runs, enter 'info' at the 'new' prompt.  Look out for 'head_block_num'.  This number will be increasing even though the witness_node seems to be stuck with no new output from it.  But it is not stuck.  It is working quietly.


well, mine is indeed stuck...the head block number stays the same and matches the one from the pic above "head_block_num": 2821722,
I did not believe that running it from the shortcut would make a difference compared to doing the same from cmd, but I tried anyway.

Maybe yours is moving because it is indeed downloading blocks before that one (# 2821722)... or it is something with my whole setup and witness_node.exe is fine.
Code: [Select]
info
{
  "head_block_num": 2821722,
  "head_block_id": "002b0e5a206413b5f17ee209705b51b3cb90fa4c",
  "head_block_age": "13 days old",
  "next_maintenance_time": "13 days ago",
  "chain_id": "4018d7844c78f6a6c41c6a552b898022310fc5dec06da467ee7905a8dad512c8"
,
  "participation": "100.00000000000000000",
  "active_witnesses": [
    "1.6.1",
    "1.6.3",
    "1.6.4",
    "1.6.5",
    "1.6.6",
    "1.6.7",
    "1.6.8",
    "1.6.9",
    "1.6.10",
    "1.6.11",
    "1.6.12",
    "1.6.13",
    "1.6.14",
    "1.6.15",
    "1.6.16",
    "1.6.17",
    "1.6.18",
    "1.6.19",
    "1.6.20",
    "1.6.21",
    "1.6.22",
    "1.6.23",
    "1.6.24",
    "1.6.25",
    "1.6.26",
    "1.6.27",
    "1.6.28",
    "1.6.29",
    "1.6.32",
    "1.6.33",
    "1.6.34"
  ],
  "active_committee_members": [
    "1.5.0",
    "1.5.2",
    "1.5.4",
    "1.5.5",
    "1.5.6",
    "1.5.7",
    "1.5.8",
    "1.5.9",
    "1.5.10",
    "1.5.11",
    "1.5.1"
  ]
}
locked >>>

............
unlocked >>> info
info
{
  "head_block_num": 2821722,
  "head_block_id": "002b0e5a206413b5f17ee209705b51b3cb90fa4c",
  "head_block_age": "13 days old",
  "next_maintenance_time": "13 days ago",
  "chain_id": "4018d7844c78f6a6c41c6a552b898022310fc5dec06da467ee7905a8dad512c8"
,
  "participation": "100.00000000000000000",
  "active_witnesses": [
    "1.6.1",
    "1.6.3",
    "1.6.4",
    "1.6.5",
    "1.6.6",
    "1.6.7",
    "1.6.8",
    "1.6.9",
    "1.6.10",
    "1.6.11",
    "1.6.12",
    "1.6.13",
    "1.6.14",
    "1.6.15",
    "1.6.16",
    "1.6.17",
    "1.6.18",
    "1.6.19",
    "1.6.20",
    "1.6.21",
    "1.6.22",
    "1.6.23",
    "1.6.24",
    "1.6.25",
    "1.6.26",
    "1.6.27",
    "1.6.28",
    "1.6.29",
    "1.6.32",
    "1.6.33",
    "1.6.34"
  ],
  "active_committee_members": [
    "1.5.0",
    "1.5.2",
    "1.5.4",
    "1.5.5",
    "1.5.6",
    "1.5.7",
    "1.5.8",
    "1.5.9",
    "1.5.10",
    "1.5.11",
    "1.5.1"
  ]
}
unlocked >>>
Lack of arbitrage is the problem, isn't it? And this 'should' solve it.

Offline cube

  • Hero Member
  • *****
  • Posts: 1404
  • Bit by bit, we will get there!
    • View Profile
  • BitShares: bitcube
I downloaded the latest official binary and tried the witness_node as if I were a layman.  Surprisingly, I got the same result as you, i.e. witness_node.exe seemed to be stuck.  The witness node's output seems to have changed: it has become quiet, possibly to be less spammy.

Here is what you can do:

1) Follow what xeldal advised:  make a shortcut with "C:\Program Files\BitShares 2\bin\witness_node.exe" --rpc-endpoint "127.0.0.1:8090" as the target.
Run this shortcut as Administrator

2) make a shortcut with "C:\Program Files\BitShares 2\bin\cli_wallet.exe" -H 127.0.0.1:8092 -s ws://127.0.0.1:8090 as the target.
Run this shortcut as Administrator

3) When the wallet runs, enter 'info' at the 'new' prompt.  You will see a bunch of information.  Look out for 'head_block_num'.  This number will be increasing even though the witness_node seems to be stuck with no new output from it.  But it is not stuck.  It is working quietly.
« Last Edit: February 02, 2016, 10:52:26 am by cube »
ID: bitcube
bitcube is a dedicated witness and committee member. Please vote for bitcube.

Offline tonyk

  • Hero Member
  • *****
  • Posts: 3308
    • View Profile

" initial network communication with seeds but died off soon after (0 network IO). "
This is just a snapshot... it is very active on and off.

I can send you all the logs if you point me to where.
Lack of arbitrage is the problem, isn't it? And this 'should' solve it.

Offline cube

  • Hero Member
  • *****
  • Posts: 1404
  • Bit by bit, we will get there!
    • View Profile
  • BitShares: bitcube
The screenshot shows your node was able to perform an initial network communication with seeds but died off soon after (0 network IO).  So the question is: why?  Did it die off because it was blocked, or did it encounter some logic error?  Was it a network timeout because of a congested pipe or too little bandwidth?

To know that:

1) Did you disable the firewall?  If not, disable it, or at least open local port 1777.

2) Run 'netstat -a' to show the state of the network sockets to the seed nodes.

3) Show the last 20 to 30 lines of p2p.log.  This gives an idea of what the node was doing, e.g. was it killing off some socket connections, and why?
« Last Edit: February 02, 2016, 03:32:57 am by cube »
ID: bitcube
bitcube is a dedicated witness and committee member. Please vote for bitcube.