BitShares Forum
Main => Technical Support => Topic started by: silent on July 30, 2017, 07:00:48 am
-
I compiled witness_node following the documentation, but it doesn't seem to work well.
Software: Debian 9 x86, GCC 6,
Boost 1.60 (tried 1.57-1.59; none of them works, they complain about some SSL functions),
bitshares-core is from GitHub.
# witness_node -d /data/BTS_data --partial-operations true --track-account "\"1.2.22***\"" --rpc-endpoint
It takes many hours to sync, but almost every time it crashes when syncing gets to within a few days of the current date.
After a crash, or after I close it with Ctrl-C, I start it again, but it won't sync any more data, and JSON-RPC returns empty data (i.e. empty bid and ask data).
2810445ms th_a object_database.cpp:94 open ] Opening object database from /data/BTS_data/blockchain ...
2822900ms th_a object_database.cpp:100 open ] Done opening object database.
2822923ms th_a application.cpp:131 reset_p2p_node ] Adding seed node 104.200.28.117:61705
..
...
2823265ms th_a witness.cpp:122 plugin_startup ] witness plugin: plugin_startup() begin
2823266ms th_a witness.cpp:137 plugin_startup ] No witnesses configured! Please add witness IDs and private keys to configuration.
2823266ms th_a witness.cpp:138 plugin_startup ] witness plugin: plugin_startup() end
2823266ms th_a main.cpp:179 main ] Started witness node on a chain with 0 blocks.
What I want to do is run a witness_node so I can get the ask/bid prices of some coins and trade with a program.
I have some questions here:
1. How do I keep witness_node running smoothly even after restarting it?
2. Do I have to set enable-stale-production = true and add some witness-id?
I've seen some threads in the forum saying to add witness-id = "1.7.*",
but in my config file I can only add "1.6.*" entries, and I've no idea what they are.
3. I heard witness_node requires 8-24 GB of memory, and that closing it with Ctrl-C means waiting more than 10 minutes until it finishes writing data to disk.
But I only have 4 GB of memory, no swap, no SSD, and I've never seen it use all of the memory; it's usually around 1 GB.
There was one time it crashed with std::bad_alloc, but the system still had lots of free memory and there was no OOM-killer message in the system logs.
Also, closing it with a single Ctrl-C doesn't require any waiting at all.
Does this mean I'm running the wrong witness_node?
4. Are witness_node + cli_wallet enough to run a trading bot? (with witness_node's default settings)
5. Is there any public witness_node I can connect to? Then I could stop running my own. Is it safe to run a bot against a public node?
Sorry, too many questions, but I really need help. Thanks!
-
1. How do I keep witness_node running smoothly even after restarting it?
If the node didn't exit cleanly, you should restart it with --replay-blockchain.
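For example, an unclean shutdown can be recovered with a full replay (data directory taken from the command in the first post; add back any other flags you normally use):

```shell
witness_node -d /data/BTS_data --replay-blockchain
```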
2. Do I have to set enable-stale-production = true and add some witness-id?
I've seen some threads in the forum saying to add witness-id = "1.7.*",
but in my config file I can only add "1.6.*" entries, and I've no idea what they are.
No. Never use enable-stale-production. You don't need to set a witness id if you don't plan to run an actual witness.
3. I heard witness_node requires 8-24 GB of memory, and that closing it with Ctrl-C means waiting more than 10 minutes until it finishes writing data to disk.
But I only have 4 GB of memory, no swap, no SSD, and I've never seen it use all of the memory; it's usually around 1 GB.
There was one time it crashed with std::bad_alloc, but the system still had lots of free memory and there was no OOM-killer message in the system logs.
Also, closing it with a single Ctrl-C doesn't require any waiting at all.
Does this mean I'm running the wrong witness_node?
Although a witness_node with the partial-operations flag and a small list of tracked accounts uses less than 4 GB of RAM, a 4 GB system with no swap configured may be too small. It depends on what else is running.
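As a sketch, the equivalent config.ini entries for a low-memory, account-tracking node might look like this (option names should be checked against your build's witness_node --help output; the account ID is a placeholder):

```
# Keep history only for the accounts the bot trades with
partial-operations = true
track-account = "1.2.0"
# Cap how many operations are kept per tracked account (account_history plugin)
max-ops-per-account = 1000
```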
4. Are witness_node + cli_wallet enough to run a trading bot? (with witness_node's default settings)
Depending on how your bot creates transactions, you may not need the cli_wallet.
5. Is there any public witness_node I can connect to? Then I could stop running my own. Is it safe to run a bot against a public node?
Some people are running public API servers, but it would probably be a good idea to run your own. Again, this depends on your bot.
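Either way, a bot talks to witness_node over its RPC endpoint using JSON-RPC "call" requests addressed to the database API (API id 0). As a sketch, here is how such a request payload could be built in Python; the asset symbols are examples, and a real bot would send this over the node's websocket or HTTP endpoint:

```python
import json

def make_rpc_request(method, params, request_id=1):
    """Build a Graphene/BitShares-style JSON-RPC 'call' envelope.

    The database API is addressed as API id 0; `method` and `params`
    are forwarded to it by the node.
    """
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "call",
        "params": [0, method, params],
    }

# Ask the database API for the BTS/USD market ticker (bid/ask/latest).
request = make_rpc_request("get_ticker", ["BTS", "USD"])
payload = json.dumps(request)
print(payload)
```

The response comes back as a JSON object with the same "id", so the bot can match replies to requests when multiplexing several calls over one connection.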
-
A full node currently needs around 32 GB of RAM. A node with the history plugins disabled should run in around 2-3 GB.
-
A full node currently needs around 32 GB of RAM. A node with the history plugins disabled should run in around 2-3 GB.
Yesterday I downloaded the BitShares source from GitHub, built it, and ran a full node with this command:
./programs/witness/witness_node --data-dir=full_node --rpc-endpoint="0.0.0.0:11011"
After waiting many hours, my 32 GB RAM, 4-core CPU machine got stuck and crashed. Why is BitShares's blockchain data so huge? If I want to disable the history plugins, what should I do? Thanks!
-
A full node currently needs around 32 GB of RAM. A node with the history plugins disabled should run in around 2-3 GB.
Yesterday I downloaded the BitShares source from GitHub, built it, and ran a full node with this command:
./programs/witness/witness_node --data-dir=full_node --rpc-endpoint="0.0.0.0:11011"
After waiting many hours, my 32 GB RAM, 4-core CPU machine got stuck and crashed. Why is BitShares's blockchain data so huge? If I want to disable the history plugins, what should I do? Thanks!
Comment out these two lines in /programs/witness_node/main.cpp#L75-L76:
// auto history_plug = node->register_plugin<account_history::account_history_plugin>();
// auto market_history_plug = node->register_plugin<market_history::market_history_plugin>();
-
Comment out these two lines in /programs/witness_node/main.cpp#L75-L76:
// auto history_plug = node->register_plugin<account_history::account_history_plugin>();
// auto market_history_plug = node->register_plugin<market_history::market_history_plugin>();
@sahkan,
Thank you very much. By the way, with the history plugins enabled, should the BTS community be worried about the huge RAM requirement? The amount of RAM a single machine can provide is limited; at this rate, one day nobody will be able to run a BitShares full node. Am I right?
-
@sahkan, @xeroc
By the way, with the history plugins enabled, should the BTS community be worried about the huge RAM requirement? The amount of RAM a single machine can provide is limited; at this rate, one day nobody will be able to run a BitShares full node. Am I right?
-
We are still quite a ways away from the server limitations. But you are right, most desktops will not be able to run a full node.
-
We are still quite a ways away from the server limitations. But you are right, most desktops will not be able to run a full node.
If a full node could run across several server machines, with one master process and several slave processes that communicate with each other, then the total available RAM would be the sum of the RAM of all the servers running a sub-process. Would this be a solution to a single server's RAM limitation? Could the dev team implement something like this in BitShares?