Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.


Messages - runestone

Pages: [1] 2
1
Technical Support / Change fee schedule using wallet cli
« on: May 05, 2020, 12:37:27 am »
Hi, I have a private testnet where I'd like to change the fee schedule. I've tried the following:

Code: [Select]
# Create proposal
propose_fee_change init0 "2020-06-01T00:00:00" {"transfer":{"fee":1000000,"price_per_kbyte":10000}} true

# Approve proposal
approve_proposal init0 1.10.1 {"active_approvals_to_add" : ["init0"]} true

But it doesn't seem to work. What am I doing wrong?

2
Technical Support / Make bitsharesjs work with a testnet
« on: January 23, 2020, 04:16:17 pm »
How can I generate owner/active/memo keys using the bitsharesjs lib that are prefixed with "TEST" (to be used on a private testnet)?
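As far as I understand, "TEST" is only a string prefix on the serialized public key; the key bytes themselves are identical to mainnet keys, and bitsharesjs exposes the prefix as a configurable setting (I believe via ChainConfig's address_prefix, but check the lib). A stdlib Python sketch of the encoding, assuming the usual Graphene format (compressed key plus a 4-byte RIPEMD160 checksum):

```python
import hashlib

# Bitcoin/Graphene base58 alphabet; leading zero bytes map to '1'.
BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = BASE58[r] + out
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def checksum(data: bytes) -> bytes:
    # Graphene uses the first 4 bytes of RIPEMD160(key); fall back to sha256
    # only so this sketch runs where OpenSSL lacks ripemd160.
    try:
        return hashlib.new("ripemd160", data).digest()[:4]
    except ValueError:
        return hashlib.sha256(data).digest()[:4]

def pubkey_string(compressed_key: bytes, prefix: str = "BTS") -> str:
    # A Graphene public-key string is just prefix + base58(key || checksum).
    return prefix + b58encode(compressed_key + checksum(compressed_key))

key = bytes.fromhex("02" + "11" * 32)   # dummy compressed point, not a real key
mainnet = pubkey_string(key, "BTS")
testnet = pubkey_string(key, "TEST")
assert mainnet[3:] == testnet[4:]       # same key bytes, only the prefix differs
```

So switching a library to a testnet should only require setting the prefix; the key generation itself is unchanged.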

3
General Discussion / Asset flipping
« on: January 15, 2020, 04:39:36 pm »
Dear BitShares community,

I've noticed that some users are attempting to try their luck with "asset flipping" (similar to domain flipping) - or at least it seems like it. Take for example: https://wallet.bitshares.org/#/account/blokzinciri.org who registered multiple "core assets" such as https://wallet.bitshares.org/#/asset/WEALTH but never uses them for anything.

In my opinion this can become a problem for BitShares, because someone could simply acquire many "core assets" and wait for someone to buy them at a high premium. This could scare organizations away from creating new gateways if they cannot get the "core assets" they are looking for.

Obviously, it's first-come, first-served, just like when you buy a traditional web domain (.net, .com, etc.), but I think BitShares could benefit from preventing excessive "core asset" registration by, for example:
1) increase the costs of creating new "core assets"
2) add a yearly fee to hold a "core asset"
3) require a certain volume of transactions between the "core asset" and (BTS, bitUSD...)

These are just examples - please let me know what you think. Also, I'd like to know how I can make this an official proposal to the BitShares community - who should I contact?

4
I've built a website that uses python-bitshares. The first request is always successful; for example, /buy/bts?amount=10 generates an RPC request similar to the one below. Notice there is only one item inside the signatures list:

Code: [Select]
{'method': 'call', 'params': ['network_broadcast', 'broadcast_transaction', [{'expiration': '2018-....', 'ref_block_num': 14306, 'ref_block_prefix': 8....., 'operations': [[2, {'fee': {'amount': 57, 'asset_id': '1.3.0'}, 'fee_paying_account': '1.2.8...', 'order': '1.7.1....', 'extensions': []}]], 'extensions': [], 'signatures': ['201...']}]], 'jsonrpc': '2.0', 'id': 12}
The second request (e.g. /cancel/order?id=$ID) fails with the following error:
Code: [Select]
  File "/home/www/venv/lib/python3.6/site-packages/bitshares/market.py", line 526, in cancel
    return self.bitshares.cancel(orderNumber, account=account)
  File "/home/www/venv/lib/python3.6/site-packages/bitshares/bitshares.py", line 1170, in cancel
    return self.finalizeOp(op, account["name"], "active", **kwargs)
  File "/home/www/venv/lib/python3.6/site-packages/bitshares/bitshares.py", line 261, in finalizeOp
    return self.txbuffer.broadcast()
  File "/home/www/venv/lib/python3.6/site-packages/bitshares/transactionbuilder.py", line 381, in broadcast
    raise e
  File "/home/www/venv/lib/python3.6/site-packages/bitshares/transactionbuilder.py", line 379, in broadcast
    ret, api="network_broadcast")
  File "/home/www/venv/lib/python3.6/site-packages/grapheneapi/graphenewsrpc.py", line 206, in method
    r = self.rpcexec(query)
  File "/home/www/venv/lib/python3.6/site-packages/bitsharesapi/bitsharesnoderpc.py", line 56, in rpcexec
    raise exceptions.UnhandledRPCError(msg)
bitsharesapi.exceptions.UnhandledRPCError: irrelevant signature included: Unnecessary signature(s) detected

The RPC call looks similar to this; this time, notice the two signatures inside the list:
Code: [Select]
{'method': 'call', 'params': ['network_broadcast', 'broadcast_transaction', [{'expiration': '2018-....', 'ref_block_num': 14306, 'ref_block_prefix': 8....., 'operations': [[2, {'fee': {'amount': 57, 'asset_id': '1.3.0'}, 'fee_paying_account': '1.2.8...', 'order': '1.7.1....', 'extensions': []}]], 'extensions': [], 'signatures': ['201...', '202...']}]], 'jsonrpc': '2.0', 'id': 12}
If I restart the web server and call the second request (/cancel/order?id=$ID) again, everything works. So basically, I can make one transaction per web-server restart :-/

My theory is that the BitShares instance or the shared_bitshares_instance is caching something related to the signatures. I'm always using a new BitShares instance for each "transaction" (buy/sell/transfer..). I've tried calling bitshares.clear() and other things - but without luck.

One important thing to mention is that I'm creating the BitShares instance using the keys parameter, and therefore do not use the sqlite database. But this shouldn't affect things.

Any ideas how I can resolve this?
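To illustrate what I think is happening - a minimal stand-alone sketch (this is not the python-bitshares API, just a hypothetical stand-in) of a cached transaction buffer accumulating signatures across requests:

```python
# Hypothetical stand-in for a cached transaction builder: if the web server
# keeps one buffer alive across requests, signatures pile up and the second
# broadcast carries a stale, irrelevant signature.

class TxBuffer:
    def __init__(self):
        self.signatures = []

    def sign(self, key):
        self.signatures.append(f"sig-by-{key}")

    def broadcast(self):
        # Mimics the node rejecting transactions with extra signatures.
        if len(self.signatures) > 1:
            raise RuntimeError("irrelevant signature included")
        return {"signatures": list(self.signatures)}


shared = TxBuffer()          # one buffer cached for the whole web server
shared.sign("active-key")
shared.broadcast()           # first request: one signature, succeeds

shared.sign("active-key")    # second request reuses the same buffer
try:
    shared.broadcast()       # now two signatures -> rejected
except RuntimeError as e:
    print(e)

fresh = TxBuffer()           # workaround: a fresh buffer per request
fresh.sign("active-key")
assert fresh.broadcast()["signatures"] == ["sig-by-active-key"]
```

If this is the failure mode, constructing a completely fresh instance (and buffer) per request, rather than reusing a module-level shared instance, should avoid it.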

5
Technical Support / Public testnet is not working?
« on: July 20, 2018, 10:49:49 am »
http://docs.bitshares.org/testnet/index.html points to http://testnet.bitshares.eu/, but it's not working. Is there no longer a public testnet available?

6
Technical Support / Debugging witness nodes
« on: May 17, 2018, 10:49:26 am »
Hi, I have two witness nodes running, in sync and without any errors, yet I cannot make https://wallet.bitshares.org connect to them (when adding them as a personal node):

ws://35.230.9.65:8090/ (2 vCPUs, 13 GB, using elasticsearch)

ws://104.199.112.108:8090/ (2 vCPUs, 13 GB, using RAM)

Please advise how I can debug these witness nodes.

7
Technical Support / Re: Unable to make elasticsearch plugin work
« on: May 10, 2018, 10:25:55 pm »
Quick update on the progress. I now have two witness nodes connecting to the same ES.
  • witness_node #1 was in sync and connected to ES
  • witness_node #2 was then started from scratch (Block 0), being connected to the same ES
No additional errors were seen on the witness_nodes or ES after attaching witness_node #2.

The only error seen is this:
Code: [Select]
fullnode         | 2582979ms th_a       database_api.cpp:282          ~database_api_impl   ] freeing database api 2748370832
fullnode         | 2583114ms th_a       application.cpp:512           handle_block         ] Got block: #26867021 time: 2018-05-10T21:43:03 latency: 114 ms from: fox  irreversible: 26867003 (-18)
fullnode         | 2583119ms th_a       elasticsearch.cpp:66          SendBulk             ] error: Unknown error
fullnode         | 2583119ms th_a       es_objects.cpp:99             updateDatabase       ] Error sending data to database
fullnode         | 2583165ms th_a       database_api.cpp:263          database_api_impl    ] creating database api 2336441072
fullnode         | 2583166ms th_a       database_api.cpp:263          database_api_impl    ] creating database api 2622826080
The error was however already present before connecting witness_node #2 to ES. I guess this is normal behaviour?

8
Technical Support / Re: Unable to make elasticsearch plugin work
« on: May 09, 2018, 06:28:19 pm »
Sure, I'll try and let you know how it goes. It will probably take a few days - it's a pain to test on a full blockchain, waiting up to 7 hours every time witness_node has to boot up using ES (hint: https://bitsharestalk.org/index.php?topic=26347.msg318083#msg318083 <-- the witness_node process only utilizes 1 CPU core, making it very slow..)

9
Technical Support / Re: Unable to make elasticsearch plugin work
« on: May 09, 2018, 02:21:10 pm »
Turns out it was a stupid mistake (hard to spot), because the error messages either are not there or do not repeat in the console output. The problem was the double quotes here:

Code: [Select]
      - BITSHARESD_ES_NODE_URL="http://elasticsearch:9200/"

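For anyone hitting the same thing: in docker-compose's list-style environment entries, surrounding quotes are passed through literally and become part of the value, so the plugin was handed `"http://elasticsearch:9200/"` including the quotes. A quick stdlib sanity check that would have caught it (the function name is mine, just for illustration):

```python
from urllib.parse import urlparse

def check_es_url(value: str) -> str:
    # Compose does not shell-strip quotes in list-style env entries; detect
    # stray quotes early instead of silently failing inside the plugin.
    if value != value.strip("\"'"):
        raise ValueError(f"env value carries literal quotes: {value!r}")
    if urlparse(value).scheme not in ("http", "https"):
        raise ValueError(f"not a URL: {value!r}")
    return value

check_es_url("http://elasticsearch:9200/")            # fine
try:
    check_es_url('"http://elasticsearch:9200/"')      # the broken variant above
except ValueError as e:
    print(e)
```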
Here is a working docker-compose.yml:
Code: [Select]
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    environment:
      - ELASTIC_PASSWORD=secret
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      stack:
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  fullnode:
    image: bitshares/bitshares-core:latest
    container_name: fullnode
    environment:
      - BITSHARESD_PLUGINS=witness elasticsearch market_history
      - BITSHARESD_ES_NODE_URL=http://elasticsearch:9200/
      - BITSHARESD_RPC_ENDPOINT=0.0.0.0:8090
      - BITSHARESD_P2P_ENDPOINT=0.0.0.0:9090
      - BITSHARESD_WITNESS_ID="1.6.122"
      - BITSHARESD_PRIVATE_KEY=["BTS...","5..."]
    networks:
      stack:
    ports:
      - 9090:9090
      - 8090:8090
    volumes:
      - fullnode:/var/lib/bitshares
    depends_on:
      - elasticsearch

volumes:
  fullnode:
  esdata:

networks:
  stack:


Next question:
Is it possible to run multiple witness_nodes that share the same ElasticSearch, or will that cause conflicts such as double inserts / race conditions or anything else like that? Basically, I'd like to host multiple witness_nodes across the globe to ensure high availability and low latency.
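I don't know how the elasticsearch plugin assigns document IDs, but in general two writers indexing the same chain are safe if each operation's document ID is derived deterministically from its position in the chain: the second node's insert then becomes an idempotent overwrite rather than a duplicate. A stdlib sketch of the idea (the ID scheme here is made up):

```python
import hashlib

def doc_id(block_num: int, trx_in_block: int, op_in_trx: int) -> str:
    # Deterministic ID from the operation's chain position: any node indexing
    # the same operation computes the same ID.
    key = f"{block_num}-{trx_in_block}-{op_in_trx}"
    return hashlib.sha256(key.encode()).hexdigest()[:16]

index = {}  # stand-in for an ES index keyed by _id

def upsert(doc):
    index[doc_id(doc["block_num"], doc["trx_in_block"], doc["op_in_trx"])] = doc

op = {"block_num": 26867021, "trx_in_block": 0, "op_in_trx": 0, "op": "transfer"}
upsert(op)   # witness_node #1 indexes the operation
upsert(op)   # witness_node #2 indexes the same operation
assert len(index) == 1   # overwrite, not a double insert
```

If the plugin instead lets ES auto-generate IDs, two nodes would produce duplicates, so this is worth verifying before running the setup.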

10
Technical Support / Unable to make elasticsearch plugin work
« on: May 06, 2018, 05:54:28 pm »
Hi, I'm trying to make a simple docker-compose.yml that sets up BitShares and Elasticsearch. However, I cannot make the elasticsearch plugin "activate".

docker-compose.yml
Code: [Select]
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    environment:
      - ELASTIC_PASSWORD=secret
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      stack:
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  fullnode:
    image: bitshares/bitshares-core:latest
    container_name: fullnode
    environment:
      - BITSHARESD_PLUGINS=witness elasticsearch market_history
      - BITSHARESD_ES_NODE_URL="http://elasticsearch:9200/"
      - BITSHARESD_RPC_ENDPOINT=0.0.0.0:8090
      - BITSHARESD_P2P_ENDPOINT=0.0.0.0:9090
      - BITSHARESD_WITNESS_ID="1.6.122"
      - BITSHARESD_PRIVATE_KEY=["BTS...","5..."]
    networks:
      stack:
    ports:
      - 9090:9090
      - 8090:8090
    volumes:
      - fullnode:/var/lib/bitshares
    depends_on:
      - elasticsearch

volumes:
  fullnode:
  esdata:

networks:
  stack:

Running docker-compose up produces the output below; the elasticsearch plugin does not seem to load, and there are no error messages:
Code: [Select]
[email protected]:/tmp# docker-compose up
Starting elasticsearch ... done
Recreating fullnode    ... done
Attaching to elasticsearch, fullnode
elasticsearch    | Setting bootstrap.password already exists. Overwrite? [y/N]Did not understand answer 'kibana'
elasticsearch    | Setting bootstrap.password already exists. Overwrite? [y/N]Exiting without modifying keystore.
elasticsearch    | [2018-05-06T17:13:08,156][INFO ][o.e.n.Node               ] [] initializing ...
elasticsearch    | [2018-05-06T17:13:08,329][INFO ][o.e.e.NodeEnvironment    ] [6GEtENF] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [75gb], net total_space [246gb], types [ext4]
elasticsearch    | [2018-05-06T17:13:08,330][INFO ][o.e.e.NodeEnvironment    ] [6GEtENF] heap size [495.3mb], compressed ordinary object pointers [true]
elasticsearch    | [2018-05-06T17:13:08,379][INFO ][o.e.n.Node               ] node name [6GEtENF] derived from node ID [6GEtENFeTleLQatLZF83kQ]; set [node.name] to override
elasticsearch    | [2018-05-06T17:13:08,380][INFO ][o.e.n.Node               ] version[6.2.4], pid[1], build[ccec39f/2018-04-12T20:37:28.497551Z], OS[Linux/4.4.0-121-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_161/25.161-b14]
elasticsearch    | [2018-05-06T17:13:08,380][INFO ][o.e.n.Node               ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.By2wKKIi, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config]
elasticsearch    | [2018-05-06T17:13:15,404][INFO ][o.e.p.PluginsService     ] [6GEtENF] loaded module [aggs-matrix-stats]
[..SNIPPET..]
elasticsearch    | [2018-05-06T17:13:15,420][INFO ][o.e.p.PluginsService     ] [6GEtENF] loaded plugin [x-pack-watcher]
elasticsearch    | [2018-05-06T17:13:27,918][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/145] [[email protected]] controller (64 bit): Version 6.2.4 (Build 524e7fe231abc1) Copyright (c) 2018 Elasticsearch BV
elasticsearch    | [2018-05-06T17:13:30,655][INFO ][o.e.d.DiscoveryModule    ] [6GEtENF] using discovery type [single-node]
elasticsearch    | [2018-05-06T17:13:32,905][INFO ][o.e.n.Node               ] initialized
elasticsearch    | [2018-05-06T17:13:32,914][INFO ][o.e.n.Node               ] [6GEtENF] starting ...
elasticsearch    | [2018-05-06T17:13:33,340][INFO ][o.e.t.TransportService   ] [6GEtENF] publish_address {172.18.0.2:9300}, bound_addresses {0.0.0.0:9300}
elasticsearch    | [2018-05-06T17:13:33,474][WARN ][o.e.b.BootstrapChecks    ] [6GEtENF] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
elasticsearch    | [2018-05-06T17:13:33,558][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [6GEtENF] publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch    | [2018-05-06T17:13:33,563][INFO ][o.e.n.Node               ] [6GEtENF] started
elasticsearch    | [2018-05-06T17:13:35,247][INFO ][o.e.l.LicenseService     ] [6GEtENF] license [ef462299-f20d-45f0-84c6-c61a92454ba2] mode [basic] - valid
elasticsearch    | [2018-05-06T17:13:35,277][INFO ][o.e.g.GatewayService     ] [6GEtENF] recovered [5] indices into cluster_state
elasticsearch    | [2018-05-06T17:13:36,667][INFO ][o.e.c.r.a.AllocationService] [6GEtENF] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-es-6-2018.05.04][0]] ...]).
fullnode         | 781920ms th_a       witness.cpp:87                plugin_initialize    ] witness plugin:  plugin_initialize() begin
fullnode         | 781920ms th_a       witness.cpp:97                plugin_initialize    ] Public Key: BTS8PhzeSSCoP83pgqXrMYCFdVrG4rZfsumjfTfQ52kNpyAHX5sKx
fullnode         | 781921ms th_a       witness.cpp:115               plugin_initialize    ] witness plugin:  plugin_initialize() end
fullnode         | 781921ms th_a       object_database.cpp:106       open                 ] Opening object database from /var/lib/bitshares/blockchain ...
fullnode         | 797097ms th_a       object_database.cpp:111       open                 ] Done opening object database.
fullnode         | 797098ms th_a       db_management.cpp:59          reindex              ] reindexing blockchain
fullnode         | 797098ms th_a       db_management.cpp:65          reindex              ] Replaying blocks, starting at 7714511...
fullnode         | ----
fullnode         | Will try again when it expires.
fullnode         |    68.3736%   7840000 of 11466413   
[..SNIPPET..]
fullnode         |    99.8569%   11450000 of 11466413   
fullnode         | 1352411ms th_a       db_management.cpp:78          reindex              ] Writing database to disk at block 11456413
fullnode         | 1353082ms th_a       db_management.cpp:80          reindex              ] Done
fullnode         | 1354778ms th_a       db_management.cpp:122         reindex              ] Done reindexing, elapsed time: 557.68077900000002955 sec
fullnode         | 1354781ms th_a       application.cpp:190           reset_p2p_node       ] Adding seed node 104.236.144.84:1777
[..SNIPPET..]
fullnode         | 1355911ms th_a       application.cpp:190           reset_p2p_node       ] Adding seed node 192.121.166.162:1776
fullnode         | 1355912ms th_a       application.cpp:205           reset_p2p_node       ] Configured p2p node to listen on 0.0.0.0:9090
fullnode         | 1355913ms th_a       application.cpp:282           reset_websocket_serv ] Configured websocket rpc to listen on 0.0.0.0:8090
fullnode         | 1355913ms th_a       witness.cpp:120               plugin_startup       ] witness plugin:  plugin_startup() begin
fullnode         | 1355913ms th_a       witness.cpp:125               plugin_startup       ] Launching block production for 1 witnesses.
fullnode         | 1355913ms th_a       witness.cpp:136               plugin_startup       ] witness plugin:  plugin_startup() end
fullnode         | 1355914ms th_a       main.cpp:266                  main                 ] Started BitShares node on a chain with 11466413 blocks.
fullnode         | 1355914ms th_a       main.cpp:267                  main                 ] Chain ID is 4018d7844c78f6a6c41c6a552b898022310fc5dec06da467ee7905a8dad512c8
fullnode         | 1356000ms th_a       witness.cpp:184               block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
fullnode         | 1359107ms th_a       application.cpp:512           handle_block         ] Got block: #11470000 time: 2016-11-17T17:47:12 latency: 46222527107 ms from: xeldal  irreversible: 11469982 (-18)
fullnode         | 1366413ms th_a       application.cpp:512           handle_block         ] Got block: #11480000 time: 2016-11-18T02:09:45 latency: 46192381413 ms from: bue  irreversible: 11479982 (-18)
[..SNIPPET..]

I verified that the fullnode container has access to http://elasticsearch:9200/
Code: [Select]
# docker exec -it fullnode bash
[email protected]:/# curl http://elasticsearch:9200/
{
  "name" : "6GEtENF",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "3cpwxF3aR-CIf4vvnV_obQ",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

No data seems to be inserted into elasticsearch - why?
Code: [Select]
[email protected]:/# curl http://elasticsearch:9200/graphene-*/data/_count?pretty
{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "skipped" : 0,
    "failed" : 0
  }
}

11
Stakeholder Proposals / witness_node feature requests
« on: April 23, 2018, 08:35:58 pm »
I'd like to suggest a few features/improvements for the witness_node.

1) Add support for multiple CPU cores. The witness_node only uses 1 CPU core at the moment, which means that certain tasks like --replay-blockchain take forever.

2) When a witness_node starts up, it should print a preview of the configuration parameters/values it is using. This preview would be very useful when debugging/configuring a node for the first time. Currently the witness_node can be configured using command-line arguments and config.ini; if both are used, a CLI argument may override the corresponding config.ini parameter.
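As an illustration of point 2, a minimal sketch of the merge order with hypothetical option names (defaults, then config.ini, then CLI, with later layers winning) and a startup dump of the effective configuration:

```python
import argparse
import configparser

# Hypothetical option names; the point is the merge order, not these values.
DEFAULTS = {"rpc-endpoint": "127.0.0.1:8090", "p2p-endpoint": "0.0.0.0:9090"}

def effective_config(ini_text: str, argv: list) -> dict:
    cfg = dict(DEFAULTS)                             # layer 1: built-in defaults
    ini = configparser.ConfigParser()
    ini.read_string("[node]\n" + ini_text)           # layer 2: config.ini
    cfg.update(ini["node"])
    parser = argparse.ArgumentParser()
    for key in DEFAULTS:
        parser.add_argument(f"--{key}", dest=key)
    args = parser.parse_args(argv)                   # layer 3: CLI overrides
    cfg.update({k: v for k, v in vars(args).items() if v is not None})
    return cfg

cfg = effective_config("rpc-endpoint = 0.0.0.0:8090\n",
                       ["--rpc-endpoint", "0.0.0.0:18090"])
# The suggested startup preview: print the effective configuration.
for key, value in sorted(cfg.items()):
    print(f"{key} = {value}")
assert cfg["rpc-endpoint"] == "0.0.0.0:18090"        # CLI wins over config.ini
assert cfg["p2p-endpoint"] == "0.0.0.0:9090"         # default preserved
```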

12
Hi,

Any idea what's causing this (or whether it is a problem at all)?

1011395ms th_a       database_api.cpp:263          database_api_impl    ] creating database api 1574593600
1011395ms th_a       websocket_api.cpp:152         on_message           ] e.to_detail_string(): 11 eof_exception: End Of File
stringstream
    {}
    th_a  sstream.cpp:66 readsome

    {"str":""}
    th_a  json.cpp:463 from_string


13
Openledger / Re: Can't trade on Openledger??
« on: April 20, 2018, 03:27:47 pm »
Hi, move BTC or another currency to your account from another exchange such as Coinbase/Bitstamp etc., and then convert e.g. BTC to BTS?

14
Technical Support / witness_node crash on corrupt blockchain
« on: April 12, 2018, 09:06:43 pm »
I somehow managed to corrupt my blockchain, most likely because of an unclean termination of the witness_node process. Is there any way I can recover from the state below? It takes almost 3 days to download the blockchain again, which I'd like to avoid.

Using the parameters --replay-blockchain --force-validate did not help.

Code: [Select]
# ./witness_node --data-dir="/bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/" --replay-blockchain --force-validate

3309402ms th_a       witness.cpp:87                plugin_initialize    ] witness plugin:  plugin_initialize() begin
3309402ms th_a       witness.cpp:97                plugin_initialize    ] Public Key: BTS6MRyAjQq8ud7hVNYcfnVPJqcVpscN5So8BhtHuGYqET5GDW5CV
3309402ms th_a       witness.cpp:115               plugin_initialize    ] witness plugin:  plugin_initialize() end
3309402ms th_a       object_database.cpp:106       open                 ] Opening object database from /bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/blockchain ...
3325700ms th_a       object_database.cpp:111       open                 ] Done opening object database.
3325701ms th_a       db_management.cpp:59          reindex              ] reindexing blockchain
3325701ms th_a       db_management.cpp:65          reindex              ] Replaying blocks, starting at 1...
3325701ms th_a       db_management.cpp:178         open                 ] 10 assert_exception: Assert Exception
(skip & skip_merkle_check) || next_block.transaction_merkle_root == next_block.calculate_merkle_root():
    {"next_block.transaction_merkle_root":"0000000021cff114000000004e02000000000000","calc":"0000000000000000000000000000000000000000","next_block":{"previous":"000000007dbcb61797750100b338ddb3f26fa023","timestamp":"1970-01-01T00:00:00"$
    th_a  db_block.cpp:493 _apply_block

    {"next_block.block_num()":1}
    th_a  db_block.cpp:545 _apply_block

    {"data_dir":"/bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/blockchain"}
    th_a  db_management.cpp:123 reindex
3325702ms th_a       db_management.cpp:178         open                 ] data_dir: /bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/blockchain
3325702ms th_a       application.cpp:403           startup              ] Caught exception 10 assert_exception: Assert Exception
(skip & skip_merkle_check) || next_block.transaction_merkle_root == next_block.calculate_merkle_root():
    {"next_block.transaction_merkle_root":"0000000021cff114000000004e02000000000000","calc":"0000000000000000000000000000000000000000","next_block":{"previous":"000000007dbcb61797750100b338ddb3f26fa023","timestamp":"1970-01-01T00:00:00"$
    th_a  db_block.cpp:493 _apply_block

    {"next_block.block_num()":1}
    th_a  db_block.cpp:545 _apply_block

    {"data_dir":"/bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/blockchain"}
    th_a  db_management.cpp:123 reindex
rethrow
    {"data_dir":"/bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/blockchain"}
    th_a  db_management.cpp:178 open in open(), you might want to force a replay
3325703ms th_a       application.cpp:454           startup              ] 10 assert_exception: Assert Exception
(skip & skip_merkle_check) || next_block.transaction_merkle_root == next_block.calculate_merkle_root():
    {"next_block.transaction_merkle_root":"0000000021cff114000000004e02000000000000","calc":"0000000000000000000000000000000000000000","next_block":{"previous":"000000007dbcb61797750100b338ddb3f26fa023","timestamp":"1970-01-01T00:00:00"$
    th_a  db_block.cpp:493 _apply_block

    {"next_block.block_num()":1}
    th_a  db_block.cpp:545 _apply_block

    {"data_dir":"/bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/blockchain"}
    th_a  db_management.cpp:123 reindex
rethrow
    {"data_dir":"/bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/blockchain"}
    th_a  db_management.cpp:178 open
3325703ms th_a       application.cpp:1042          startup              ] 10 assert_exception: Assert Exception
(skip & skip_merkle_check) || next_block.transaction_merkle_root == next_block.calculate_merkle_root():
    {"next_block.transaction_merkle_root":"0000000021cff114000000004e02000000000000","calc":"0000000000000000000000000000000000000000","next_block":{"previous":"000000007dbcb61797750100b338ddb3f26fa023","timestamp":"1970-01-01T00:00:00"$
    th_a  db_block.cpp:493 _apply_block

    {"next_block.block_num()":1}
    th_a  db_block.cpp:545 _apply_block

    {"data_dir":"/bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/blockchain"}
    th_a  db_management.cpp:123 reindex
rethrow
    {"data_dir":"/bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/blockchain"}
    th_a  db_management.cpp:178 open
rethrow
    {}
    th_a  application.cpp:454 startup
3325704ms th_a       main.cpp:282                  main                 ] Exiting with error:
10 assert_exception: Assert Exception
(skip & skip_merkle_check) || next_block.transaction_merkle_root == next_block.calculate_merkle_root():
    {"next_block.transaction_merkle_root":"0000000021cff114000000004e02000000000000","calc":"0000000000000000000000000000000000000000","next_block":{"previous":"000000007dbcb61797750100b338ddb3f26fa023","timestamp":"1970-01-01T00:00:00"$
    th_a  db_block.cpp:493 _apply_block

    {"next_block.block_num()":1}
    th_a  db_block.cpp:545 _apply_block

    {"data_dir":"/bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/blockchain"}
    th_a  db_management.cpp:123 reindex
rethrow
    {"data_dir":"/bitshares-core/bitshares-core/programs/witness_node/witness_node_data_dir/blockchain"}
    th_a  db_management.cpp:178 open
rethrow
    {}
    th_a  application.cpp:454 startup
3325705ms th_a       db_management.cpp:194         close                ] Rewinding from 0 to 0

15
Technical Support / Running a cluster of full nodes
« on: April 05, 2018, 10:33:40 pm »
Assume you have multiple full nodes / witness nodes (e.g. 10 separate servers, spread across the world). Can those 10 witness_node processes share the same --data-dir ("witness_node_data_dir"), or will that cause some sort of file lock / race condition issue?

The reason for asking is that every time you set up a new witness node, it takes many hours before the blockchain has been retrieved and stored in the witness_node_data_dir.

If the different processes cannot share the same data-dir, is it then possible to copy a fully synced data-dir to another "empty" witness node?
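I'd expect the node to guard its data directory with an OS-level exclusive lock, so a second process opening the same dir is refused rather than corrupting state. A stdlib sketch of that pattern on POSIX systems (the "node.lock" filename is made up; witness_node may not do exactly this):

```python
import fcntl
import os
import tempfile

def acquire_data_dir(path: str):
    # Take an exclusive, non-blocking flock on a lock file inside the data
    # dir; raises BlockingIOError (an OSError) if another holder exists.
    fd = open(os.path.join(path, "node.lock"), "w")
    fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    return fd

data_dir = tempfile.mkdtemp()
first = acquire_data_dir(data_dir)       # node #1 owns the data-dir
try:
    acquire_data_dir(data_dir)           # node #2 tries the same dir
except OSError:
    print("data-dir already locked by another node")
first.close()                            # lock released on close; copying the
                                         # dir while the node is STOPPED is safe
```

This is also why copying a data-dir from a cleanly stopped node to seed a new server works: with no process holding the lock (and no half-written state), the copy is just files.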
