Author Topic: Unable to make elasticsearch plugin work


Offline runestone

Quick update on the progress. I now have two witness nodes connecting to the same ES.
  • witness_node #1 was in sync and connected to ES
  • witness_node #2 was then started from scratch (Block 0), being connected to the same ES
No additional errors were seen on the witness_nodes or ES after attaching witness_node #2.
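One rough way to check that witness_node #2 is not double-inserting: compare the document count in the graphene-* indices (the same endpoint used further down in this thread) before and after attaching it. Counts keep growing as new blocks arrive, so treat this as a sanity check rather than an exact comparison.
Code: [Select]
# Run once with only witness_node #1 attached, note the count,
# then run again after witness_node #2 has caught up.
# If the second node merely overwrites existing documents,
# the count should not jump to roughly double.
curl -s "http://elasticsearch:9200/graphene-*/data/_count?pretty"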

The only error seen is this:
Code: [Select]
fullnode         | 2582979ms th_a       database_api.cpp:282          ~database_api_impl   ] freeing database api 2748370832
fullnode         | 2583114ms th_a       application.cpp:512           handle_block         ] Got block: #26867021 time: 2018-05-10T21:43:03 latency: 114 ms from: fox  irreversible: 26867003 (-18)
fullnode         | 2583119ms th_a       elasticsearch.cpp:66          SendBulk             ] error: Unknown error
fullnode         | 2583119ms th_a       es_objects.cpp:99             updateDatabase       ] Error sending data to database
fullnode         | 2583165ms th_a       database_api.cpp:263          database_api_impl    ] creating database api 2336441072
fullnode         | 2583166ms th_a       database_api.cpp:263          database_api_impl    ] creating database api 2622826080
The error was, however, already present before connecting witness_node #2 to ES. I guess this is normal behaviour?
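For anyone chasing that "SendBulk ... Unknown error" line, two quick things to look at (assuming the container names from the compose file further down) are the cluster health at the time of the failure and whatever Elasticsearch itself logged around it:
Code: [Select]
# Is the cluster healthy (green/yellow) when SendBulk fails?
curl -s "http://elasticsearch:9200/_cluster/health?pretty"
# Did Elasticsearch log anything, e.g. rejected or failed bulk requests?
docker logs elasticsearch 2>&1 | grep -iE "error|reject" | tail -n 20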
Br, Rune
~ Please vote on blockbasis-witness
~ https://www.blockbasis.com

Offline runestone

Sure, I'll try and let you know how it goes. It will probably take a few days - it's a bitch to test on a full blockchain, waiting up to 7 hours every time a witness_node has to boot up using ES (hint: https://bitsharestalk.org/index.php?topic=26347.msg318083#msg318083 <-- the witness_node process only utilizes 1 CPU core, making it very slow..)
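If you want to see the bottleneck for yourself while the node replays, watching the container's resource usage shows it pegging roughly one core:
Code: [Select]
# Live CPU/memory usage of the replaying node; ~100% CPU here means one full core.
docker stats fullnode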
Br, Rune
~ Please vote on blockbasis-witness
~ https://www.blockbasis.com

Offline abit


Quote from: runestone
Next question:
Is it possible to run multiple witness_nodes that share the same ElasticSearch, or will it cause conflicts such as double inserts / race conditions or anything else like that? Basically, I'd like to host multiple witness_nodes across the globe to ensure high availability and low latency.
I guess data in ES will be overwritten. Can you try?
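The overwrite behaviour is easy to demonstrate against plain Elasticsearch: writing a document whose id already exists replaces it and bumps _version instead of creating a duplicate. A throwaway illustration (the test-overwrite index name is made up, and this assumes the plugin derives deterministic document ids, so two nodes writing the same operation hit the same id):
Code: [Select]
# First write creates the document ("result": "created", "_version": 1) ...
curl -s -XPUT "http://elasticsearch:9200/test-overwrite/data/1" \
     -H 'Content-Type: application/json' -d '{"block_num": 1}'
# ... the second write with the same id overwrites it ("result": "updated", "_version": 2).
curl -s -XPUT "http://elasticsearch:9200/test-overwrite/data/1" \
     -H 'Content-Type: application/json' -d '{"block_num": 1}'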
BitShares committee member: abit
BitShares witness: in.abit

Offline runestone

Turns out it was a stupid mistake (hard to spot), because the error messages are either not there or they do not repeat in the console output. The problem was the double quotes here:

Code: [Select]
      - BITSHARESD_ES_NODE_URL="http://elasticsearch:9200/"
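With the list form of environment: in docker-compose, the value is passed to the container verbatim, so those quote characters become part of the URL itself and the plugin ends up with an invalid endpoint. You can see exactly what the node receives (using the fullnode container name from the file below):
Code: [Select]
# The quotes are part of the value, not shell syntax:
docker exec fullnode env | grep BITSHARESD_ES_NODE_URL
# -> BITSHARESD_ES_NODE_URL="http://elasticsearch:9200/"   (quotes included)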

Here is a working docker-compose.yml
Code: [Select]
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    environment:
      - ELASTIC_PASSWORD=secret
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      stack:
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  fullnode:
    image: bitshares/bitshares-core:latest
    container_name: fullnode
    environment:
      - BITSHARESD_PLUGINS=witness elasticsearch market_history
      - BITSHARESD_ES_NODE_URL=http://elasticsearch:9200/
      - BITSHARESD_RPC_ENDPOINT=0.0.0.0:8090
      - BITSHARESD_P2P_ENDPOINT=0.0.0.0:9090
      - BITSHARESD_WITNESS_ID="1.6.122"
      - BITSHARESD_PRIVATE_KEY=["BTS...","5..."]
    networks:
      stack:
    ports:
      - 9090:9090
      - 8090:8090
    volumes:
      - fullnode:/var/lib/bitshares
    depends_on:
      - elasticsearch

volumes:
  fullnode:
  esdata:

networks:
  stack:
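For anyone following along: once this stack is up and the node has processed some blocks, a quick sanity check from the host (port 9200 is published above) is to list the graphene-* indices and watch the document count grow:
Code: [Select]
# Indices created by the elasticsearch plugin, with doc counts and sizes:
curl -s "http://localhost:9200/_cat/indices/graphene-*?v"
# Total number of indexed documents so far:
curl -s "http://localhost:9200/graphene-*/data/_count?pretty"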


Next question:
Is it possible to run multiple witness_nodes that share the same ElasticSearch, or will it cause conflicts such as double inserts / race conditions or anything else like that? Basically, I'd like to host multiple witness_nodes across the globe to ensure high availability and low latency.
Br, Rune
~ Please vote on blockbasis-witness
~ https://www.blockbasis.com

Offline xeroc

Can you look into the witness_node/bitshares container and see if it tells you that the elasticsearch plugin has properly started?
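One quick way to do that from the host (assuming the fullnode container name from the compose file) is to grep the node's log; any elasticsearch.cpp / es_objects.cpp lines, like the SendBulk errors quoted earlier in this thread, would show up here, and in the startup log below there are none:
Code: [Select]
docker logs fullnode 2>&1 | grep -i elasticsearch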

Offline runestone

Hi, I'm trying to make a simple docker-compose.yml that will set up bitshares and elasticsearch. However, I cannot get the elasticsearch plugin to "activate".

docker-compose.yml
Code: [Select]
version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    environment:
      - ELASTIC_PASSWORD=secret
      - cluster.name=docker-cluster
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    networks:
      stack:
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - esdata:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  fullnode:
    image: bitshares/bitshares-core:latest
    container_name: fullnode
    environment:
      - BITSHARESD_PLUGINS=witness elasticsearch market_history
      - BITSHARESD_ES_NODE_URL="http://elasticsearch:9200/"
      - BITSHARESD_RPC_ENDPOINT=0.0.0.0:8090
      - BITSHARESD_P2P_ENDPOINT=0.0.0.0:9090
      - BITSHARESD_WITNESS_ID="1.6.122"
      - BITSHARESD_PRIVATE_KEY=["BTS...","5..."]
    networks:
      stack:
    ports:
      - 9090:9090
      - 8090:8090
    volumes:
      - fullnode:/var/lib/bitshares
    depends_on:
      - elasticsearch

volumes:
  fullnode:
  esdata:

networks:
  stack:

Running the docker-compose.yml outputs the following; the elasticsearch plugin does not seem to load, and there are no error messages:
Code: [Select]
root@test:/tmp# docker-compose up
Starting elasticsearch ... done
Recreating fullnode    ... done
Attaching to elasticsearch, fullnode
elasticsearch    | Setting bootstrap.password already exists. Overwrite? [y/N]Did not understand answer 'kibana'
elasticsearch    | Setting bootstrap.password already exists. Overwrite? [y/N]Exiting without modifying keystore.
elasticsearch    | [2018-05-06T17:13:08,156][INFO ][o.e.n.Node               ] [] initializing ...
elasticsearch    | [2018-05-06T17:13:08,329][INFO ][o.e.e.NodeEnvironment    ] [6GEtENF] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/sda1)]], net usable_space [75gb], net total_space [246gb], types [ext4]
elasticsearch    | [2018-05-06T17:13:08,330][INFO ][o.e.e.NodeEnvironment    ] [6GEtENF] heap size [495.3mb], compressed ordinary object pointers [true]
elasticsearch    | [2018-05-06T17:13:08,379][INFO ][o.e.n.Node               ] node name [6GEtENF] derived from node ID [6GEtENFeTleLQatLZF83kQ]; set [node.name] to override
elasticsearch    | [2018-05-06T17:13:08,380][INFO ][o.e.n.Node               ] version[6.2.4], pid[1], build[ccec39f/2018-04-12T20:37:28.497551Z], OS[Linux/4.4.0-121-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_161/25.161-b14]
elasticsearch    | [2018-05-06T17:13:08,380][INFO ][o.e.n.Node               ] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch.By2wKKIi, -XX:+HeapDumpOnOutOfMemoryError, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Des.cgroups.hierarchy.override=/, -Xms512m, -Xmx512m, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/usr/share/elasticsearch/config]
elasticsearch    | [2018-05-06T17:13:15,404][INFO ][o.e.p.PluginsService     ] [6GEtENF] loaded module [aggs-matrix-stats]
[..SNIPPET..]
elasticsearch    | [2018-05-06T17:13:15,420][INFO ][o.e.p.PluginsService     ] [6GEtENF] loaded plugin [x-pack-watcher]
elasticsearch    | [2018-05-06T17:13:27,918][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/145] [Main.cc@128] controller (64 bit): Version 6.2.4 (Build 524e7fe231abc1) Copyright (c) 2018 Elasticsearch BV
elasticsearch    | [2018-05-06T17:13:30,655][INFO ][o.e.d.DiscoveryModule    ] [6GEtENF] using discovery type [single-node]
elasticsearch    | [2018-05-06T17:13:32,905][INFO ][o.e.n.Node               ] initialized
elasticsearch    | [2018-05-06T17:13:32,914][INFO ][o.e.n.Node               ] [6GEtENF] starting ...
elasticsearch    | [2018-05-06T17:13:33,340][INFO ][o.e.t.TransportService   ] [6GEtENF] publish_address {172.18.0.2:9300}, bound_addresses {0.0.0.0:9300}
elasticsearch    | [2018-05-06T17:13:33,474][WARN ][o.e.b.BootstrapChecks    ] [6GEtENF] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
elasticsearch    | [2018-05-06T17:13:33,558][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [6GEtENF] publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch    | [2018-05-06T17:13:33,563][INFO ][o.e.n.Node               ] [6GEtENF] started
elasticsearch    | [2018-05-06T17:13:35,247][INFO ][o.e.l.LicenseService     ] [6GEtENF] license [ef462299-f20d-45f0-84c6-c61a92454ba2] mode [basic] - valid
elasticsearch    | [2018-05-06T17:13:35,277][INFO ][o.e.g.GatewayService     ] [6GEtENF] recovered [5] indices into cluster_state
elasticsearch    | [2018-05-06T17:13:36,667][INFO ][o.e.c.r.a.AllocationService] [6GEtENF] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-es-6-2018.05.04][0]] ...]).
fullnode         | 781920ms th_a       witness.cpp:87                plugin_initialize    ] witness plugin:  plugin_initialize() begin
fullnode         | 781920ms th_a       witness.cpp:97                plugin_initialize    ] Public Key: BTS8PhzeSSCoP83pgqXrMYCFdVrG4rZfsumjfTfQ52kNpyAHX5sKx
fullnode         | 781921ms th_a       witness.cpp:115               plugin_initialize    ] witness plugin:  plugin_initialize() end
fullnode         | 781921ms th_a       object_database.cpp:106       open                 ] Opening object database from /var/lib/bitshares/blockchain ...
fullnode         | 797097ms th_a       object_database.cpp:111       open                 ] Done opening object database.
fullnode         | 797098ms th_a       db_management.cpp:59          reindex              ] reindexing blockchain
fullnode         | 797098ms th_a       db_management.cpp:65          reindex              ] Replaying blocks, starting at 7714511...
fullnode         | ----
fullnode         | Will try again when it expires.
fullnode         |    68.3736%   7840000 of 11466413   
[..SNIPPET..]
fullnode         |    99.8569%   11450000 of 11466413   
fullnode         | 1352411ms th_a       db_management.cpp:78          reindex              ] Writing database to disk at block 11456413
fullnode         | 1353082ms th_a       db_management.cpp:80          reindex              ] Done
fullnode         | 1354778ms th_a       db_management.cpp:122         reindex              ] Done reindexing, elapsed time: 557.68077900000002955 sec
fullnode         | 1354781ms th_a       application.cpp:190           reset_p2p_node       ] Adding seed node 104.236.144.84:1777
[..SNIPPET..]
fullnode         | 1355911ms th_a       application.cpp:190           reset_p2p_node       ] Adding seed node 192.121.166.162:1776
fullnode         | 1355912ms th_a       application.cpp:205           reset_p2p_node       ] Configured p2p node to listen on 0.0.0.0:9090
fullnode         | 1355913ms th_a       application.cpp:282           reset_websocket_serv ] Configured websocket rpc to listen on 0.0.0.0:8090
fullnode         | 1355913ms th_a       witness.cpp:120               plugin_startup       ] witness plugin:  plugin_startup() begin
fullnode         | 1355913ms th_a       witness.cpp:125               plugin_startup       ] Launching block production for 1 witnesses.
fullnode         | 1355913ms th_a       witness.cpp:136               plugin_startup       ] witness plugin:  plugin_startup() end
fullnode         | 1355914ms th_a       main.cpp:266                  main                 ] Started BitShares node on a chain with 11466413 blocks.
fullnode         | 1355914ms th_a       main.cpp:267                  main                 ] Chain ID is 4018d7844c78f6a6c41c6a552b898022310fc5dec06da467ee7905a8dad512c8
fullnode         | 1356000ms th_a       witness.cpp:184               block_production_loo ] Not producing block because production is disabled until we receive a recent block (see: --enable-stale-production)
fullnode         | 1359107ms th_a       application.cpp:512           handle_block         ] Got block: #11470000 time: 2016-11-17T17:47:12 latency: 46222527107 ms from: xeldal  irreversible: 11469982 (-18)
fullnode         | 1366413ms th_a       application.cpp:512           handle_block         ] Got block: #11480000 time: 2016-11-18T02:09:45 latency: 46192381413 ms from: bue  irreversible: 11479982 (-18)
[..SNIPPET..]

I verified that the fullnode container has access to http://elasticsearch:9200/
Code: [Select]
# docker exec -it fullnode bash
root@cf5ecf3e301f:/# curl http://elasticsearch:9200/
{
  "name" : "6GEtENF",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "3cpwxF3aR-CIf4vvnV_obQ",
  "version" : {
    "number" : "6.2.4",
    "build_hash" : "ccec39f",
    "build_date" : "2018-04-12T20:37:28.497551Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

No data seems to be inserted into elasticsearch - why?
Code: [Select]
root@cf5ecf3e301f:/# curl http://elasticsearch:9200/graphene-*/data/_count?pretty
{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "skipped" : 0,
    "failed" : 0
  }
}
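Worth noting: "total" : 0 in the _shards section of that reply means no index matched graphene-* at all, i.e. the plugin never created one, rather than the indices merely being empty. Listing all indices makes that visible:
Code: [Select]
# If the plugin were writing, graphene-* indices would appear in this list:
curl -s "http://elasticsearch:9200/_cat/indices?v"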
Br, Rune
~ Please vote on blockbasis-witness
~ https://www.blockbasis.com