docker: TheHive fails to connect to elasticsearch (NoNodeAvailableException) #854

Closed
rolinh opened this issue Jan 22, 2019 · 2 comments
rolinh commented Jan 22, 2019

Request Type

Bug

Work Environment

| Question | Answer |
|----------|--------|
| OS version (server) | N/A |
| OS version (client) | N/A |
| TheHive version / git hash | latest |
| Package Type | Docker |
| Browser type & version | N/A |

Problem Description

Using the provided docker-compose.yml file, I am unable to bring TheHive up.

Checking the logs, I see several different errors. In the thehive container, I get this error when trying to access TheHive:

thehive_1_f678a3a47396 | [info] o.e.ErrorHandler - GET /api/user/current returned 500
thehive_1_f678a3a47396 | org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{B8vl-6uRR4Kd-NG1e_RzAg}{172.19.0.2}{172.19.0.2:9300}]
thehive_1_f678a3a47396 |        at org.elasticsearch.client.transport.TransportClientNodesService.ensureNodesAreAvailable(TransportClientNodesService.java:347)
thehive_1_f678a3a47396 |        at org.elasticsearch.client.transport.TransportClientNodesService.execute(TransportClientNodesService.java:245)
thehive_1_f678a3a47396 |        at org.elasticsearch.client.transport.TransportProxyClient.execute(TransportProxyClient.java:59)
thehive_1_f678a3a47396 |        at org.elasticsearch.client.transport.TransportClient.doExecute(TransportClient.java:366)
thehive_1_f678a3a47396 |        at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:408)
thehive_1_f678a3a47396 |        at org.elasticsearch.action.ActionRequestBuilder.execute(ActionRequestBuilder.java:80)
thehive_1_f678a3a47396 |        at com.sksamuel.elastic4s.search.SearchImplicits$SearchDefinitionExecutable$.$anonfun$apply$1(SearchImplicits.scala:27)
thehive_1_f678a3a47396 |        at com.sksamuel.elastic4s.search.SearchImplicits$SearchDefinitionExecutable$.$anonfun$apply$1$adapted(SearchImplicits.scala:27)
thehive_1_f678a3a47396 |        at com.sksamuel.elastic4s.Executable.injectFutureAndMap(Executable.scala:21)
thehive_1_f678a3a47396 |        at com.sksamuel.elastic4s.Executable.injectFutureAndMap$(Executable.scala:19)

Then looking at the elasticsearch container logs, I can see that the process died:

Attaching to thehive-project_elasticsearch_1_5231271a1879
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:14,442][INFO ][o.e.n.Node               ] [] initializing ...
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:14,583][INFO ][o.e.e.NodeEnvironment    ] [h_hfa6T] using [1] data paths, mounts [[/ (overlay)]], net usable_space [25.7gb], net total_space [55.5gb], spins? [possibly], types [overlay]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:14,585][INFO ][o.e.e.NodeEnvironment    ] [h_hfa6T] heap size [1.9gb], compressed ordinary object pointers [true]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:14,592][INFO ][o.e.n.Node               ] node name [h_hfa6T] derived from node ID [h_hfa6T4T8SYsJar3P3R6g]; set [node.name] to override
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:14,593][INFO ][o.e.n.Node               ] version[5.6.0], pid[1], build[781a835/2017-09-07T03:09:58.087Z], OS[Linux/4.4.0-141-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_141/25.141-b16]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:14,594][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.cgroups.hierarchy.override=/, -Des.path.home=/usr/share/elasticsearch]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,475][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded module [aggs-matrix-stats]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,475][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded module [ingest-common]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,475][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded module [lang-expression]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,475][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded module [lang-groovy]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,475][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded module [lang-mustache]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,475][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded module [lang-painless]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,475][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded module [parent-join]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,476][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded module [percolator]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,476][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded module [reindex]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,476][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded module [transport-netty3]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,476][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded module [transport-netty4]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,476][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded plugin [ingest-geoip]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,476][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded plugin [ingest-user-agent]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:17,476][INFO ][o.e.p.PluginsService     ] [h_hfa6T] loaded plugin [x-pack]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:18,663][WARN ][o.e.d.c.s.Settings       ] [script.inline] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:22,273][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/55] [Main.cc@128] controller (64 bit): Version 5.6.0 (Build 93aea61f57f7d8) Copyright (c) 2017 Elasticsearch BV
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:22,314][INFO ][o.e.d.DiscoveryModule    ] [h_hfa6T] using discovery type [zen]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:23,457][INFO ][o.e.n.Node               ] initialized
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:23,458][INFO ][o.e.n.Node               ] [h_hfa6T] starting ...
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:23,861][INFO ][o.e.t.TransportService   ] [h_hfa6T] publish_address {172.19.0.2:9300}, bound_addresses {0.0.0.0:9300}
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:23,875][INFO ][o.e.b.BootstrapChecks    ] [h_hfa6T] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
elasticsearch_1_5231271a1879 | ERROR: [1] bootstrap checks failed
elasticsearch_1_5231271a1879 | [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:23,897][INFO ][o.e.n.Node               ] [h_hfa6T] stopping ...
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:24,210][INFO ][o.e.n.Node               ] [h_hfa6T] stopped
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:24,210][INFO ][o.e.n.Node               ] [h_hfa6T] closing ...
elasticsearch_1_5231271a1879 | [2019-01-22T15:08:24,244][INFO ][o.e.n.Node               ] [h_hfa6T] closed
thehive-project_elasticsearch_1_5231271a1879 exited with code 78

Steps to Reproduce

  1. $ wget https://raw.githubusercontent.com/TheHive-Project/TheHive/master/docker/thehive/docker-compose.yml
  2. $ docker-compose up
  3. Check the logs (docker-compose logs -f thehive, docker-compose logs -f elasticsearch, docker-compose logs -f cortex)

Possible Solutions

I got it working by adding `- discovery.type=single-node` to the environment section of the elasticsearch service in the docker-compose.yml file.
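For reference, the relevant fragment of the compose file would look something like this (a sketch; the service name, image tag, and the other environment entries are assumed from the stock docker-compose.yml and may differ):

```yaml
# Hypothetical excerpt of docker-compose.yml; only discovery.type is the
# actual change described above, the rest is assumed context.
elasticsearch:
  image: elasticsearch:5
  environment:
    - http.host=0.0.0.0
    - discovery.type=single-node   # added: run ES as a standalone node
```

With `discovery.type=single-node`, Elasticsearch skips Zen discovery and elects itself master immediately, which also relaxes the production bootstrap checks that otherwise abort startup.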

@tonygumbrell

You need to fix this error first, as it crashes Elasticsearch:

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Run `sysctl -w vm.max_map_count=262144`, or add the setting to sysctl.conf for persistence.
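Spelled out, the two variants look like this (a sketch; `/etc/sysctl.conf` is the usual Linux location, and both commands require root on the Docker host, not inside the container):

```shell
# One-off: raise the limit immediately (lost on reboot)
sysctl -w vm.max_map_count=262144

# Persistent: record the setting and reload it
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
```

Note that the kernel parameter is `vm.max_map_count`, with underscores after `max`.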


rolinh commented Jan 24, 2019

@tonygumbrell You are right that this warning has to be addressed, but it is not what brings Elasticsearch down in this case. As mentioned, once Elasticsearch is configured with discovery.type=single-node, it no longer crashes (even though it still complains about vm.max_map_count):

Attaching to thehive-project_elasticsearch_1_ca7661a4bd29
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:25,868][INFO ][o.e.n.Node               ] [] initializing ...
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:26,054][INFO ][o.e.e.NodeEnvironment    ] [TG4GIiA] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/srtempl--vg-root)]], net usable_space [24.5gb], net total_space [55.5gb], spins? [possibly], types [ext4]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:26,057][INFO ][o.e.e.NodeEnvironment    ] [TG4GIiA] heap size [1.9gb], compressed ordinary object pointers [true]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:26,058][INFO ][o.e.n.Node               ] node name [TG4GIiA] derived from node ID [TG4GIiA3TNOsLsveTONJ3A]; set [node.name] to override
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:26,061][INFO ][o.e.n.Node               ] version[5.6.14], pid[1], build[f310fe9/2018-12-05T21:20:16.416Z], OS[Linux/4.4.0-141-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_191/25.191-b12]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:26,061][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,479][INFO ][o.e.p.PluginsService     ] [TG4GIiA] loaded module [aggs-matrix-stats]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,479][INFO ][o.e.p.PluginsService     ] [TG4GIiA] loaded module [ingest-common]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,479][INFO ][o.e.p.PluginsService     ] [TG4GIiA] loaded module [lang-expression]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,479][INFO ][o.e.p.PluginsService     ] [TG4GIiA] loaded module [lang-groovy]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,479][INFO ][o.e.p.PluginsService     ] [TG4GIiA] loaded module [lang-mustache]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,479][INFO ][o.e.p.PluginsService     ] [TG4GIiA] loaded module [lang-painless]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,480][INFO ][o.e.p.PluginsService     ] [TG4GIiA] loaded module [parent-join]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,480][INFO ][o.e.p.PluginsService     ] [TG4GIiA] loaded module [percolator]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,480][INFO ][o.e.p.PluginsService     ] [TG4GIiA] loaded module [reindex]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,480][INFO ][o.e.p.PluginsService     ] [TG4GIiA] loaded module [transport-netty3]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,480][INFO ][o.e.p.PluginsService     ] [TG4GIiA] loaded module [transport-netty4]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,480][INFO ][o.e.p.PluginsService     ] [TG4GIiA] no plugins loaded
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:27,885][WARN ][o.e.d.c.s.Settings       ] [script.inline] setting was deprecated in Elasticsearch and will be removed in a future release! See the breaking changes documentation for the next major version.
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:30,801][INFO ][o.e.d.DiscoveryModule    ] [TG4GIiA] using discovery type [single-node]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:31,920][INFO ][o.e.n.Node               ] initialized
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:31,921][INFO ][o.e.n.Node               ] [TG4GIiA] starting ...
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:32,225][INFO ][o.e.t.TransportService   ] [TG4GIiA] publish_address {172.18.0.2:9300}, bound_addresses {0.0.0.0:9300}
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:32,262][WARN ][o.e.b.BootstrapChecks    ] [TG4GIiA] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:32,307][INFO ][o.e.c.s.ClusterService   ] [TG4GIiA] new_master {TG4GIiA}{TG4GIiA3TNOsLsveTONJ3A}{RYc1Ifl_S0ix9lUnMAMFWQ}{172.18.0.2}{172.18.0.2:9300}, reason: single-node-start-initial-join[{TG4GIiA}{TG4GIiA3TNOsLsveTONJ3A}{RYc1Ifl_S0ix9lUnMAMFWQ}{172.18.0.2}{172.18.0.2:9300}]
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:32,381][INFO ][o.e.h.n.Netty4HttpServerTransport] [TG4GIiA] publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:32,382][INFO ][o.e.n.Node               ] [TG4GIiA] started
elasticsearch_1_ca7661a4bd29 | [2019-01-24T12:46:32,494][INFO ][o.e.g.GatewayService     ] [TG4GIiA] recovered [0] indices into cluster_state

@To-om To-om added this to the 3.4.1 milestone Apr 7, 2020
@To-om To-om closed this as completed Apr 7, 2020