Recreated docker container to change hostname, got multiple unreachable nodes

I tried to reproduce the issue following your steps.

What I did:

  • created a docker container using the following command (see our docs):

```sh
docker run -d --name=netdata \
  -p 19998:19999 \
  -v netdatalib:/var/lib/netdata \
  -v netdatacache:/var/cache/netdata \
  -v /etc/passwd:/host/etc/passwd:ro \
  -v /etc/group:/host/etc/group:ro \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  -v /etc/os-release:/host/etc/os-release:ro \
  --restart unless-stopped \
  --cap-add SYS_PTRACE \
  --security-opt apparmor=unconfined \
  -e NETDATA_CLAIM_TOKEN=XXXX \
  -e NETDATA_CLAIM_URL="https://app.netdata.cloud" \
  -e NETDATA_CLAIM_ROOMS=XXXX \
  my_netdata
```

I got my instance up and claimed.
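
As a sanity check (a sketch; the claimed_id path is an assumption based on the stock Netdata image layout), one can confirm the container is running and that the claim state landed in the persisted volume:

```sh
# Check the container is up and the dashboard port is mapped.
docker ps --filter name=netdata

# The claim state should live in the netdatalib volume; claimed_id holds the
# node's Cloud identity (path assumed from the stock image layout).
docker exec netdata cat /var/lib/netdata/cloud.d/claimed_id

# Inspect where the named volume is stored on the host.
docker volume inspect netdatalib
```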

  • in order to change its hostname, I stopped and removed the old instance and created a new one (:exclamation: pay attention to the -v options in my previous command: they persist the database and the claiming state on the host machine, so the new instance picks them up and Cloud sees it as the same node). The stop/remove step is sketched below.
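
A minimal sketch of that step; named volumes are not removed along with the container, which is what makes the swap transparent to Cloud:

```sh
# Stop and remove the old container; the named volumes survive.
docker stop netdata
docker rm netdata

# Both volumes are still there, ready for the replacement container.
docker volume ls --filter name=netdata
```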

I executed the same docker run ... command, but added --hostname=my_docker_netdata0 (see our docs); the full command is shown below.
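
Concretely, the second run was identical to the first, plus the --hostname flag:

```sh
docker run -d --name=netdata \
  --hostname=my_docker_netdata0 \
  -p 19998:19999 \
  -v netdatalib:/var/lib/netdata \
  -v netdatacache:/var/cache/netdata \
  -v /etc/passwd:/host/etc/passwd:ro \
  -v /etc/group:/host/etc/group:ro \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  -v /etc/os-release:/host/etc/os-release:ro \
  --restart unless-stopped \
  --cap-add SYS_PTRACE \
  --security-opt apparmor=unconfined \
  -e NETDATA_CLAIM_TOKEN=XXXX \
  -e NETDATA_CLAIM_URL="https://app.netdata.cloud" \
  -e NETDATA_CLAIM_ROOMS=XXXX \
  my_netdata
```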

I got my instance up and I see it in Cloud. Note the Node Name: it changed to the --hostname value. No new unreachable nodes.
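
To double-check on the Docker side that the new hostname took effect:

```sh
# Read the hostname straight from the container's config.
docker inspect --format '{{ .Config.Hostname }}' netdata
# -> my_docker_netdata0
```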


Claiming using env variables doesn't work at the moment, so I created a docker image from the PR that fixes it; a rough build sketch is below.
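
For reference, here is roughly how an image can be built from a PR branch. This is a generic sketch, not the exact steps I used: the PR number is left as a placeholder, and the Dockerfile location is an assumption (it has lived under packaging/docker/ in the netdata repo):

```sh
# Fetch the PR branch from GitHub (replace <PR_NUMBER> with the actual PR).
git clone https://github.com/netdata/netdata.git
cd netdata
git fetch origin pull/<PR_NUMBER>/head:claim-env-fix
git checkout claim-env-fix

# Build a local image from it; adjust -f if the Dockerfile lives elsewhere.
docker build -t my_netdata -f packaging/docker/Dockerfile .
```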

But changing the hostname works; Cloud handles it.