Duplicate Node on Cloud

Problem/Question

I installed the Netdata Agent on a Windows server, but on the Cloud side it shows the correct live node plus a duplicate, unseen copy with no data.

Relevant docs you followed/actions you took to solve the issue

None

Environment/Browser/Agent’s version etc


What I expected to happen

After deployment it should show only one node.

Node removal is disabled as well. Is there any way I could fix it?

Hi @walthack,

After investigation I found that the node duplication is indeed a bug we recently introduced on Cloud, and we will address it ASAP.

Nevertheless, you should be able to remove the duplicate node without any issue.
You can remove it from the Nodes section in your Space settings.

I hope this helps you out.

Is this still an active bug? I'm seeing this and it's getting out of control: duplicates of duplicates. (I deleted them a week ago; they're back now.)

Hi @k4ze,
The bug mentioned above has already been addressed.
Could you explain your setup in more detail, or the steps you are taking that end up producing duplicated nodes on the Cloud?

Hi,

So I have about 6-7 nodes, mostly Ubuntu Linux, where I'm running Netdata Agents. The symptom is exactly as in the original post: the nodes show up as offline duplicates. I deleted them last week and they are back again this week.

Example of a live node (this node has been configured for about 6-7 months; the duplicates only started appearing in the last 30 days or so):

Example of the same node, but showing as offline/duplicate:

Status of the agent:

```
sudo systemctl status netdata
● netdata.service - Netdata, X-Ray Vision for your infrastructure!
Loaded: loaded (/lib/systemd/system/netdata.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2025-04-01 06:48:18 ACDT; 20h ago
Process: 3955396 ExecStartPre=/bin/mkdir -p /opt/netdata/var/cache/netdata (code=exited, status=0/SUCCESS)
Process: 3955430 ExecStartPre=/bin/chown -R netdata /opt/netdata/var/cache/netdata (code=exited, status=0/SUCCESS)
Main PID: 3955438 (netdata)
Tasks: 86 (limit: 14219)
Memory: 287.0M
CPU: 52min 31.242s
CGroup: /system.slice/netdata.service
├─ 832025 bash /opt/netdata/usr/libexec/netdata/plugins.d/tc-qos-helper.sh 1
├─3955438 /opt/netdata/bin/srv/netdata -P /run/netdata/netdata.pid -D
├─3955443 "spawn-plugins " " " " " " "
├─3955876 /opt/netdata/usr/libexec/netdata/plugins.d/apps.plugin 1
├─3955877 /opt/netdata/usr/libexec/netdata/plugins.d/go.d.plugin 1
├─3955879 /opt/netdata/usr/libexec/netdata/plugins.d/ebpf.plugin 1
├─3955884 /opt/netdata/usr/libexec/netdata/plugins.d/nfacct.plugin 1
├─3955892 /opt/netdata/usr/libexec/netdata/plugins.d/debugfs.plugin 1
├─3955900 /opt/netdata/usr/libexec/netdata/plugins.d/network-viewer.plugin 1
└─3955905 "spawn-setns " " "

Apr 02 03:02:00 docker-hub ebpf.plugin[3955879]: heartbeat clock: woke up 1275742 microseconds later than expected (can be due to system load or>
Apr 02 03:02:00 docker-hub ebpf.plugin[3955879]: heartbeat clock: woke up 1202197 microseconds later than expected (can be due to system load or>
Apr 02 03:02:00 docker-hub ebpf.plugin[3955879]: heartbeat clock: woke up 1164624 microseconds later than expected (can be due to system load or>
Apr 02 03:02:00 docker-hub ebpf.plugin[3955879]: heartbeat clock: woke up 1152732 microseconds later than expected (can be due to system load or>
Apr 02 03:02:00 docker-hub ebpf.plugin[3955879]: heartbeat clock: woke up 1134120 microseconds later than expected (can be due to system load or>
Apr 02 03:02:00 docker-hub debugfs.plugin[3955892]: heartbeat clock: woke up 1105965 microseconds later than expected (can be due to system load>
Apr 02 03:02:00 docker-hub nfacct.plugin[3955884]: heartbeat clock: woke up 1049852 microseconds later than expected (can be due to system load >
Apr 02 03:02:00 docker-hub apps.plugin[3955876]: heartbeat clock: woke up 1080922 microseconds later than expected (can be due to system load or>
Apr 02 03:31:30 docker-hub cgroup-name.sh[869883]: cgroup 'user.slice_user-1000.slice_user_1000.service_init.scope' is called 'user.slice_user-1>
Apr 02 03:31:30 docker-hub cgroup-name.sh[869887]: cgroup 'user.slice_user-1000.slice_session-2420.scope' is called 'user.slice_user-1000.slice_>
```

Another thing to note is that it seems to only happen to the Linux machines, and the duplicates all appear at the same time. (I have 3 duplicates for each of the Linux machines.)
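As a starting point for debugging, it may help to check the machine GUID each Agent presents to the Cloud: Netdata identifies a node by the GUID stored in `netdata.public.unique.id` under its registry directory, and a node that re-registers under a new GUID shows up as a separate (duplicate) entry. A minimal sketch, assuming a static install under `/opt/netdata` as suggested by the status output above (the path is an assumption; adjust it to match your install):

```shell
#!/bin/sh
# Print the machine GUID that identifies this node to Netdata Cloud.
# Path assumes a static install under /opt/netdata; a native package
# install typically keeps it under /var/lib/netdata instead.
GUID_FILE="/opt/netdata/var/lib/netdata/registry/netdata.public.unique.id"

if [ -r "$GUID_FILE" ]; then
    # The file contains a single GUID, e.g. xxxxxxxx-xxxx-....
    cat "$GUID_FILE"
else
    echo "GUID file not found or not readable at $GUID_FILE" >&2
fi
```

Running this on each node and noting the GUIDs before and after a duplicate appears would show whether the Agent's identity changed, which is useful detail to include in a bug report.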

Your nodes have been affected by a recent Agent bug.

We are working to address this as soon as possible.
Sorry for any inconvenience this may cause.
