Netdata Community

Netdata consuming a high amount of RAM

Good news! I think ebpf is the plugin causing the memory issues.

I went through the docs and disabled each of these plugins one at a time:

#proc = no
#diskspace = no
#cgroups = no
#tc = no
#idlejitter = no
#fping = no
#ioping = no
#node.d = no
#python.d = no
#go.d = no
#apps = no
#perf = no
#charts.d = no
ebpf = no
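
For context (my addition, assuming a default install layout): these toggles live in the `[plugins]` section of `netdata.conf`, and an uncommented `name = no` line disables that collector. A fragment along these lines:

```ini
# /etc/netdata/netdata.conf (path may differ on your install)
[plugins]
    ebpf = no
```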

Each time, the memory would jump back up, but I noticed that the memory usage was flapping between too high and normal, as shown below. In htop I saw that it was the ebpf plugin starting and stopping at the same time the memory usage was jumping. After disabling that plugin, memory has stayed at a normal level.
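For anyone repeating this kind of bisection, a small helper like the one below (my own sketch, not from this thread; `rss_mb` is a hypothetical name) can track the resident memory of matching processes between plugin toggles:

```shell
# Sketch: sum the RSS of every process whose command name matches
# the given pattern and print the total in MB.
rss_mb() {
  ps -eo rss,comm | awk -v name="$1" '
    $2 ~ name { sum += $1 }          # RSS column is in KB
    END { printf "%.1f\n", sum / 1024 }
  '
}

# Example: total RSS of netdata and its plugins (prints 0.0 if none run).
rss_mb netdata
```

Running this before and after each `= no` toggle (or under `watch -n 5`) makes the flapping visible without the dashboard.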


Yeah, ebpf was my suspect after checking your snapshot. I noticed that from time to time its CPU usage spikes to 95% (Applications). At the same time I saw a lot of do_fork calls (other dimensions) on the process (eBPF) chart.

We did some tests and discovered several problems with ebpf.plugin.

It contributes a lot to the kernel memory usage, indeed. That is why we see nothing in the ps output: it is not memory used by the processes themselves.

The issue in the netdata repo: high kernel memory usage when using ebpf plugin · Issue #10949 · netdata/netdata · GitHub

@haydenseitz ebpf.plugin crashes on your server; those are the periods when memory usage drops.

Do you have the cachestat collector enabled in ebpf.d.conf?
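
For reference, and as an assumption about the config layout on that version (section and key names may differ between releases), enabling cachestat looks roughly like this in `ebpf.d.conf`:

```ini
# /etc/netdata/ebpf.d.conf (illustrative fragment)
[ebpf programs]
    cachestat = yes
```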

We could make ebpf.plugin crash under the following conditions:

  • cachestat collector enabled (ebpf.d.conf)
  • spawn processes every second (`while true; do sleep 1; for i in $(seq 1 200); do sleep 1 & done; done`)
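
The repro one-liner above runs forever; a bounded variant of the same idea (my own sketch, with smaller counts) spawns bursts of short-lived processes so the effect on ebpf.plugin's fork tracking can be observed for a fixed time:

```shell
# Bounded sketch of the repro: spawn BURSTS bursts of PER_BURST
# short-lived processes, reaping each burst before the next.
BURSTS=2
PER_BURST=100
for run in $(seq 1 "$BURSTS"); do
  for i in $(seq 1 "$PER_BURST"); do
    sleep 1 &
  done
  wait  # let this burst finish before starting the next one
done
total=$((BURSTS * PER_BURST))
echo "spawned $total short-lived processes"
```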

Thanks a lot @haydenseitz for your effort in identifying this. Without your analysis, it would have taken us considerably longer to root-cause it.

Very nice job! :heart:


I’ve not changed any settings related to ebpf (aside from disabling it on most systems), so I would assume cachestat was at its default state.

  • we added a Percpu dimension to the mem.kernel chart (Memory -> kernel -> Memory Used by Kernel). ebpf.plugin needs this memory to create its hash tables [PR]

> Memory allocated to the percpu allocator used to back percpu allocations. This stat excludes the cost of metadata.

  • and we significantly decreased the amount of Percpu memory used by default (103 MB => 29.8 MB on my server) [PR]
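
To see this on your own box (my addition; the `Percpu:` field requires Linux 4.18 or newer), you can read the percpu allocator usage straight from `/proc/meminfo`:

```shell
# Print the kernel percpu allocator usage; this is the pool the new
# Percpu dimension on the mem.kernel chart reports (value is in KB).
grep '^Percpu:' /proc/meminfo
```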

I consider this case of Netdata consuming a high amount of RAM fixed.

:exclamation: Keep in mind that those changes are not in the latest stable (v1.30.1), but in the master branch.


@Lcarrillo @haydenseitz thanks for highlighting the problem and helping us to resolve it.

@Thiago_Marques_0 thanks for the fix :wink:
