Minimum memory usage for netdata?

I'm running netdata on 1GB VMs and I noticed that it uses 100MB of resident memory, which is 10% of the RAM. I tried to limit it with this configuration:

memory mode=dbengine
dbengine multihost disk space=512
page cache size=8

but that doesn’t seem to make a difference.

What is the least amount of resident memory that netdata can use?

Hmm @Stelios_Fragkakis, any idea on this one? Is a page cache size of 8 OK?

With dbengine, the memory usage will be fairly high. To lower it you'd want to reduce the dbengine multihost disk space a lot, but that also reduces retention. See the calculator.
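As a rough sketch (64 is just an example value, and I'm assuming your netdata version keeps these options under the [global] section of netdata.conf), dropping the disk space would look like this:

[global]
    memory mode = dbengine
    dbengine multihost disk space = 64

Restart netdata afterwards for the change to take effect, and check the calculator to see how much retention that actually buys you.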

If you have a lot of 1GB VMs, I suggest setting very low retention on them and using the streaming protocol to replicate their data to a beefy central node. The same calculator will tell you what the requirements would be for that central node, based on the number of children, the metrics they collect and the retention you want. If the total is too high, you can have several different parents and connect different parts of your infra to each of them.
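To sketch what that looks like (placeholder parent address and API key, default port 19999), each small VM would get a stream.conf along these lines:

[stream]
    enabled = yes
    destination = PARENT_IP:19999
    api key = 11111111-2222-3333-4444-555555555555

and the central parent would accept that key in its own stream.conf:

[11111111-2222-3333-4444-555555555555]
    enabled = yes

Any UUID works as the api key (e.g. generated with uuidgen). With this in place the children can keep a tiny local retention while the parent holds the full history.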


I tried it out and I’m very surprised at the memory and disk use for the central node. Storing 14 days of data for 10 nodes means 12.5GB disk and 3.2GB memory!

I can only store 3 days of data if I want the central node to fit in a 1GB VM.

Is there no way to improve this? Also, I'm not interested in very granular data after a few days. Is it not possible to prune the data so that, after a few days, only one point every 30s is kept?

  • We’re actively working on a change to reduce the memory usage, it’s one of our top priorities.
  • Tiering (lower granularity as time goes by) is also high on the list.

Until these things happen, I strongly suggest using streaming/replication, as I said in the previous comment. It's a good idea anyway, since it gives you high availability for your metrics.
