I think we may have run some stress tests on parents to see how much they can handle, but I'm not quite sure we have an easy answer here - I think it's maybe one of those "it depends" ones.
The number of nodes, and the number of charts and dimensions per node, can come into play a lot.
For example, I am streaming about 30 of our production nodes to our own ML master that we use for ML research type stuff, and I have just recently bumped the parent to a bigger machine with about 60GB of memory to get longer history stored for each host.
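For reference, the streaming setup itself is just a couple of `stream.conf` entries. A minimal sketch (the hostname and API key below are made-up placeholders - use your own):

```
# stream.conf on each child node
[stream]
    enabled = yes
    destination = parent.example.com:19999
    api key = 11111111-2222-3333-4444-555555555555

# stream.conf on the parent, accepting that key
[11111111-2222-3333-4444-555555555555]
    enabled = yes
```

The memory/disk sizing for history is then a separate question on the parent's dbengine settings.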
I’ve found the calculator here fairly useful as a rough guide: Change how long Netdata stores metrics | Learn Netdata
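If you just want a quick back-of-envelope number before using the calculator, the rough idea is: total samples per second across all children, times bytes per stored sample, against your disk budget. A sketch with made-up numbers (the ~1 byte/sample figure for compressed tier-0 storage is an assumption - the calculator on the Learn page is more accurate):

```python
# Hypothetical sizing exercise - all numbers are placeholders, adjust for your setup.
metrics_per_node = 3000    # charts x dimensions; varies a lot per node
nodes = 50
update_every = 1           # seconds between samples
bytes_per_sample = 1.0     # assumed average for compressed tier-0 samples
disk_space_gb = 60

samples_per_second = metrics_per_node * nodes / update_every
bytes_per_second = samples_per_second * bytes_per_sample
retention_days = disk_space_gb * 1024**3 / bytes_per_second / 86400
print(f"~{retention_days:.1f} days of tier-0 retention")  # ~5.0 days
```

Doubling the nodes roughly halves the retention for the same disk, which is why scaling up in steps and re-checking is a reasonable approach.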
But tbh I'd say the best approach would be to try some experiments yourself: start with maybe 25 hosts and see how things behave as you scale up from, say, 50 to 100 to 150 to 200, etc. It's a bit iterative I'd say, and somewhat specific to how much data is coming from each host and any network specifics you might have.
@Stelios_Fragkakis @ilyam8 any thoughts?
@hamzash9500 would love to hear how you get on if you try running some experiments to see what works best.