Netdata Community

Any way to aggregate multiple hosts without using Cloud?

Hey there.

Well, probably a stupid question, since it’s clear that Netdata Cloud is proprietary (which I can understand), but…

Is there any way to get similar functionality without using it?
That is, aggregating the metrics of many hosts… maybe with several groups of them… similar to what e.g. Ganglia does (just a bit more modern, as Netdata is).

In some areas, sending any such data (even if it’s just hostnames or an indication of what might run on some nodes) to a 3rd party (even if they promise not to store anything) is simply not an option, or even forbidden by law.


Hello @Philippe_Cerfon welcome to the community!

I had a sort of draft collector I was using myself (on a parent node) that might be useful for what you are asking:
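For reference, the Cloud-free way to get many hosts’ metrics onto one node in the first place is Netdata’s parent/child streaming, which a collector like this sits on top of. A minimal sketch of stream.conf on both sides - the hostname and the API key (any UUID works) below are placeholders, not values from an actual setup:

```ini
# On each child node: /etc/netdata/stream.conf
[stream]
    enabled = yes
    destination = parent.example.com:19999
    api key = 11111111-2222-3333-4444-555555555555

# On the parent node: /etc/netdata/stream.conf
# (the section name is the same API key the children use)
[11111111-2222-3333-4444-555555555555]
    enabled = yes
```

With that in place, the parent holds all the children’s metrics locally, so a collector running there can aggregate across them without anything leaving your network.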

We are working on a place in the netdata/community repo where people can add third-party collectors, making it easier for users to install them without them having to be officially supported by Netdata.

Here is a little bit in the docs about third-party collectors - I want to add a bit more to it and make it clearer and easier to follow on how to install them.

Really, as best I understand it, it’s just a case of copying the collector file into the /usr/libexec/netdata/python.d/ folder on the node where Netdata is running. Likewise for the aggregator.conf file into /etc/netdata/python.d/, editing it as you want.

So something like this could work, I think:

# first install any requirements or dependencies for the third-party collector - check their docs.
# (in this case I don't think there are any, the way I had built it)

# get collector code file
sudo wget "" -P /usr/libexec/netdata/python.d/

# get the conf file, which you can then edit as needed
sudo wget "" -P /etc/netdata/python.d/

# then restart netdata
sudo systemctl restart netdata

Feel free to use this as a starting point and maybe customise it or build out your own collector that adds more of the features you want, etc.
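To make the idea concrete, here is a rough sketch of the kind of aggregation step such a collector might perform - the function name and sample data are mine for illustration, not taken from the actual draft collector:

```python
# Sketch of the aggregation a cross-host collector might do on a parent.
# In a real python.d collector the per-host values would come from the
# parent's local metrics for each child (e.g. via its REST API,
# /api/v1/data?chart=...), not from a hard-coded dict.
from statistics import mean

def aggregate(per_host):
    """Collapse a {host: value} mapping into min/mean/max across hosts."""
    values = list(per_host.values())
    return {
        "min": min(values),
        "mean": mean(values),
        "max": max(values),
    }

# Hypothetical sample: CPU usage (%) reported by three nodes.
cpu = {"node1": 12.0, "node2": 30.0, "node3": 18.0}
print(aggregate(cpu))  # {'min': 12.0, 'mean': 20.0, 'max': 30.0}
```

A real collector would expose these aggregates as its own charts, and could group hosts (per rack, per team, etc.) simply by aggregating over subsets of the mapping.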

I’m going to see if we can do the bit of work needed for a more user-friendly way to handle all this: a place in the netdata/community repo where people can add their collectors, plus really clear instructions for users on how to install and use them if they want.

I will say, though, that the Overview screen in Cloud (on each room) is much better than this collector: you can have different aggregations, it’s much more efficient and dynamic, and more will be built on it.

But for cases where Cloud just might not be an option, something like the aggregator collector I made above (or maybe an improved version of it in Go) could help solve some use cases people might have.

At some point in the future we will have an on-premise, paid version of Netdata Cloud too. It’s going to take a while though.

@andrewm4894 Thanks… I’ll have a look at it.

It seems several companies were able to come up with good and sustainable business models by offering (open-source) community editions alongside enterprise editions with additional features (just take something like GitLab).

Single users, very small companies, but also e.g. little research groups like in my case, might typically look for completely free/open versions anyway.
Be it that funding is a problem for them, or that they have security concerns or strict data-protection rules.

The agent alone might not be useful enough for them, but a community edition of “Netdata Cloud” could be fully enough.
While bigger companies that need enterprise features (like LDAP or AD integration, or maybe sensors for Oracle DB and so on) could use (and pay for :wink: ) an enterprise edition.

An on-premise CE makes sense too, thanks for the great suggestion. We have a very ambitious roadmap already, and we expect to have a sufficiently large free user base in organizations without such concerns to fuel the growth and maturity of the cloud offering into a product with a paid option as well. Of course, if that doesn’t go as we expect, we know that there’s high demand for on-premise as well. But for the immediate future, we are concentrating on adding to the cloud the features we know any such solution would require anyway. Once we prove we can provide the value we believe we can, the sky is the limit!

I hope this makes sense. We will soon publish some blog posts going into a lot of detail about the type of information we do store in the cloud, and we expect that the concerns in most use cases will be addressed. Of course, there will always be organizations where any data leaving the organization is a non-starter, but that’s a much rarer case than the typical ones we generally encounter.