I have a question: does Netdata work with AWS ECS Fargate?
I am creating a server with Fargate, but the Netdata container defined in my container definition is constantly being stopped.
Is there no way to configure Netdata for AWS ECS Fargate, or am I doing something wrong?
Theoretically it should just work if you use our Docker installation, but I expect there are some settings we have by default that would not work in a serverless environment:
- we ask for read access to the host's /proc and /sys
- we mount directories where we persist data and config
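For reference, this is roughly what the standard Docker invocation from the Netdata install docs looks like; the host `/proc` and `/sys` bind mounts and the persistent volumes are exactly the parts a Fargate task can't provide (sketch only; check the current Netdata Docker docs for the authoritative flag list):

```shell
# Typical Netdata Docker run (per the standard Docker install docs).
# On Fargate the /proc and /sys host mounts are unavailable, and the
# named volumes need an ECS-compatible replacement or must be dropped.
docker run -d --name=netdata \
  -p 19999:19999 \
  -v netdataconfig:/etc/netdata \
  -v netdatalib:/var/lib/netdata \
  -v netdatacache:/var/cache/netdata \
  -v /proc:/host/proc:ro \
  -v /sys:/host/sys:ro \
  --cap-add SYS_PTRACE \
  --security-opt apparmor=unconfined \
  netdata/netdata
```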
It’s possible to configure the installation in a way that doesn’t require any of that, but I think we need to take a step back. You mentioned a “server” which confused me a bit. Isn’t ECS supposed to be serverless? Just docker containers or k8s on top of servers you know nothing about?
Amazon offers a separate service (EKS) for Kubernetes. ECS is for orchestration of containers that run on your own EC2 instances (virtual or bare metal), but the 'Fargate' launch type within ECS is serverless, as you thought (i.e. Amazon manages the underlying servers on your behalf).
I suspect that @s.oezkan was using the term ‘server’ in the broad sense, e.g. the container will ultimately run, say, a web server application.
I too am evaluating a solution based on Fargate containers, and I’d very much like to use Netdata, though I’m new to both so I’d appreciate being pointed in the right direction regarding installation/configuration.
I’m confused about the idea of using Netdata’s regular Docker container, because my ultimate aim is to monitor the behavior/performance of our Node.js app’s containers, not the underlying host(s). Surely I need to have Netdata running inside each of my own Node.js containers? Can you ‘compose’ two Docker images into one? Or should I just copy the Netdata binary into our own Dockerfile and somehow run both processes when the container is launched?
I miss physical servers : (
Any help would be greatly appreciated!
We just had a discussion about this last week and @sashwathn is actually investigating exactly such a use case.
Some background first:
Everything that happens inside a Docker container is in fact happening on the host system; e.g. you can see the processes running under Docker when you run ps on the host. A Netdata running on the host (containerized or not) has access to all that information. But the only info we currently split out into separate charts under the container's name (or ID, if the permissions are missing) is CPU, memory, and network interfaces. You can of course configure collectors to e.g. httpcheck an endpoint provided by a Node.js service on your container, but that will just appear as another section at the global level, not under the container.
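To illustrate the collector option mentioned above, a go.d httpcheck job might look something like this (the endpoint name and URL are made up for the example; see the go.d.plugin httpcheck documentation for the full option set):

```yaml
# /etc/netdata/go.d/httpcheck.conf -- illustrative; URL is hypothetical
jobs:
  - name: node_app
    url: http://my-node-app:3000/health
    status_accepted:
      - 200
```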
One of the ideas we discussed last week was to create new Applications charts (based on proc) for each container and show them in each container’s section. But what about a collector like web_log that needs to read a log file, or something that may need to execute a command right on the container’s shell to get its data? That’s where this idea of two things running on the same container comes from. However, the official docker documentation says it’s possible, but kind of warns us against doing something like that.
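For completeness, the "two things running on the same container" approach that the Docker documentation describes (and cautions against) usually comes down to a wrapper entrypoint. A minimal sketch, assuming Netdata has already been installed into the application image (the paths and app command are hypothetical):

```shell
#!/bin/sh
# entrypoint.sh -- sketch only; assumes Netdata was installed into the
# app image during the Docker build (paths below are hypothetical).

# Start Netdata in the background (-D keeps it from daemonizing twice).
/opt/netdata/bin/netdata -D &

# Run the main application in the foreground via exec, so the container's
# lifecycle (and PID 1 signal handling) follows the app, not the monitor.
exec node /app/server.js
```

Note the trade-off the Docker docs warn about: if the background Netdata process dies, nothing restarts it, which is one reason a sidecar container is usually preferred.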
I’m sure @Austin_Hemmelgarn will be able to offer even more valuable insights than mine.
I believe it's possible to do such things even without a sidecar container.