How to enable Postfix monitoring when using dockerized Netdata?

I’m running v1.36.1. I can’t get a dockerized Netdata install to query Postfix data. The docs state:

The collector executes postqueue -p to get Postfix queue statistics.

This assumes the postqueue command is available in the Docker instance, but it is not. I’m getting these errors in the logs:

python.d ERROR: postfix[local] : Can't locate "postqueue -p" binary
python.d INFO: plugin[main] : postfix[local] : check failed

From my host machine I’m able to run:

docker exec postfix-container postqueue -p

And I can see the queue.

Relevant docs you followed/actions you took to solve the issue

Environment/Browser/Agent’s version etc

Netdata v1.36.1 (Docker install without any proxy or special group access)

What I expected to happen

Well, the result is the expected one: the binary is not there, so it shouldn’t be found :stuck_out_tongue:. My understanding of sockets and proxies is a bit limited. I’ve read these docs and they seem related to my issue, but any additional guidance or confirmation of the next steps would be appreciated.

Another approach I was thinking of was mounting/binding an executable shell script into the Netdata container named /bin/postqueue; the script would SSH to the Postfix container and run the same command. That way I could trick Netdata into believing Postfix is running in the Netdata container. But this sounds like I’m overcomplicating things…
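For the record, the wrapper idea would look something like this. This is only a sketch: the container name `postfix-container` and the `netdata` SSH user are hypothetical, and as the reply below points out, this kind of hack is not recommended.

```shell
# Sketch: create a wrapper script that would be bind-mounted into the
# Netdata container as /bin/postqueue. All names here are placeholders.
cat > ./postqueue-wrapper <<'EOF'
#!/bin/sh
# Forward all arguments (e.g. -p) to the real postqueue running in the
# Postfix container, over SSH. "netdata@postfix-container" is an assumption.
exec ssh -o BatchMode=yes netdata@postfix-container postqueue "$@"
EOF
chmod +x ./postqueue-wrapper
```

You would then mount it with something like `-v $PWD/postqueue-wrapper:/bin/postqueue:ro` on the Netdata container, plus an SSH key, which is exactly the kind of cross-container plumbing that makes this approach fragile.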

Even if it’s technically possible to make such a thing work, it goes against the whole purpose of containerization. Would you be willing to try and see what our generic Prometheus collector gets you if you install a Postfix Prometheus exporter on your Postfix image? We still have improvements to make on that collector, and you won’t have out-of-the-box alerts to work with, but you should be able to at least see the basic metrics without hacking your infrastructure.

Yes, I know I suggested a dirty approach :sweat_smile:

I didn’t know about the prometheus exporter. I guess you’re talking about this one: GitHub - kumina/postfix_exporter: A Prometheus exporter for Postfix.

I think I’ve managed to set it up. So this sets up a web server and I’m getting some results (partial example):

# HELP go_gc_duration_seconds A summary of the pause duration of garbage collection cycles.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 3.7714e-05
go_gc_duration_seconds{quantile="0.25"} 5.0565e-05
go_gc_duration_seconds{quantile="0.5"} 6.3988e-05
go_gc_duration_seconds{quantile="0.75"} 7.4942e-05
go_gc_duration_seconds{quantile="1"} 0.000141084
go_gc_duration_seconds_sum 0.011942519
go_gc_duration_seconds_count 182
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 10

Based on what you’re saying, I understand that the collector you’re talking about essentially consumes this information and provides some basic metrics. That should be a good starting point. Of course, alerts would be a strong plus, but if that’s something planned for a future release, I’m happy to wait for it :slight_smile:
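To make “consumes this information” concrete: the Prometheus text format above is just lines of `metric_name value`, which a scraper turns into time series. As a toy illustration (not how Netdata’s collector is implemented), one metric from the sample can be pulled out with awk:

```shell
# Save a fragment of the exporter output shown above.
cat > metrics.txt <<'EOF'
# HELP go_goroutines Number of goroutines that currently exist.
# TYPE go_goroutines gauge
go_goroutines 10
EOF

# A scraper skips the # HELP/# TYPE comment lines and reads name/value pairs.
awk '$1 == "go_goroutines" { print $2 }' metrics.txt
```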

Could you provide some instructions on how to set the collector up?

Thanks in advance.

See Prometheus endpoint monitoring with Netdata | Learn Netdata; it has everything you need to get started. You don’t need to wait for us for the alerts: you can Configure health alarms | Learn Netdata yourself. It’s not the most straightforward configuration, but the predefined alerts we already have under netdata/health/health.d at master · netdata/netdata · GitHub help a lot. You can also see and edit those preconfigured alert files in your local installation; just run ./edit-config in your config directory to see a list of everything you can configure.
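As a starting point, a job for the generic Prometheus collector might look like the sketch below, assuming the exporter is reachable from the Netdata container at `postfix-host:9154` (the hostname is a placeholder; 9154 is the exporter’s usual default port, so verify it against your setup):

```yaml
# go.d/prometheus.conf — open it with: ./edit-config go.d/prometheus.conf
jobs:
  - name: postfix_exporter
    url: http://postfix-host:9154/metrics
```

After saving, restart the Netdata container and the exporter’s metrics should appear under the Prometheus section of the dashboard.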