Netdata alarm notifications not stopping even after updating "to: " field from sysadmin to silent

Environment

Netdata docker container v1.28.0

Problem

I am trying to disable some alarm notifications that I keep receiving because the behaviour they report is expected in my environment, e.g.
10min_cpu_usage.
I updated the file using edit-config and set "to: silent" for the 10min_cpu_usage alarm.

Contents of /etc/netdata/health.d/cpu.conf:

# you can disable an alarm notification by setting the 'to' line to: silent

 template: 10min_cpu_usage
       on: system.cpu
       os: linux
    hosts: *
   lookup: average -10m unaligned of user,system,softirq,irq,guest
    units: %
    every: 1m
     warn: $this > (($status >= $WARNING)  ? (75) : (85))
     crit: $this > (($status == $CRITICAL) ? (85) : (95))
    delay: down 15m multiplier 1.5 max 1h
     info: average cpu utilization for the last 10 minutes (excluding iowait, nice and steal)
       to: silent

 template: 10min_cpu_iowait
       on: system.cpu
       os: linux
    hosts: *
   lookup: average -10m unaligned of iowait
    units: %
    every: 1m
     warn: $this > (($status >= $WARNING)  ? (20) : (40))
     crit: $this > (($status == $CRITICAL) ? (40) : (50))
    delay: down 15m multiplier 1.5 max 1h
     info: average CPU wait I/O for the last 10 minutes
       to: sysadmin

 template: 20min_steal_cpu
       on: system.cpu
       os: linux
    hosts: *
   lookup: average -20m unaligned of steal
    units: %
    every: 5m
     warn: $this > (($status >= $WARNING)  ? (5)  : (10))
     crit: $this > (($status == $CRITICAL) ? (20) : (30))
    delay: down 1h multiplier 1.5 max 2h
     info: average CPU steal time for the last 20 minutes
       to: sysadmin

## FreeBSD

 template: 10min_cpu_usage
       on: system.cpu
       os: freebsd
    hosts: *
   lookup: average -10m unaligned of user,system,interrupt
    units: %
    every: 1m
     warn: $this > (($status >= $WARNING)  ? (75) : (85))
     crit: $this > (($status == $CRITICAL) ? (85) : (95))
    delay: down 15m multiplier 1.5 max 1h
     info: average cpu utilization for the last 10 minutes (excluding nice)
       to: sysadmin
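As an aside, a naive line-based parse is enough to double-check which recipient each template in a file like the one above resolves to. A minimal Python sketch (illustrative only, not part of Netdata; it only handles the "template:" and "to:" lines):

```python
def recipients(conf_text):
    """Map each 'template:' name to the value of its 'to:' line."""
    result = {}
    current = None
    for raw in conf_text.splitlines():
        line = raw.strip()
        if line.startswith("template:"):
            current = line.split(":", 1)[1].strip()
        elif line.startswith("to:") and current:
            result[current] = line.split(":", 1)[1].strip()
    return result

# A shortened sample in the same shape as health.d/cpu.conf
sample = """
 template: 10min_cpu_usage
       on: system.cpu
       to: silent

 template: 10min_cpu_iowait
       on: system.cpu
       to: sysadmin
"""

print(recipients(sample))
# {'10min_cpu_usage': 'silent', '10min_cpu_iowait': 'sysadmin'}
```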

What I expected to happen

I expected the alarm notification mails for CPU to stop.

What is happening

I am still getting mails for this alarm.


Hi @rahul.sinha

You didn’t mention in the OP - have you restarted the container after the changes?

Hi @ilyam8, sorry, I missed mentioning that. Yes, after making the changes I ran netdatacli reload-health followed by docker restart netdata_container.


I’m seeing exactly the same thing. I’m not using a Docker container though, just running Netdata on vanilla Debian. I definitely restarted Netdata after editing the config.

What appears to be common between @rahul.sinha and me is that we are both using Netdata Cloud. The screenshot looks like the Netdata Cloud email template rather than the standard Netdata one. Is it possible that Cloud is not handling this "to:" field in the config properly?

@Daniel indeed!

@Leonidas_Vrachnis could you help?

"to: silent" is the way to silence individual alarms; it seems like Netdata Cloud doesn't respect it.

Hi all, any help here please? I'm really stuck on this. Appreciated.

I’m seeing the same thing and am resorting to setting the thresholds to unrealistically high values.
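For reference, the workaround described above amounts to overriding the alarm thresholds with values the metric can never reach. A minimal illustrative sketch for health.d/cpu.conf (the 200 is arbitrary; since the chart is a percentage, these conditions can never fire):

```
 template: 10min_cpu_usage
       on: system.cpu
     warn: $this > 200   # a percentage never exceeds 100, so this never triggers
     crit: $this > 200
```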


Indeed, the Cloud doesn't respect this config (in fact, it is not even aware of it). This seems like an important feature that we are missing; I will make sure product is aware.


@rahul.sinha @Daniel Currently the Cloud does not respect the "to: silent" configuration; we will improve this in the upcoming weeks.

Is there any ETA for this to be fixed in Netdata Cloud?

That’s a good idea as a temporary workaround. My workaround was adding an email filter that drops the emails.


@Manos_Saratsis is there any ETA for this improvement? We're eager to roll Netdata out across our fleet, but can't while we're getting flooded with alerts in our testing.


Hey @Ryan_S_Di_Francesco,

Thanks for circling back to this. Manos is currently OOF, but let me check with the product team and we will get back to you!

Hopefully, we will have it fixed by the end of the next week :crossed_fingers:

We have deployed the aforementioned enhancement.

Feel free to restart your agents and let us know if you have stopped receiving emails when the alert is configured with to: silent.


Thanks @harris for circling back so rapidly. Awesome!