Cannot fetch system.load

I cannot fetch the system.load data via a GET request.

I could fetch it fine during the previous days, but after I reinstalled netdata via the script, I started seeing this error.

I also installed netdata on another server and hit the same issue there.

Any solution for this? Is it a new bug?

Error log:

2022-10-27 11:15:37: netdata ERROR : MAIN : RRDCALC: from config ‘1m_tcp_syn_queue_drops’ on chart ‘ip.tcp_syn_queue’ failed to be added to host ‘petropoulakis’. It already exists.
2022-10-27 11:15:37: netdata ERROR : MAIN : RRDCALC: from config ‘1m_tcp_syn_queue_cookies’ on chart ‘ip.tcp_syn_queue’ failed to be added to host ‘petropoulakis’. It already exists.
2022-10-27 11:15:37: netdata ERROR : MAIN : RRDCALC: from config ‘load_cpu_number’ on chart ‘system.load’ failed to be added to host ‘petropoulakis’. It already exists. (errno 2, No such file or directory)
2022-10-27 11:15:37: netdata ERROR : MAIN : RRDCALC: from config ‘load_average_15’ on chart ‘system.load’ failed to be added to host ‘petropoulakis’. It already exists.
2022-10-27 11:15:37: netdata ERROR : MAIN : RRDCALC: from config ‘load_average_5’ on chart ‘system.load’ failed to be added to host ‘petropoulakis’. It already exists.
2022-10-27 11:15:37: netdata ERROR : MAIN : RRDCALC: from config ‘load_average_1’ on chart ‘system.load’ failed to be added to host ‘petropoulakis’. It already exists

Hi @Panos_Petrop !

When you say you can't fetch the system.load data, what exactly do you mean? Do you make a wget or similar request to the API (/api/v1/data) and it returns nothing?
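For reference, a typical request against the data API looks like the following. The host and port are assumptions (19999 is the default Netdata port), and after=-60 asks for the last 60 seconds of data:

    # Hypothetical host/port; adjust to your own agent.
    curl -s "http://localhost:19999/api/v1/data?chart=system.load&after=-60&before=0&format=json"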

I will check the other error; I'm not sure how it relates. (It basically means that an alert with that name is already configured.)

Hi, @Manolis_Vasilakis

Yes, with an API request, and it returns 404. I have not changed anything in my code; I only reinstalled netdata, but I cannot read system.load. My code can still receive other metrics, like system.cpu.

Hi, @Panos_Petrop :wave:

I can not fetch via a get request the system.load data.
I have not changed anything in my code

Can you show the exact command or code fragment you use to make the request?

Hi @ilyam8,

    import sys

    import requests

    url = f"http://{host}/api/v1/data?chart=system.load&after={after}&before={before}&format=json"
    try:
        r = requests.get(url, verify=True)
        if r.status_code != 200:
            print("Cannot fetch load")
            sys.exit(1)
    except requests.exceptions.RequestException as e:
        raise Exception("\n\nCannot fetch load\n\n" + str(e))

*after=-1, before=0

Sometimes I can fetch system.load, and sometimes not. If I run my script ~10 times, most of the time the GET request fails and I receive no data for this metric. All the other metrics never fail, e.g., system.cpu, system.ram, disk_space._. This happens only with system.load, and it started recently. I only reinstalled netdata via the script and did not change my code. Note that I reinstalled netdata multiple times, cleared all the files, etc. As I said, I also installed netdata from scratch on an external server, and the bug happens there too when I run my script.

Try after=-6&before=0. The system.load chart's update frequency is 5 seconds. I will check whether using after <= update_every is a bug or not.
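In other words, the requested window has to be at least as wide as the chart's update_every, otherwise it can fall between two collected points. A minimal sketch of building the adjusted URL (the host value and the `build_load_url` helper are illustrative, not part of the original code):

    def build_load_url(host: str, update_every: int = 5) -> str:
        """Build a /api/v1/data URL whose time window covers at least
        one collected point of system.load (collected every 5s by default)."""
        # Reach back one full collection interval plus one second,
        # hence -(update_every + 1) instead of -1.
        after = -(update_every + 1)
        return (f"http://{host}/api/v1/data"
                f"?chart=system.load&after={after}&before=0&format=json")

    url = build_load_url("localhost:19999")
    # r = requests.get(url, timeout=10)  # run this against a live agent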


@ilyam8 appreciate your help!

Can this frequency be lowered via the config file?

In addition, what is the update frequency of “prometheus_gitlab_runner_exporter_local.gitlab_runner_jobs”? I experience the same thing with it.

Note: to enable this metric, I just start the GitLab runner with gitlab-runner run --listen-address localhost:9252, and then restart netdata.

Not for system.load. This metric comes from /proc/loadavg. The kernel calculates this once every 5 seconds, so 5 seconds is the minimum update_every for this metric.
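For context, /proc/loadavg is a single line such as 0.42 0.30 0.25 1/123 4567, where the first three fields are the 1-, 5-, and 15-minute load averages. A minimal parsing sketch (this is illustrative, not Netdata's actual collector code):

    def parse_loadavg(line: str) -> dict:
        """Parse a /proc/loadavg line into its three load-average fields."""
        fields = line.split()
        return {
            "load1": float(fields[0]),
            "load5": float(fields[1]),
            "load15": float(fields[2]),
        }

    # On Linux you could read the live values:
    # with open("/proc/loadavg") as f:
    #     print(parse_loadavg(f.read()))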

In addition, what is the update frequency of “prometheus_gitlab_runner_exporter_local.gitlab_runner_jobs”? I experience the same thing with it.

The default update frequency is 5 seconds.

This is a bug.
Fixed it at timeframe matching should take into account the update frequency of the chart by ktsaou · Pull Request #13911 · netdata/netdata · GitHub

Thank you for reporting this @Panos_Petrop !
You rock!

Thanks for the great support netdata team!
