Prometheus metric grouping broken?

Problem/Question

I’ve configured Netdata to monitor a Prometheus endpoint with a large number of counters that I’d like to group. Either my config is wrong (likely) or the grouping configuration described in the Prometheus module docs no longer works.

To me it looks like the grouping functionality was removed in this PR:
https://github.com/netdata/go.d.plugin/pull/1004

Relevant docs you followed/actions you took to solve the issue

README describing grouping config

Environment/Browser/Agent’s version etc

Running v1.37.0-89-nightly in a container

What I expected to happen

The prometheus endpoint returns data like the following:

unifipoller_device_cpu_utilization_ratio{name="U6-Lite",site_name="Default (default)",type="uap"} 0.042
unifipoller_device_cpu_utilization_ratio{name="UAP-AC-Pro",site_name="Default (default)",type="uap"} 0.085
unifipoller_device_cpu_utilization_ratio{name="US-8",site_name="Default (default)",type="usw"} 0.563
unifipoller_device_cpu_utilization_ratio{name="USG-Pro-4",site_name="Default (default)",type="ugw"} 0.07
unifipoller_device_cpu_utilization_ratio{name="USW-Lite-16-PoE",site_name="Default (default)",type="usw"} 0.10400000000000001

The module config is as follows:

jobs:
  - name: unifi_exporter
    url: 'http://localhost:9031/metrics'
    selector:
      allow:
        - unifipoller_device_cpu_utilization_ratio
    group:
      - selector: unifipoller_device_cpu_utilization_ratio
        by_label: site_name

What I expect/want is one chart with 5 lines (1 per “name”). What I get is 5 separate charts. Thanks for any help with my config or misunderstanding!
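To make the expected behavior concrete, here is a plain-Python sketch of the grouping the config asks for, run against the sample output above. This only illustrates the intent of `by_label: site_name` (merge all series sharing a `site_name` value into one chart, one line per series); it is not how go.d implements it.

```python
import re
from collections import defaultdict

samples = """\
unifipoller_device_cpu_utilization_ratio{name="U6-Lite",site_name="Default (default)",type="uap"} 0.042
unifipoller_device_cpu_utilization_ratio{name="UAP-AC-Pro",site_name="Default (default)",type="uap"} 0.085
unifipoller_device_cpu_utilization_ratio{name="US-8",site_name="Default (default)",type="usw"} 0.563
unifipoller_device_cpu_utilization_ratio{name="USG-Pro-4",site_name="Default (default)",type="ugw"} 0.07
unifipoller_device_cpu_utilization_ratio{name="USW-Lite-16-PoE",site_name="Default (default)",type="usw"} 0.10400000000000001
""".splitlines()

def parse(line):
    # split "metric{labels} value" into a label dict and a float value
    m = re.match(r'\w+\{(.*)\} (\S+)$', line)
    labels = dict(re.findall(r'(\w+)="([^"]*)"', m.group(1)))
    return labels, float(m.group(2))

# group series by site_name, as "by_label: site_name" would
charts = defaultdict(list)
for line in samples:
    labels, value = parse(line)
    charts[labels["site_name"]].append((labels["name"], value))

print(len(charts))                       # 1 chart
print(len(charts["Default (default)"]))  # 5 lines on it
```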

Hi @crb, welcome! Apologies for the delay here. I will read through this and see if I can help.

@crb are you looking at the chart from within Netdata Cloud or just the agent dashboard?

Just reading the docs myself, and your config looks correct as best I can tell.

@ilyam8 does the config above look correct to you?

I just tried to reproduce this by collecting from the demo node as below, focusing on these metrics:

# HELP node_filesystem_files Filesystem total file nodes.
# TYPE node_filesystem_files gauge
node_filesystem_files{device="/dev/vda1",fstype="ext4",mountpoint="/"} 3.2256e+06
node_filesystem_files{device="/dev/vda15",fstype="vfat",mountpoint="/boot/efi"} 0
node_filesystem_files{device="lxcfs",fstype="fuse.lxcfs",mountpoint="/var/lib/lxcfs"} 0
node_filesystem_files{device="tmpfs",fstype="tmpfs",mountpoint="/run"} 126146
node_filesystem_files{device="tmpfs",fstype="tmpfs",mountpoint="/run/lock"} 126146

So if I group by fstype, I’d expect 4 charts.

jobs:
  - name: node_exporter_demo_fstype
    url: 'https://node.demo.do.prometheus.io/metrics'
    selector:
      allow:
        - node_filesystem_files
    group:
      - selector: node_filesystem_files
        by_label: fstype
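As a sanity check on that expectation, counting the distinct fstype values in the samples above does give four (plain Python, nothing Netdata-specific):

```python
import re

samples = [
    'node_filesystem_files{device="/dev/vda1",fstype="ext4",mountpoint="/"} 3.2256e+06',
    'node_filesystem_files{device="/dev/vda15",fstype="vfat",mountpoint="/boot/efi"} 0',
    'node_filesystem_files{device="lxcfs",fstype="fuse.lxcfs",mountpoint="/var/lib/lxcfs"} 0',
    'node_filesystem_files{device="tmpfs",fstype="tmpfs",mountpoint="/run"} 126146',
    'node_filesystem_files{device="tmpfs",fstype="tmpfs",mountpoint="/run/lock"} 126146',
]

# grouping by fstype should merge the two tmpfs series into one chart
fstypes = {re.search(r'fstype="([^"]*)"', s).group(1) for s in samples}
print(sorted(fstypes))  # ['ext4', 'fuse.lxcfs', 'tmpfs', 'vfat'] -> 4 charts expected
```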

In the charts endpoint of Netdata I see 5, not 4 (in line with the same issue you have, @crb).

@ilyam8 would I be right in expecting these two to be one chart? I.e. I want Netdata to ignore the device and mountpoint labels.

prometheus_node_exporter_demo_fstype.node_filesystem_files-device=tmpfs-fstype=tmpfs-mountpoint=/run
prometheus_node_exporter_demo_fstype.node_filesystem_files-device=tmpfs-fstype=tmpfs-mountpoint=/run/lock
"prometheus_node_exporter_demo_fstype.node_filesystem_files-device=/dev/vda1-fstype=ext4-mountpoint=/": 		{
			"id": "prometheus_node_exporter_demo_fstype.node_filesystem_files-device=/dev/vda1-fstype=ext4-mountpoint=/",
			"name": "prometheus_node_exporter_demo_fstype.node_filesystem_files-device__dev_vda1-fstype_ext4-mountpoint__",
			"type": "prometheus_node_exporter_demo_fstype",
			"family": "node_filesystem",
			"context": "prometheus.node_exporter_demo_fstype.node_filesystem_files",
			"title": "Filesystem total file nodes (prometheus_node_exporter_demo_fstype.node_filesystem_files-device__dev_vda1-fstype_ext4-mountpoint__)",
			"priority": 70000,
			"plugin": "go.d",
			"module": "prometheus",
			"units": "files",
			"data_url": "/api/v1/data?chart=prometheus_node_exporter_demo_fstype.node_filesystem_files-device__dev_vda1-fstype_ext4-mountpoint__",
			"chart_type": "line",
			"duration": 10,
			"first_entry": 0,
			"last_entry": 0,
			"update_every": 10,
			"dimensions": {
				"node_filesystem_files": { "name": "node_filesystem_files" }
			},
			"chart_variables": {
			},
			"green": null,
			"red": null,
			"alarms": {

			},
			"chart_labels": {
				"_collect_plugin":"go.d",
				"_collect_module":"prometheus",
				"device":"/dev/vda1",
				"fstype":"ext4",
				"mountpoint":"/",
				"_collect_job":"node_exporter_demo_fstype"
			},
			"functions": {
			}
		},
		"prometheus_node_exporter_demo_fstype.node_filesystem_files-device=/dev/vda15-fstype=vfat-mountpoint=/boot/efi": 		{
			"id": "prometheus_node_exporter_demo_fstype.node_filesystem_files-device=/dev/vda15-fstype=vfat-mountpoint=/boot/efi",
			"name": "prometheus_node_exporter_demo_fstype.node_filesystem_files-device__dev_vda15-fstype_vfat-mountpoint__boot_efi",
			"type": "prometheus_node_exporter_demo_fstype",
			"family": "node_filesystem",
			"context": "prometheus.node_exporter_demo_fstype.node_filesystem_files",
			"title": "Filesystem total file nodes (prometheus_node_exporter_demo_fstype.node_filesystem_files-device__dev_vda15-fstype_vfat-mountpoint__boot_efi)",
			"priority": 70000,
			"plugin": "go.d",
			"module": "prometheus",
			"units": "files",
			"data_url": "/api/v1/data?chart=prometheus_node_exporter_demo_fstype.node_filesystem_files-device__dev_vda15-fstype_vfat-mountpoint__boot_efi",
			"chart_type": "line",
			"duration": 10,
			"first_entry": 0,
			"last_entry": 0,
			"update_every": 10,
			"dimensions": {
				"node_filesystem_files": { "name": "node_filesystem_files" }
			},
			"chart_variables": {
			},
			"green": null,
			"red": null,
			"alarms": {

			},
			"chart_labels": {
				"_collect_plugin":"go.d",
				"_collect_module":"prometheus",
				"device":"/dev/vda15",
				"fstype":"vfat",
				"mountpoint":"/boot/efi",
				"_collect_job":"node_exporter_demo_fstype"
			},
			"functions": {
			}
		},
		"prometheus_node_exporter_demo_fstype.node_filesystem_files-device=lxcfs-fstype=fuse.lxcfs-mountpoint=/var/lib/lxcfs": 		{
			"id": "prometheus_node_exporter_demo_fstype.node_filesystem_files-device=lxcfs-fstype=fuse.lxcfs-mountpoint=/var/lib/lxcfs",
			"name": "prometheus_node_exporter_demo_fstype.node_filesystem_files-device_lxcfs-fstype_fuse.lxcfs-mountpoint__var_lib_lxcfs",
			"type": "prometheus_node_exporter_demo_fstype",
			"family": "node_filesystem",
			"context": "prometheus.node_exporter_demo_fstype.node_filesystem_files",
			"title": "Filesystem total file nodes (prometheus_node_exporter_demo_fstype.node_filesystem_files-device_lxcfs-fstype_fuse.lxcfs-mountpoint__var_lib_lxcfs)",
			"priority": 70000,
			"plugin": "go.d",
			"module": "prometheus",
			"units": "files",
			"data_url": "/api/v1/data?chart=prometheus_node_exporter_demo_fstype.node_filesystem_files-device_lxcfs-fstype_fuse.lxcfs-mountpoint__var_lib_lxcfs",
			"chart_type": "line",
			"duration": 10,
			"first_entry": 0,
			"last_entry": 0,
			"update_every": 10,
			"dimensions": {
				"node_filesystem_files": { "name": "node_filesystem_files" }
			},
			"chart_variables": {
			},
			"green": null,
			"red": null,
			"alarms": {

			},
			"chart_labels": {
				"_collect_plugin":"go.d",
				"_collect_module":"prometheus",
				"device":"lxcfs",
				"fstype":"fuse.lxcfs",
				"mountpoint":"/var/lib/lxcfs",
				"_collect_job":"node_exporter_demo_fstype"
			},
			"functions": {
			}
		},
		"prometheus_node_exporter_demo_fstype.node_filesystem_files-device=tmpfs-fstype=tmpfs-mountpoint=/run": 		{
			"id": "prometheus_node_exporter_demo_fstype.node_filesystem_files-device=tmpfs-fstype=tmpfs-mountpoint=/run",
			"name": "prometheus_node_exporter_demo_fstype.node_filesystem_files-device_tmpfs-fstype_tmpfs-mountpoint__run",
			"type": "prometheus_node_exporter_demo_fstype",
			"family": "node_filesystem",
			"context": "prometheus.node_exporter_demo_fstype.node_filesystem_files",
			"title": "Filesystem total file nodes (prometheus_node_exporter_demo_fstype.node_filesystem_files-device_tmpfs-fstype_tmpfs-mountpoint__run)",
			"priority": 70000,
			"plugin": "go.d",
			"module": "prometheus",
			"units": "files",
			"data_url": "/api/v1/data?chart=prometheus_node_exporter_demo_fstype.node_filesystem_files-device_tmpfs-fstype_tmpfs-mountpoint__run",
			"chart_type": "line",
			"duration": 10,
			"first_entry": 0,
			"last_entry": 0,
			"update_every": 10,
			"dimensions": {
				"node_filesystem_files": { "name": "node_filesystem_files" }
			},
			"chart_variables": {
			},
			"green": null,
			"red": null,
			"alarms": {

			},
			"chart_labels": {
				"_collect_plugin":"go.d",
				"_collect_module":"prometheus",
				"device":"tmpfs",
				"fstype":"tmpfs",
				"mountpoint":"/run",
				"_collect_job":"node_exporter_demo_fstype"
			},
			"functions": {
			}
		},
		"prometheus_node_exporter_demo_fstype.node_filesystem_files-device=tmpfs-fstype=tmpfs-mountpoint=/run/lock": 		{
			"id": "prometheus_node_exporter_demo_fstype.node_filesystem_files-device=tmpfs-fstype=tmpfs-mountpoint=/run/lock",
			"name": "prometheus_node_exporter_demo_fstype.node_filesystem_files-device_tmpfs-fstype_tmpfs-mountpoint__run_lock",
			"type": "prometheus_node_exporter_demo_fstype",
			"family": "node_filesystem",
			"context": "prometheus.node_exporter_demo_fstype.node_filesystem_files",
			"title": "Filesystem total file nodes (prometheus_node_exporter_demo_fstype.node_filesystem_files-device_tmpfs-fstype_tmpfs-mountpoint__run_lock)",
			"priority": 70000,
			"plugin": "go.d",
			"module": "prometheus",
			"units": "files",
			"data_url": "/api/v1/data?chart=prometheus_node_exporter_demo_fstype.node_filesystem_files-device_tmpfs-fstype_tmpfs-mountpoint__run_lock",
			"chart_type": "line",
			"duration": 10,
			"first_entry": 0,
			"last_entry": 0,
			"update_every": 10,
			"dimensions": {
				"node_filesystem_files": { "name": "node_filesystem_files" }
			},
			"chart_variables": {
			},
			"green": null,
			"red": null,
			"alarms": {

			},
			"chart_labels": {
				"_collect_plugin":"go.d",
				"_collect_module":"prometheus",
				"device":"tmpfs",
				"fstype":"tmpfs",
				"mountpoint":"/run/lock",
				"_collect_job":"node_exporter_demo_fstype"
			},
			"functions": {
			}
		},

Yes, grouping as it was has been removed, because it turned out to be the wrong implementation.

What I expect/want is one chart with 5 lines (1 per “name”).

That is possible when using the Cloud UI (go to the chart and change “group by” to “name”). The local dashboard doesn’t support this at the moment.

Oh yep, that solves my reproduction too. Good to know!